Background: I've got a bunch of strings that I'm getting from a database, and I want to return them. Traditionally, it would be something like this:
public List<string> GetStuff(string connectionString)
{
    List<string> categoryList = new List<string>();
    using (SqlConnection sqlConnection = new SqlConnection(connectionString))
    {
        string commandText = "GetStuff";
        using (SqlCommand sqlCommand = new SqlCommand(commandText, sqlConnection))
        {
            sqlCommand.CommandType = CommandType.StoredProcedure;
            sqlConnection.Open();
            SqlDataReader sqlDataReader = sqlCommand.ExecuteReader();
            while (sqlDataReader.Read())
            {
                categoryList.Add(sqlDataReader["myImportantColumn"].ToString());
            }
        }
    }
    return categoryList;
}
But then I figure the consumer is just going to want to iterate through the items and doesn't care about much else, and I'd rather not box myself into returning a List specifically, so returning an IEnumerable keeps things flexible. So I was thinking I could use a "yield return"-style design to handle this...something like this:
public IEnumerable<string> GetStuff(string connectionString)
{
    using (SqlConnection sqlConnection = new SqlConnection(connectionString))
    {
        string commandText = "GetStuff";
        using (SqlCommand sqlCommand = new SqlCommand(commandText, sqlConnection))
        {
            sqlCommand.CommandType = CommandType.StoredProcedure;
            sqlConnection.Open();
            SqlDataReader sqlDataReader = sqlCommand.ExecuteReader();
            while (sqlDataReader.Read())
            {
                yield return sqlDataReader["myImportantColumn"].ToString();
            }
        }
    }
}
But now that I'm reading a bit more about yield (on sites like this...MSDN didn't seem to mention this), it's apparently lazily evaluated: it keeps the state of the iterator around in anticipation of someone asking for the next value, and then only runs until it yields that next value.
This seems fine in most cases, but with a DB call it sounds a bit dicey. As a somewhat contrived example, if someone asks for an IEnumerable that I'm populating from a DB call, gets through half of it, and then gets stuck in a loop...as far as I can see, my DB connection is going to stay open forever.
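To make the laziness concrete, here is a minimal stand-alone sketch (Resource and GetNumbers are hypothetical stand-ins for the SqlConnection and the DB-backed iterator above). The using block inside an iterator body only runs when the enumerator finishes or is disposed, so a caller that walks part-way through and never disposes the enumerator leaves the resource held:

using System;
using System.Collections.Generic;

class LazyDemo
{
    // Hypothetical stand-in for the DB-backed iterator: the IDisposable here
    // plays the role of the SqlConnection.
    static IEnumerable<int> GetNumbers(IDisposable resource)
    {
        using (resource)
        {
            for (int i = 0; i < 10; i++)
            {
                yield return i; // nothing runs until the caller asks for a value
            }
        } // Dispose runs here only when iteration finishes or the enumerator is disposed
    }

    class Resource : IDisposable
    {
        public void Dispose()
        {
            Console.WriteLine("Resource disposed");
        }
    }

    static void Main()
    {
        // foreach disposes the enumerator even if we break early,
        // so the resource is released as soon as we stop iterating.
        foreach (int n in GetNumbers(new Resource()))
        {
            if (n == 3) break;
        }

        // Manually walking the enumerator and never calling Dispose
        // holds the resource indefinitely - the "connection stays open" case.
        var e = GetNumbers(new Resource()).GetEnumerator();
        e.MoveNext();
        e.MoveNext();
        // ...no Dispose(), so no "Resource disposed" message for this one.
    }
}

In the question's scenario the SqlConnection's using block behaves the same way: the connection is closed only when the consumer finishes iterating or the enumerator is disposed (which foreach does automatically), so a consumer that holds the enumerator without finishing or disposing it really does keep the connection open.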
Sounds like asking for trouble in some cases if the iterator doesn't finish...am I missing something?
It's a balancing act: do you want to force all the data into memory immediately so you can free up the connection, or do you want to benefit from streaming the data, at the cost of tying up the connection for all that time?
The way I look at it, that decision should potentially be up to the caller, who knows more about what they want to do. If you write the code using an iterator block, the caller can very easily turn that streaming form into a fully-buffered form:
List<string> stuff = new List<string>(GetStuff(connectionString));
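Or, equivalently, if LINQ is available, the standard Enumerable.ToList extension method does the same buffering:

List<string> stuff = GetStuff(connectionString).ToList();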
If, on the other hand, you do the buffering yourself, there's no way the caller can go back to a streaming model.
So I'd probably use the streaming model and say explicitly in the documentation what it does, and advise the caller to decide appropriately. You might even want to provide a helper method to basically call the streamed version and convert it into a list.
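A sketch of that kind of helper, assuming the streaming GetStuff from the question (the name GetStuffList is just illustrative):

// Convenience wrapper for callers who just want everything up front:
// it fully drains the streaming iterator, so the connection is opened,
// read to the end, and closed before this method returns.
public List<string> GetStuffList(string connectionString)
{
    return new List<string>(GetStuff(connectionString));
}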
Of course, if you don't trust your callers to make the appropriate decision, and you have good reason to believe that they'll never really want to stream the data (e.g. it's never going to return much anyway), then go for the list approach. Either way, document it - it could very well affect how the return value is used.
Another option for dealing with large amounts of data is to fetch it in batches, of course - that's moving somewhat away from the original question, but it's a different approach to consider in situations where streaming would normally be attractive.
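For what it's worth, here's a rough sketch of what that batching approach might look like, assuming SQL Server 2012+ (for OFFSET/FETCH) and made-up table/column names. The idea is that each batch is buffered while the connection is open, and the connection is closed again before the batch is handed to the caller, so a slow consumer never holds the connection:

public IEnumerable<string> GetStuffInBatches(string connectionString, int batchSize)
{
    int offset = 0;
    while (true)
    {
        List<string> batch = new List<string>();
        using (SqlConnection sqlConnection = new SqlConnection(connectionString))
        using (SqlCommand sqlCommand = new SqlCommand(
            "SELECT myImportantColumn FROM MyImportantTable " +
            "ORDER BY myImportantColumn " +
            "OFFSET @offset ROWS FETCH NEXT @batchSize ROWS ONLY", sqlConnection))
        {
            sqlCommand.Parameters.AddWithValue("@offset", offset);
            sqlCommand.Parameters.AddWithValue("@batchSize", batchSize);
            sqlConnection.Open();
            using (SqlDataReader reader = sqlCommand.ExecuteReader())
            {
                while (reader.Read())
                {
                    batch.Add(reader["myImportantColumn"].ToString());
                }
            }
        } // connection closed here, before we start yielding this batch

        if (batch.Count == 0)
        {
            yield break; // no more rows
        }

        foreach (string item in batch)
        {
            yield return item;
        }

        offset += batch.Count;
    }
}

Note that paging like this across separate connections isn't transactionally consistent - if the underlying data changes between batches, rows can be skipped or repeated - so it suits relatively static data or cases where approximate results are acceptable.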