Is it a bad practice to open a new context or send one as a parameter into sub-methods?

A colleague asked for my opinion regarding the following setup. It's based on declaring a context in a method and then passing it into the called sub-methods. Schematically, it looks like this.

public void SuperMethod()
{
  using (Context context = new Context())
  {
    ...
    SetMethod(context, ...);
    ...
    GetMethod(context, ...);
    ...
  }
}

public void SetMethod(Context context, ...) { ... }

public Some GetMethod(Context context, ...) { ... }

I advised him against it, basing my answer on the idea that access to the database should be opened and closed as close to the actual operations as possible. But now that I think about it, I'm not sure that was the best advice in all circumstances.

Question 1: Was my suggestion correct in the general case, or should I reconsider it?

I also noticed that the super-method calling the sub-methods used the context itself. My suggestion was to move the part that talked to the database into a new sub-method, thereby freeing the super-method from any references to the context. It made sense to me to turn the super-method into a controller, while performing all the database-related operations in the workers. Roughly, the split I suggested looks like this (Customer and the worker names are made up for illustration):
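
public void SuperMethod(Customer customer, int customerId)
{
    // Controller: orchestrates the work, no reference to the context here.
    SaveCustomer(customer);
    Customer loaded = LoadCustomer(customerId);
}

public void SaveCustomer(Customer customer)
{
    // Worker: all database work stays inside, with a short-lived context.
    using (Context context = new Context())
    {
        context.Customers.Add(customer);
        context.SaveChanges();
    }
}

public Customer LoadCustomer(int id)
{
    using (Context context = new Context())
    {
        return context.Customers.Find(id);
    }
}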

Question 2: Does it make sense to have a controlling method that calls a (possibly large) number of sub-methods to carry out the actual work?

Please note that both questions are related to the usage of the context while working with Entity Framework and not a general class structure.

asked Oct 17 '15 by Konrad Viltersten

2 Answers

IMO, the context should be opened and disposed with every unit of work (so, for example, within a single function) - which doesn't mean you shouldn't pass your context to underlying functions. That can be exactly what you want, especially considering connection pooling and context entry lifetime.

There are a few pretty simple reasons for this:

  1. It's pretty cheap to open a new context; it takes almost no time compared to the main performance costs in EF, like materializing values (database rows to objects and vice versa) and building the queries - and those two have to be done with an already open context anyway.

  2. One main argument against opening and disposing a context every time is the opening/disposing of connections (some DBMSs - SQL CE in particular, in my experience - have serious problems with creating connections to certain databases, and EF will create a new connection based on the provided connection string whenever it needs one). However, you can easily work around this by keeping a connection open (or letting it time out isn't too bad most of the time either) and passing it to your context upon creation, using the DbContext(DbConnection, bool) overload with contextOwnsConnection = false (see the sketch after this list).

  3. When you keep the context open over the whole lifetime, you can't possibly know which objects are already in the context - materialized, tracked or present in some other form - and which aren't. For me, this was a problem when rewriting the BL of my project. I tried to modify an object which I had added earlier. It was in the context (unchanged) but not in the change tracker, so I couldn't set its state - and I couldn't attach it again, because it was already in the context. This kind of behavior is pretty hard to control.

  4. Another form of the same problem: whenever a new object enters the context, EF tries to fix up its navigation properties with respect to the other objects already in the context. This is called relationship fixup and is one of the main reasons Include() works so well. It means that most of the time you'll have a huge object tree in your context, and upon adding/deleting an object (or whatever other operation) EF will try to carry this out across the whole tree (well... sometimes ;) ), which can cause a lot of trouble, especially when trying to add a new entry with an FK to already existing items.

  5. A database context is, as already mentioned, basically an object tree which, depending on its lifetime, can become gigantic. And EF has to do a few things with that tree: checking whether an item is already there (for obvious reasons), which is at best O(n*log(n)+m), where m is the number of object types and n the number of objects of that type in the context; and checking whether an object has been modified since retrieval - and since EF has to do this for every single object in every single call, it can slow things down considerably.

  6. This ties into the last issue. What do you really want when calling SaveChanges()? Most likely, you want to be able to say: "OK, these are the actions I performed, so EF should now issue these and those calls to the DB", right? But since EF has been tracking the entities, and maybe you modified some values, or another thread did something here and there... how can you be sure those are the only things SaveChanges() will do? How can you be sure that, over the whole lifetime of the context, nothing fishy ended up in your database (which would cancel the transaction, which can be pretty big)?
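
To illustrate point 2, here is a minimal sketch of handing an already open connection to a context, assuming EF6; MyContext, Customer and DoWork are made-up names.

using System.Data.Common;
using System.Data.Entity;

public class MyContext : DbContext
{
    // contextOwnsConnection: false - disposing the context will NOT
    // close or dispose the shared connection.
    public MyContext(DbConnection connection)
        : base(connection, contextOwnsConnection: false) { }

    public DbSet<Customer> Customers { get; set; } // hypothetical entity set
}

public class Example
{
    public void DoWork(DbConnection sharedConnection)
    {
        // The caller keeps the connection open; each unit of work gets
        // its own short-lived context on top of it.
        using (var context = new MyContext(sharedConnection))
        {
            // unit of work #1
        }

        using (var context = new MyContext(sharedConnection))
        {
            // unit of work #2 - no new connection is created
        }
    }
}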

But yes, of course, there are a few cases where you need to keep the context open (well, you don't strictly need to - you could just pass it). For me this was mostly where FK fixup was hard to maintain (still within one function, though; sometimes I simply disposed and re-created the context within a function for the sake of simplicity), and wherever sub-functions are called from multiple places in the code. I had the situation that a calling function had a context open and called another function that also needed the context. Usually that's not a problem, but my connection handling is kind of... advanced, and it led to a performance loss. I dealt with it by passing the already open context to the sub-function through an optional additional parameter - just like what you already mentioned, although it shouldn't really be necessary.
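
The optional-parameter workaround I mentioned looks roughly like this (Customer/Customers are made up; the Context type is the one from the question):

// If the caller already has a context it is reused; otherwise the
// function creates - and owns - a short-lived context of its own.
public Customer GetCustomer(int id, Context existingContext = null)
{
    bool ownsContext = existingContext == null;
    Context context = existingContext ?? new Context();
    try
    {
        return context.Customers.Find(id);
    }
    finally
    {
        if (ownsContext)
            context.Dispose();
    }
}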

For additional reference, here are some links that might be helpful in this regard. One's straight from MSDN and the other from a blog.

answered by DevilSuichiro

As @DevilSuichiro mentioned, the DbContext is meant as a Unit of Work container. By default, DbContext stores all loaded objects in memory and tracks their changes. When the SaveChanges method is called, all changes are sent to the DB in a single transaction.

So if your SuperMethod handles some kind of logical unit of work (e.g. an HTTP request in a web application), I would instantiate the context only once and pass it as a parameter to the sub-methods.
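
A rough sketch of what I mean, reusing the naming from the question (Customer and the method bodies are just illustrative):

// One context per logical unit of work (e.g. per HTTP request),
// passed down to the sub-methods that do the actual database work.
public void SuperMethod()
{
    using (Context context = new Context())
    {
        SetMethod(context, new Customer());       // writes via the shared context
        Customer loaded = GetMethod(context, 42); // reads via the same context
        context.SaveChanges();                    // one transaction for the whole unit of work
    }
}

public void SetMethod(Context context, Customer customer)
{
    context.Customers.Add(customer);
}

public Customer GetMethod(Context context, int id)
{
    return context.Customers.Find(id);
}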

Regarding your second question - if you instantiate the context only once, it's IMO better to have more methods that are simple, easier to maintain and have meaningful names. If you want to create a new instance of the context in every submethod, it depends on what "possibly large" number means :-)

answered by Lukas Kabrt