
Is using a Repository Abstraction over Entity Framework Good Practice or not?

I am about to begin a new project which will use Entity Framework (EF) as the ORM to link with SQL Server. I have been doing a large amount of reading to understand good approaches to working with EF with regard to abstraction and, ultimately, making my code testable.

The problem is that the more reading I do, the more confused I become with it all.

What I thought was a good way to go:

I was going to go down the repository route: make the repositories easily injectable via interfaces and easy to mock for testing purposes. From a lot of the examples I have found, there are many who swear by this approach. For the repository parts that interact with EF, I was just going to go with integration tests, because there would be no business logic in my repos; I would shift that up into the Service/Controller classes.
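
To illustrate, the kind of repository interface I had in mind looked roughly like this (the names and members here are illustrative only, not a settled design):

using System.Collections.Generic;

// Illustrative only: the sort of injectable repository interface I was considering.
// A service would take this via constructor injection and tests would mock it;
// only the EF-backed implementation would be covered by integration tests.
public interface IRepository<TEntity> where TEntity : class
{
    TEntity GetById(int id);
    IEnumerable<TEntity> GetAll();
    void Add(TEntity entity);
    void Remove(TEntity entity);
}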

Some say the Repository Pattern is an unwarranted abstraction

http://ayende.com/blog/3955/repository-is-the-new-singleton

There were other links, but I did not want to flood this question with too many.

I kind of get this. I understand that EF and its implementation is essentially an abstraction, but to me it seems like a concrete one (it hits something outside the scope of the application: the DB). The issue for me is that it pushes the testing problem up (in the initial architecture at least), where I will probably interact with the Data Context and Unit of Work from within my Service. That feels like I am mixing data access logic with business logic at that level, and then how the hell do I unit test that?

I seem to have read so much that I now have no clear direction, and the inevitable coder's block has set in. I am trying to find an approach that is clean and that I can test against.

UPDATE - Solution I am going to start off with

I went down a slightly different path than the one in the answer given by Simon, but thanks to an article I had already read (and which he submitted again), I went over it once more. Firstly, I am using Code First, as I have an existing database and for some reason don't like the designer tools. I created some simple POCO objects and some mappings; to keep things simple, I am just going to show one POCO and its mapping.

POCO

public abstract class BaseEntity<T>
{
    [Key]
    public T Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedOn { get; set; }
}    

public class Project : BaseEntity<int>
{
    public virtual ICollection<Site> Sites { get; set; }
    public bool Active { get; set; }
}    
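
Mapping

A minimal sketch of the ProjectMap configuration referenced from OnModelCreating below; the table and column settings here are placeholders rather than the real schema.

// Sketch only: table name and column settings are placeholders, not the actual schema.
public class ProjectMap : EntityTypeConfiguration<Project>
{
    public ProjectMap()
    {
        ToTable("Projects");                        // assumed table name
        Property(p => p.Name).HasMaxLength(255);    // assumed length
        Property(p => p.CreatedOn).IsRequired();
    }
}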

Unit Of Work Interface

public interface IUnitOfWork
{
    IDbSet<Project> Projects { get; }

    void Commit();
}

Context

internal class MyContext : DbContext
{
    public MyContext(string connectionString)
        : base(connectionString)
    {

    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new ProjectMap());
    }
}

Concrete UnitOfWork

public class UnitOfWork : IUnitOfWork
{
    readonly MyContext _context;
    const string ConnectionStringName = "DBConn";

    public UnitOfWork()
    {
        var connectionString = ConfigurationManager.ConnectionStrings[ConnectionStringName].ConnectionString;
        _context = new MyContext(connectionString);
    }

    public IDbSet<Project> Projects
    {
        get { return _context.Set<Project>(); }
    }

    public void Commit()
    {
        _context.SaveChanges();
    }
}

Basic Test Code

var _repo = new UnitOfWork().Projects;
var projects = from p in _repo
               select p;

foreach (var project in projects)
{
    Console.WriteLine("{0} - {1} - {2} - {3}", project.Id, project.Name, project.Active, project.CreatedOn.ToShortDateString());
}

Console.Read();

The UnitOfWork is self explanatory, but because it is only based on a concrete context internally, I can create a fake IUnitOfWork and pass it a fake IDbSet for testing. For me (and this may come back to bite me architecturally the more I get into it), I don't have a repo on top of this abstracting EF away (I think). As the article explains, IDbSet is the equivalent of a repo, so that's what I am using. I am kind of hiding my custom context behind the UoW, but I am going to see how that pans out.
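
As a rough sketch of what that fake could look like (FakeDbSet and FakeUnitOfWork are just names I am playing with; this in-memory version is only illustrative):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// Illustrative in-memory fake of IDbSet<T>; names and details are placeholders.
public class FakeDbSet<T> : IDbSet<T> where T : class
{
    private readonly ObservableCollection<T> _data = new ObservableCollection<T>();
    private readonly IQueryable<T> _query;

    public FakeDbSet()
    {
        _query = _data.AsQueryable();
    }

    public T Add(T entity) { _data.Add(entity); return entity; }
    public T Remove(T entity) { _data.Remove(entity); return entity; }
    public T Attach(T entity) { _data.Add(entity); return entity; }
    public T Create() { return Activator.CreateInstance<T>(); }
    public TDerived Create<TDerived>() where TDerived : class, T
    {
        return Activator.CreateInstance<TDerived>();
    }

    // Keyed lookup is not needed for these tests, so it is left unimplemented.
    public T Find(params object[] keyValues) { throw new NotImplementedException(); }

    public ObservableCollection<T> Local { get { return _data; } }

    public Type ElementType { get { return _query.ElementType; } }
    public Expression Expression { get { return _query.Expression; } }
    public IQueryProvider Provider { get { return _query.Provider; } }

    public IEnumerator<T> GetEnumerator() { return _data.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return _data.GetEnumerator(); }
}

// Illustrative fake unit of work a test can hand to a service in place of the real one.
public class FakeUnitOfWork : IUnitOfWork
{
    public FakeUnitOfWork()
    {
        Projects = new FakeDbSet<Project>();
    }

    public IDbSet<Project> Projects { get; private set; }

    public void Commit()
    {
        // Nothing to persist in memory.
    }
}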

What I am thinking now is that I can use this in the service layer, which will encapsulate data retrieval and business rules but should be unit testable, as I can fake the EF-specific items. Hopefully that will make my controllers pretty lean, but we will see :)
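
Something along these lines is what I have in mind for the service layer (ProjectService and GetActiveProjects are placeholder names, not finished code):

using System.Collections.Generic;
using System.Linq;

// Placeholder service sketch: takes IUnitOfWork via the constructor so tests can pass a fake.
public class ProjectService
{
    private readonly IUnitOfWork _unitOfWork;

    public ProjectService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public IList<Project> GetActiveProjects()
    {
        // Query logic stays testable because IDbSet<Project> is IQueryable<Project>.
        return _unitOfWork.Projects.Where(p => p.Active).ToList();
    }
}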

asked Jul 24 '13 by Modika


1 Answer

Don't forget that if you are using Entity Framework, DbContext is akin to a Unit of Work and DbSet is akin to a Repository. This question, I think, invites personal opinion and might not be suited for Stack Overflow, but it is my opinion that if you are going to use Entity Framework, you might as well embrace the pattern. Further wrapping of the DbContext and the DbSets really just gives a very thin abstraction over the concrete classes, for ease of unit testing and, to a certain extent, Dependency Injection.

There's a topic on MSDN (Testability and Entity Framework 4.0) that can give you a good starting point. There are a few resources here and there suggesting how to implement this pattern in the context of Entity Framework. In most scenarios, you will use a DbContext to perform SaveChanges() and the DbSet<T> to do your CRUD operations, with the help of the IQueryable<T> implemented on DbSet<T>. You can implement your IUnitOfWork to mimic what you need on your DbContext, and the same for DbSet.

An implementation in that regard would almost be a 1-to-1 mapping of the methods on DbContext and DbSet<T>, but you can also implement a custom IUnitOfWork backed by an in-memory IRepository<T> (using HashSets, for example) that facilitates unit testing of the query logic against the Queryable components. Is the repository pattern a good idea? That is what's debatable. It really depends on the project, the flexibility you need, and what you want your different layers to do (and what you don't want them to do).
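
A quick sketch of that in-memory idea; note that the Add, Remove and Query members used here are only an assumed shape for IRepository<T>, whose actual members are left up to you in the code below:

using System.Collections.Generic;
using System.Linq;

// Assumed shape only: Add, Remove and Query stand in for whatever members IRepository<T> ends up declaring.
public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly HashSet<T> _items = new HashSet<T>();

    public void Add(T item) { _items.Add(item); }
    public void Remove(T item) { _items.Remove(item); }

    // Query logic written against IQueryable<T> can be unit tested against this.
    public IQueryable<T> Query() { return _items.AsQueryable(); }
}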

Edit to answer comment: Your UnitOfWork implementation should not inherit DbContext but create an instance, keep it private, and delegate its own method calls to the context. Your Repository<T> implementation would probably take a DbSet<T> in an internal constructor. One of the big advantages of the Model-First approach is the liberty it gives you in assembly organization. What I usually end up doing is separating the model and the DbContext into two separate assemblies, the latter referencing the former. This scenario would look like:

1 - Your Models.dll assembly contains your POCOs

2 - Your Data.dll assembly contains your interfaces and implementation. You could also have (for example) a Data.Core.dll that contains your interfaces and common utilities for your data access layer, and an interchangeable Data.Entity.dll (or Data.List.dll) for your actual implementation. If we go with the first option, it could look like:

public interface IUnitOfWork { /* Your methods */ }

public interface IRepository<T> where T : class { /* Your methods */ }

internal class YourDbContext : DbContext { /* Your implementation */ }

public class YourDatabaseContext : IUnitOfWork
{
    private readonly YourDbContext dbContext;

    public YourDatabaseContext()
    {
        // You could also go with the Lazy pattern here to defer creation
        dbContext = new YourDbContext();
    }
}

internal class DbSetRepository<T> : IRepository<T> where T : class
{
    private readonly DbSet<T> dbSet;

    public DbSetRepository(DbSet<T> dbSet)
    {
        // You could also use IDbSet<T> for a toned down version
        this.dbSet = dbSet;
    }
}

In this scenario, only your interfaces and your IUnitOfWork implementation are visible outside of the assembly (if you want to use Dependency Injection, for example). This is not crucial and is really a matter of design choice, because your DbContext will be "linked" to your POCOs by your internal implementation and everything will be wired up in your configurations.

answered Nov 16 '22 by Simon Belanger