Repository Pattern and Local Caching

I have the following interfaces/class:

public interface IUnitOfWork : IDisposable
{
    event EventHandler<EventArgs> Saved;
    DbSet<T> Set<T>() where T : class;
    DbEntityEntry<T> Entry<T>(T entity) where T : class;
    void Commit();
}
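
For reference, the EFUnitOfWork class used further down isn't shown here; a minimal sketch of what such an implementation might look like, assuming it simply wraps a DbContext and raises Saved after a successful SaveChanges (this sketch is an assumption, not the actual class):

using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

// Assumed implementation of IUnitOfWork wrapping an EF DbContext.
public class EFUnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;

    public EFUnitOfWork(DbContext context)
    {
        _context = context;
    }

    public event EventHandler<EventArgs> Saved;

    public DbSet<T> Set<T>() where T : class
    {
        return _context.Set<T>();
    }

    public DbEntityEntry<T> Entry<T>(T entity) where T : class
    {
        return _context.Entry(entity);
    }

    public void Commit()
    {
        _context.SaveChanges();
        var handler = Saved;
        if (handler != null) handler(this, EventArgs.Empty); // notify subscribers (e.g. the cache) after a successful save
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}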

And an implementation of a repository:

public class CachedSqlRepository<T, TKey, TContext> : ICacheRepository<T, TKey, TContext>
    where T : class
    where TContext : DbContext, IDisposable, new()
{
    //A list of the Navigation Properties to include
    private readonly Expression<Func<T, object>>[] _NavigationProperties;

    public CachedSqlRepository(params Expression<Func<T, object>>[] navigationProperties)
    {
        _NavigationProperties = navigationProperties;
        using (TContext dbContext = new TContext()) //Fetch the List of Entities
        {
            RefreshCache(dbContext);
        }
    }
    /// <summary>
    /// The Collection of Items in the database
    /// Note this is a cache, but it should replicate what's in the DB
    /// </summary>
    public IList<T> Items { get; private set; }

    public bool Any(Func<T, bool> predicate)
    {
        return Items.Any(predicate);
    }

    public void RefreshCache(DbContext context)
    {
        // Build the query, including each of the configured navigation properties.
        IQueryable<T> query = context.Set<T>();
        foreach (var navigationProperty in _NavigationProperties)
        {
            query = query.Include(navigationProperty);
        }
        Items = query.ToList();
    }

    /// <summary>
    /// Refresh the internal cache
    /// </summary>
    public void RefreshCache()
    {
        using (TContext dbContext = new TContext())
        {
            RefreshCache(dbContext);
        }
    }

    public IEnumerable<T> FilterBy(Func<T, bool> predicate)
    {
        return Items.Where(predicate);
    }

    public T Add(T entity)
    {
        T newEntity;
        using (TContext dbContext = new TContext())
        {
            newEntity = dbContext.Set<T>().Add(entity);
            if (dbContext.SaveChanges() == 1) //1 change was made
                Items.Add(newEntity);
        }
        return newEntity;
    }

    public void Delete(TKey id)
    {
        using (TContext dbContext = new TContext())
        {
            var attachedEntry = dbContext.Set<T>().Find(id);
            if (attachedEntry == null) return; // it doesn't exist anyway!
            dbContext.Set<T>().Remove(attachedEntry);
            dbContext.SaveChanges();
            RefreshCache(dbContext);
        }
    }

    public void Update(T entity, TKey id)
    {
        if (entity == null) throw new ArgumentException("Cannot update a null entity.");

        using (TContext dbContext = new TContext())
        {
            var entry = dbContext.Entry(entity);

            if (entry.State != EntityState.Detached) return;
            T attachedEntity = dbContext.Set<T>().Find(id);

            if (attachedEntity != null)
            {
                var attachedEntry = dbContext.Entry(attachedEntity);
                attachedEntry.CurrentValues.SetValues(entity);
            }
            else
            {
                entry.State = EntityState.Modified; // This should attach entity
            }
            dbContext.SaveChanges();
            RefreshCache(dbContext);
        }
    }

    #region Transaction Methods
    public IUnitOfWork StartTransaction()
    {
        return new EFUnitOfWork(new TContext());
    }

    public T TransactionAdd(T entity, IUnitOfWork context)
    {
        context.Saved += OnSave;
        return context.Set<T>().Add(entity);
    }

    public void TransactionDelete(TKey id, IUnitOfWork context)
    {
        var attachedEntry = context.Set<T>().Find(id);
        if (attachedEntry == null) return; // it doesn't exist anyway
        context.Saved += OnSave;
        context.Set<T>().Remove(attachedEntry);
    }

    public void TransactionUpdate(T entity, TKey id, IUnitOfWork context)
    {
        if (entity == null) throw new ArgumentException("Cannot update a null entity.");

        var entry = context.Entry(entity);

        if (entry.State != EntityState.Detached) return;
        T attachedEntity = context.Set<T>().Find(id);

        if (attachedEntity != null)
        {
            var attachedEntry = context.Entry(attachedEntity);
            attachedEntry.CurrentValues.SetValues(entity);
        }
        else
        {
            entry.State = EntityState.Modified; // This should attach entity
        }
        context.Saved += OnSave;
    }

    private void OnSave(object sender, EventArgs e)
    {
        RefreshCache();
    }
    #endregion
}

It is adapted from various patterns found online. I don't expect this to be useful for tables with hundreds of thousands of rows, but for lookup tables and the like it means I'm not always hitting the DB.
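
For illustration, consumption might look something like this (Country and MyDbContext are hypothetical names used only for this sketch, not part of my actual model):

// Hypothetical lookup entity and context, for illustration only.
public class Country
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Country> Countries { get; set; }
}

// Reads are served from the in-memory snapshot; writes hit the DB and then update the cache.
var countries = new CachedSqlRepository<Country, int, MyDbContext>();
var activeCountries = countries.FilterBy(c => c.IsActive);
countries.Add(new Country { Name = "New Zealand", IsActive = true });

// Transactional usage via the unit of work; Commit raises Saved, which refreshes the cache.
using (IUnitOfWork uow = countries.StartTransaction())
{
    countries.TransactionAdd(new Country { Name = "Fiji", IsActive = true }, uow);
    uow.Commit();
}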

It works, but some parts aren't very clean; for example, when I refresh the cache I sometimes have to pull all of the data again (this is still a work in progress).

Is this sound design? Or am I reinventing the wheel here?

Asked Oct 22 '22 by Simon


1 Answer

An interesting question, +1. In my view, caching context content is something that is best either done properly or left well alone; otherwise, rely on the database's own caching.

Why:

  • Parallel worker processes (WPs) would each have their own copy of the cache
  • Each WP may have multiple threads, and the context is not thread safe (see the sketch after this list)
  • Should each thread have its own cache?
  • Is your cache session-persistent?
    • No: you reload it on each request
    • Yes: are you using global caching in ASP.NET, the Enterprise Library cache, or similar?
      • Are you managing the cache correctly?
      • How do you deal with concurrency and changes?
  • Have you considered best practice around context lifetime? Some experts suggest keeping it short-lived
  • Is the DB located on a LAN near the web server?
  • Have you compared response times when the DB's buffer cache is doing the work?
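
To make the thread-safety point concrete: the Items list above is read and mutated with no synchronization, so two requests on different threads can race. One way to make an in-process snapshot safe is to publish an immutable list and swap the reference atomically; a minimal sketch (the class and member names here are illustrative, not from the original code):

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

public class SnapshotCache<T>
{
    // Readers always see a complete, immutable snapshot; the writer builds a
    // new list off to the side and publishes it with a single reference assignment.
    private volatile ReadOnlyCollection<T> _snapshot = new ReadOnlyCollection<T>(new List<T>());

    public IReadOnlyList<T> Items
    {
        get { return _snapshot; }
    }

    // Call this after SaveChanges (e.g. from the Saved event handler).
    public void Refresh(Func<IEnumerable<T>> loadFromDb)
    {
        _snapshot = new ReadOnlyCollection<T>(loadFromDb().ToList()); // readers see the old or the new list, never a half-updated one
    }
}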

Having looked into this topic in various environments, not just EF/.NET/SQL Server, I have come to the conclusion that unless the DB server has become (or is becoming) the CPU bottleneck and cannot easily be scaled, it is very reasonable to give the database plenty of memory and let it cache hundreds of MB before building or buying an application-side cache. I would rather throw GBs of RAM at SQL Server than code myself into knots on the web server.

If every microsecond counts, or your DB sits across a network link with latency/throughput issues, and your data is non-volatile and needs no cache expiry or concurrency management, then go ahead and implement caching.

Consider memory use, cache construction time and the memory persistence model carefully.

Look at tools built for caching, both for ideas and as potential off-the-shelf solutions, e.g. the Enterprise Library Caching Application Block.
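
For example, rather than hand-rolling cache lifetime management, the built-in System.Runtime.Caching.MemoryCache gives you expiry for free; a rough sketch (the key and the 10-minute expiry are arbitrary choices for illustration):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;

public static class LookupCache
{
    // Load a lookup list once and let MemoryCache handle expiry, so a stale
    // cache corrects itself after the timeout instead of needing manual refreshes.
    public static IReadOnlyList<T> GetOrLoad<T>(string key, Func<IEnumerable<T>> loadFromDb)
    {
        var cache = MemoryCache.Default;
        var cached = cache.Get(key) as IReadOnlyList<T>;
        if (cached != null) return cached;

        var items = loadFromDb().ToList().AsReadOnly();
        cache.Set(key, items, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(10) // arbitrary expiry window
        });
        return items; // two threads may race to load; harmless for a read-mostly lookup table
    }
}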

Good Luck.

Answered Nov 03 '22 by phil soady