 

Which is the "best" data access framework/approach for C# and .NET?

Tags: c#, .net, sql, asp.net



I think LINQ to SQL is good for projects targeting SQL Server.

The ADO.NET Entity Framework is better if we are targeting different databases. Currently a lot of providers are available for the ADO.NET Entity Framework: providers for PostgreSQL, MySQL, esql, Oracle, and many others (check http://blogs.msdn.com/adonet/default.aspx).

I don't want to use plain ADO.NET anymore because it's a waste of time. I always go for an ORM.


Having worked on 20+ different C#/ASP.NET projects, I always end up using NHibernate. I often start with a completely different stack: ADO.NET, ActiveRecord, hand-rolled weirdness. There are numerous reasons why NHibernate can work in a wide range of situations, but the absolute standout for me is the time savings, especially when linked to code generation. You can change the data model and the entities get rebuilt, but most or all of the other code doesn't need to change.
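
For a sense of what that looks like in code, here is a minimal NHibernate sketch. The Person entity is hypothetical, and the mappings and connection string are assumed to be configured in hibernate.cfg.xml (or generated, per the code-generation flow above):

using NHibernate.Cfg;

// Reads mappings and connection settings from hibernate.cfg.xml.
var sessionFactory = new Configuration().Configure().BuildSessionFactory();

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    session.Save(new Person { Username = "bob" }); // schedules an INSERT
    tx.Commit();                                   // flushes to the database
}

// Hypothetical entity; members are virtual so NHibernate can proxy them.
public class Person
{
    public virtual int Id { get; set; }
    public virtual string Username { get; set; }
}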

MS does have a nasty habit of pushing technologies in this area that parallel existing open source, and then dropping them when they don't take off. Does anyone remember ObjectSpaces?


Update for newer technologies:

With Microsoft SQL Server for Linux out in beta right now, I think it's OK not to be database agnostic. The .NET Core and MS SQL route lets you run entirely on Linux servers like Ubuntu, with no Windows dependencies.

As such, IMO, a very good flow is to skip a full ORM framework or data controls and instead leverage the power of SSDT (SQL Server Data Tools) Visual Studio projects plus a micro-ORM.

In Visual Studio you can create a SQL Server project as a first-class Visual Studio project. Doing so lets you build the entire database via table designers or raw query editing right inside Visual Studio.

Second, you get SSDT's Schema Compare tool, which you can use to compare your database project to a live database in Microsoft SQL Server and update either side. You can sync your Visual Studio project to the server, pushing your project's updates out to the server, or sync the server to your project, updating your source code. Via this route you can easily pick up changes the DBA made in maintenance last night, and just as easily push out your new development changes for a new feature, all with one simple tool.

Using that same tool, you can also generate the migration script without actually running it. If you need to hand that off to an operations department and submit a change order, it works for that flow too.

Now, for writing code against your MS SQL database, I recommend PetaPoco.

PetaPoco works perfectly in line with the above SSDT solution. It comes with T4 text templates you can use to generate all your data entity classes, and it generates the bulk of the data-layer classes for you.
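
For illustration, a T4-generated entity for a hypothetical People table might look roughly like this (the attributes are PetaPoco's; the table and column names are assumptions):

// Sketch of what the generated entity could look like.
[PetaPoco.TableName("People")]
[PetaPoco.PrimaryKey("Id")]
public class Person
{
    public int Id { get; set; }
    public string Username { get; set; }
}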

The catch is that you have to write the queries yourself, which isn't a bad thing.

So you end up with something like this:

var people = db.Fetch<Person>("SELECT * FROM People WHERE Username LIKE @0", "%bob%");
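
(Here db is a PetaPoco.Database instance; a minimal, assumed setup looks like this, with a placeholder connection string:)

// Minimal PetaPoco setup sketch; the connection string is a placeholder.
var db = new PetaPoco.Database(
    "Server=.;Database=MyDb;Trusted_Connection=True;",
    "System.Data.SqlClient");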

PetaPoco automatically parameterizes @0 for you (note that the LIKE wildcards belong in the parameter value, not in the query string, so the whole pattern is passed as a single parameter). It also has a handy Sql class for building queries.
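
A rough sketch of that builder, reusing the hypothetical Person entity and db instance from above:

// Compose a query incrementally; each fragment carries its own parameters.
var sql = PetaPoco.Sql.Builder
    .Append("SELECT * FROM People")
    .Append("WHERE Username LIKE @0", "%bob%")
    .Append("ORDER BY Username");

var people = db.Fetch<Person>(sql);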

Furthermore, PetaPoco is an order of magnitude faster than EF6 and 8+ times faster than EF7.

So in total, this solution uses SSDT for schema management and PetaPoco for code integration, gaining high maintainability, customization, and very good performance.

The only downside to this approach is that you're tying yourself hard to Microsoft SQL Server. However, IMO, Microsoft SQL Server is one of the best RDBMSs out there.

It's got Database Mail, Agent jobs, CLR object capabilities, and on and on. Plus the integration between Visual Studio and MS SQL Server is phenomenal, and you don't get any of that if you choose a different RDBMS.


I must say that I never used NHibernate because of the immense time needed just to get started... time wasted on the XML setup.

I recently built a web application in MVC 2, where I chose the ADO.NET Entity Framework, and I use LINQ all the time.

I must say, I was impressed with the speed! Our site was getting around 35,000 unique visitors per day, at around 60 GB of bandwidth per day (I radically reduced that 60 GB figure by hosting all static files on Amazon S3 - great .NET wrapper they have, I must say).

I will always go this way. It's easy to start (just add a new data item, choose tables, and that's it! For every change in the database we just need to refresh the model - done automatically in just two clicks) and it's fun to use - LINQ rules!
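
For illustration, a typical LINQ-to-Entities query against a Database First model looks something like this (MyEntities and Products are hypothetical names generated from the model):

using System.Linq;

// MyEntities is the hypothetical context generated from the model;
// Products is assumed to map to a table with Price and Name columns.
using (var db = new MyEntities())
{
    var cheapProducts = db.Products
        .Where(p => p.Price < 10m)   // translated to SQL by Entity Framework
        .OrderBy(p => p.Name)
        .ToList();
}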