I initially designed my system following the S# architecture example outlined in this CodeProject article (unfortunately, I am not using NHibernate). The basic idea is that each domain object that needs to communicate with the persistence layer has a corresponding Data Access Object (DAO) in a separate library. Each DAO implements an interface, and when a domain object needs a data access method, it always codes against that interface, never against the DAO itself.
At the time, and still, I thought this design very flexible. However, as the number of objects in my domain model has grown, I find myself questioning whether there isn't an organizational problem here. For example, almost every object in the domain ends up with a corresponding Data Access Object and Data Access Object interface. Not only that, but each of these lives in a different place, which makes maintenance harder if I want to do something simple like shuffle some namespaces around.
Interestingly enough, many of these DAOs (and their corresponding interfaces) are very simple creatures; the most common has only a single GetById() method. I end up with a whole bunch of objects such as:
public interface ICustomerDao {
    Customer GetById(int id);
}

public interface IProductDao {
    Product GetById(int id);
}

public interface IAutomaticWeaselDao {
    AutomaticWeasel GetById(int id);
}
Their implementations are usually very trivial, too. This has me wondering whether it wouldn't be simpler to go in a different direction: switch strategy by having a single object for simple data access tasks, and reserve the creation of dedicated Data Access Objects for anything that needs something a little more complicated.
public interface SimpleObjectRepository {
    Customer GetCustomerById(int id);
    Product GetProductById(int id);
    AutomaticWeasel GetAutomaticWeaselById(int id);
    Transaction GetTransactionById(int id);
}

public interface TransactionDao {
    Transaction[] GetAllCurrentlyOngoingTransactionsInitiatedByASweatyGuyNamedCarl();
}
Does anyone have any experience with an architecture like this? Overall I am very happy with the setup as it is now; my only concern is managing all these little files. I am still wondering, however, what other approaches to structuring the Data Access Layer exist.
A Data Access Layer comprises a collection of classes, interfaces, and their methods and properties, used to perform CRUD (Create, Read, Update, and Delete) operations in the application.
The guiding design principle in the data access layer: any time a business object needs to access the data tier, it goes through method calls in the DAL instead of calling directly down to the data tier. This pushes database-specific code into the DAL and keeps your business objects database-independent.
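For illustration, here is a minimal sketch of that principle in C# (not code from the article): the Customer columns, the connection handling, and the service class are all assumptions, and only the concrete DAO ever sees SQL.

using System.Data.SqlClient;

// Stand-in for the question's domain object; the Id and Name columns are assumptions.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The question's interface, repeated so the sketch is self-contained.
public interface ICustomerDao
{
    Customer GetById(int id);
}

// The only class that knows any SQL; swapping databases means swapping this implementation.
public class SqlCustomerDao : ICustomerDao
{
    private readonly string connectionString;

    public SqlCustomerDao(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public Customer GetById(int id)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT Id, Name FROM Customer WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                if (!reader.Read())
                {
                    return null;
                }
                return new Customer
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                };
            }
        }
    }
}

// The business layer codes against the interface and stays database-independent.
public class CustomerService
{
    private readonly ICustomerDao customerDao;

    public CustomerService(ICustomerDao customerDao)
    {
        this.customerDao = customerDao;
    }

    public bool Exists(int customerId)
    {
        return customerDao.GetById(customerId) != null;
    }
}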
I recommend against the simple approach except in simple systems. Usually I think you're better off creating a custom repository for each aggregate and encapsulating as much suitable logic as you can within it.
So my approach would be to have a repository for each aggregate that needs one, such as CustomerRepository. This would have an Add (save) method and, if suitable for that aggregate, a Remove (delete) method. It would also have any other custom methods that apply, including queries (GetActive), and maybe some of those queries could accept specifications.
This sounds like a lot of effort, but apart from the custom queries, most of the code is very simple to implement, at least if you are using a modern ORM. So I use inheritance (ReadWriteRepositoryBase&lt;T&gt; where T : IAggregateRoot) and/or composition (calling out to a RepositoryHelper class). The base class might have methods that apply in all cases, such as GetById.
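For concreteness, here is a minimal sketch of that base-class approach, assuming an NHibernate-style ISession; the IAggregateRoot marker, the Customer aggregate, and its IsActive flag are made up for illustration, not code from this answer:

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public interface IAggregateRoot { }

// Made-up aggregate for illustration.
public class Customer : IAggregateRoot
{
    public virtual int Id { get; set; }
    public virtual bool IsActive { get; set; }
}

// Shared plumbing that applies to every aggregate root.
public abstract class ReadWriteRepositoryBase<T> where T : class, IAggregateRoot
{
    protected readonly ISession Session;

    protected ReadWriteRepositoryBase(ISession session)
    {
        Session = session;
    }

    public virtual T GetById(int id)
    {
        return Session.Get<T>(id);
    }

    public virtual void Add(T entity)
    {
        Session.Save(entity);
    }

    public virtual void Remove(T entity)
    {
        Session.Delete(entity);
    }
}

// Aggregate-specific repositories add only the custom queries they need.
public class CustomerRepository : ReadWriteRepositoryBase<Customer>
{
    public CustomerRepository(ISession session) : base(session) { }

    public IList<Customer> GetActive()
    {
        return Session.Query<Customer>().Where(c => c.IsActive).ToList();
    }
}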
Hope this helps.
I work in PHP, but I have something similar set up for my data access layer. I have implemented an interface that looks something like this:
interface DataAccessObject
{
    public static function get(array $filters = array(), array $order = array(), array $limit = array());
    public function insert();
    public function update();
    public function delete();
}
And then each of my data access objects works something like this:
class CustomerDao implements DataAccessObject
{
    public function __construct($dao_id = null) // Null allowed so you can construct an empty object
    {
        // Code that gets the values from the database and assigns them as properties
    }

    public static function get(array $filters = array(), array $order = array(), array $limit = array()) {} // Code to implement the function
    public function insert() {} // Code to implement the function
    public function update() {} // Code to implement the function
    public function delete() {} // Code to implement the function
}
I am currently building each of the data access object classes manually, so when I add a table or modify an existing table in the database, I obviously have to write the new code by hand. In my case, this is still a huge step up from where our code base was.
However, you can also use the SQL metadata (assuming you've got a fairly sound database design that takes advantage of foreign key constraints and the like) to generate these data access objects. In theory, a single parent DataAccessObject class could construct the properties and methods of the class, and even build relationships to the other tables in the database automatically. This would more or less accomplish what you're describing, because you could then extend the DataAccessObject class to provide custom methods and properties for situations that require some hand-written code.
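As a rough, hypothetical illustration of that metadata idea in C# (SQL Server flavour, since the question is .NET; a real generator would do far more), the raw material is just the INFORMATION_SCHEMA views:

using System;
using System.Data.SqlClient;

// Reads column metadata that a code generator could turn into DAO properties.
public static class SchemaReader
{
    public static void PrintColumns(string connectionString, string table)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS " +
            "WHERE TABLE_NAME = @table ORDER BY ORDINAL_POSITION", connection))
        {
            command.Parameters.AddWithValue("@table", table);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Each row would become a generated property on the DAO.
                    Console.WriteLine("{0} ({1})", reader.GetString(0), reader.GetString(1));
                }
            }
        }
    }
}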
As a side note for .NET development, have you looked at a framework that handles the underlying structure of the data access layer for you, such as SubSonic? If not, I would recommend looking into one: http://subsonicproject.com/.
Or for PHP development, a framework such as Zend Framework would provide similar functionality: http://framework.zend.com
George, I know exactly how you feel. Billy's architecture makes sense to me, but the need to create container, IMapper, and mapper files is painful. And if you are using NHibernate, there is also the corresponding .hbm file and usually a few unit test scripts to check everything's working.
I assume that even though you're not using NHibernate, you're still using a generic base class to load/save your containers, i.e.:
public class BaseDAO<T> : IDAO<T>
{
    public T Save(T entity)
    {
        // etc......
    }
}

public class YourDAO : BaseDAO<YourEntity>
{
}
I guess that without NHibernate you'd be using reflection or some other mechanism to determine which SQL/sproc to call?
Either way, my thought would be that where a DAO only needs to perform the basic CRUD operations defined in the base class, there should be no need to write custom mappers and interfaces. The only way I can think of achieving this is to use Reflection.Emit to create the DAO dynamically on the fly; a lighter alternative is sketched below.
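Since BaseDAO&lt;T&gt; is a concrete class, one lighter option than Reflection.Emit is to hand out the closed generic directly and register hand-written DAOs only where they're needed. The DaoFactory below is hypothetical, not from this thread, and assumes BaseDAO&lt;T&gt; has a parameterless constructor:

using System;
using System.Collections.Generic;

// Hypothetical factory: hands out the plain BaseDAO<T> unless a hand-written
// subclass has been registered for that entity type.
public class DaoFactory
{
    private readonly Dictionary<Type, object> customDaos = new Dictionary<Type, object>();

    public void Register<T>(IDAO<T> dao)
    {
        customDaos[typeof(T)] = dao;
    }

    public IDAO<T> For<T>()
    {
        object dao;
        if (customDaos.TryGetValue(typeof(T), out dao))
        {
            return (IDAO<T>)dao;
        }

        // No custom DAO registered: fall back to the generic base class, so
        // simple aggregates need no dedicated interface or class at all.
        return new BaseDAO<T>();
    }
}

// Usage (entity names are placeholders):
//   DaoFactory factory = new DaoFactory();
//   IDAO<Customer> customers = factory.For<Customer>();   // generic CRUD only
//   factory.Register<Order>(new OrderDAO());              // custom where needed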