For example, suppose I am creating a 3-layer application (data / business / UI) and the data layer is grabbing single or multiple records. Do I convert everything from the data layer into generic lists/collections before sending it to the business layer? Is it OK to send DataTables? What about sending info back to the data layer?
If I use objects/lists, are these members of the Data or the Business layer? Can I use the same objects to pass data to and from the layers?
Here is some pseudo code:
an object "user" with email / password fields

In the UI layer, the user inputs an email and password. The UI layer does validation and then, I assume, creates a new user object to pass to the business layer, which does further validation and passes the same object to the data layer to insert the record. Is this correct?
I am new to .NET (I come from an 8+ year ASP/VBScript background) and am trying to get up to speed on the 'right' way to do things.
I am updating this answer because comments left by Developr seem to indicate that he would like a bit more detail.
The short answer to your question is yes: you'll want to use class instances (objects) to mediate the interface between your UI and your Business Logic Layer. The BLL and DAL will communicate as discussed below. You should not be passing DataTables or SqlDataReaders around.
The simple reasons as to why: objects are type-safe, offer Intellisense support, permit you to make additions or alterations at the Business Layer that aren't necessarily found in the database, and give you some freedom to unlink the application from the database so that you can maintain a consistent BLL interface even as the database changes (within limits, of course). It is simply good programming practice.
The big picture is that, for any page in your UI, you'll have one or more "models" that you want to display and interact with. Objects are the way to capture the current state of a model. In terms of process: the UI will request a model (which may be a single object or a list of objects) from the Business Logic Layer (BLL). The BLL then creates and returns this model - usually using the tools from the Data Access Layer (DAL). If changes are made to the model in the UI, then the UI will send the revised object(s) back to the BLL with instructions as to what to do with them (e.g. insert, update, delete).
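As a concrete sketch of that round trip (the class and method names here are hypothetical, purely for illustration, not from any particular framework):

```csharp
// Hypothetical model and logic classes; a minimal sketch of the UI <-> BLL round trip.
public class UserModel
{
    public int UID { get; set; }
    public string Email { get; set; }
    public string Password { get; set; }
}

public class UsersLogic
{
    // The BLL builds and returns the model, typically using the DAL internally.
    public UserModel GetUser(int uid)
    {
        return new UserModel { UID = uid }; // fill remaining fields from the DAL
    }

    // The UI sends the revised model back with an instruction (insert/update/delete).
    public void Save(UserModel user)
    {
        // validate here, then hand off to the DAL
    }
}

// In the UI layer:
//   UserModel user = logic.GetUser(42);
//   user.Email = updatedEmail;
//   logic.Save(user);
```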
.NET is great for this kind of Separation of Concerns because the Generic container classes - and in particular the List<> class - are perfect for this kind of work. They not only let you pass the data around, but they also integrate easily with sophisticated UI controls (grids, lists, etc.) via the ObjectDataSource class. You can implement the full range of operations that you need to develop the UI using ObjectDataSource: "Fill" operations with parameters, CRUD operations, sorting, and so on.
Because this is fairly important, let me make a quick diversion to demonstrate how to define an ObjectDataSource:
<asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
    OldValuesParameterFormatString="original_{0}"
    SelectMethod="GetArticles"
    OnObjectCreating="OnObjectCreating"
    TypeName="MotivationBusinessModel.ContentPagesLogic">
    <SelectParameters>
        <asp:SessionParameter DefaultValue="News" Name="category"
            SessionField="CurPageCategory" Type="String" />
    </SelectParameters>
</asp:ObjectDataSource>
Here, MotivationBusinessModel is the namespace for the BLL and ContentPagesLogic is the class implementing the logic for, well, Content Pages. The method for pulling data is "GetArticles" and it takes a parameter, category, drawn from the session field CurPageCategory. In this particular case, the ObjectDataSource returns a list of objects that is then used by a grid. Note that I need to pass session state information to the BLL class, so in the code-behind I have a method "OnObjectCreating" that lets me create the object and pass in parameters:
public void OnObjectCreating(object sender, ObjectDataSourceEventArgs e)
{
    e.ObjectInstance = new ContentPagesLogic(sessionObj);
}
So, this is how it works. But that raises one very big question: where do the Models / Business Objects come from? ORMs like Linq to SQL and Subsonic offer code generators that let you create a class for each of your database tables. That is, these tools say that the model classes should be defined in your DAL and map directly onto database tables. Linq to Entities lets you define your objects in a manner quite distinct from the layout of your database but is correspondingly more complex (that is why there is a distinction between Linq to SQL and Linq to Entities). In essence, it is a BLL solution. Joel and I have said in various places on this thread that, really, the Business Layer is generally where the Models should be defined (although I use a mix of BLL and DAL objects in reality).
Once you decide to do this, how do you implement the mapping from models to the database? Well, you write classes in the BLL to pull the data (using your DAL) and fill the object or list of objects. It is Business Logic because the mapping is often accompanied by additional logic to flesh out the Model (e.g. defining the value of derived fields).
Joel creates static Factory classes to implement the model-to-database mapping. This is a good approach as it uses a well-known pattern and places the mapping right in the construction of the object(s) to be returned. You always know where to go to see the mapping and the overall approach is simple and straightforward.
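A static factory along those lines (the names and members below are hypothetical, not Joel's actual code) might be sketched as:

```csharp
using System.Data;

// Hypothetical sketch of the static-factory approach: the mapping from a data
// row to a business object lives in one well-known place.
public class ArticleModel
{
    public int UID { get; set; }
    public string Title { get; set; }
}

public static class ArticleFactory
{
    // The factory owns the field-to-property mapping, so there is exactly one
    // place to look when the database or the model changes.
    public static ArticleModel FromRecord(IDataRecord record)
    {
        return new ArticleModel
        {
            UID = (int)record["UID"],
            Title = (string)record["Title"]
        };
    }
}
```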
I've taken a different approach. Throughout my BLL, I define Logic classes and Model classes. These are generally defined in matching pairs where both classes are defined in the same file and whose names differ by their suffix (e.g. ClassModel and ClassLogic). The Logic classes know how to work with the Model classes - doing things like Fill, Save ("Upsert"), Delete, and generate feedback for a Model Instance.
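In that scheme, a matched pair (hypothetical names following the suffix convention described above; members are illustrative only) might look like:

```csharp
using System.Collections.Generic;

// Hypothetical matched pair following the Model/Logic suffix convention.
// Both classes would live in the same file in the BLL.
public class EventModel
{
    public int UID { get; set; }
    public string Title { get; set; }
}

public class EventLogic
{
    // Fill: build a single instance or a list, typically via DAL helpers.
    public List<EventModel> Fill(int siteID)
    {
        return new List<EventModel>(); // populate via the DAL in practice
    }

    // Save ("Upsert"): insert or update depending on whether UID is set.
    public void Save(EventModel m)
    {
        // validate, then insert/update via the DAL
    }

    public void Delete(EventModel m)
    {
        // remove via the DAL
    }
}
```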
In particular, to do the Fill, I leverage methods found in my primary DAL class (shown below) that let me take any class and any SQL query and find a way to create/fill instances of the class using the data returned by the query (either as a single instance or as a list). That is, the Logic class just grabs a Model class definition, defines a SQL Query and sends them to the DAL. The result is a single object or list of objects that I can then pass on to the UI. Note that the query may return fields from one table or multiple tables joined together. At the mapping level, I really don't care - I just want some objects filled.
Here is the first function. It will take an arbitrary class and map it automatically to all matching fields extracted from a query. The matching is performed by finding fields whose name matches a property in the class. If there are extra class fields (e.g. ones that you'll fill using business logic) or extra query fields, they are ignored.
public List<T> ReturnList<T>() where T : new()
{
    try
    {
        List<T> fdList = new List<T>();
        myCommand.CommandText = QueryString;
        SqlDataReader nwReader = myCommand.ExecuteReader();
        Type objectType = typeof(T);
        PropertyInfo[] typeFields = objectType.GetProperties();
        if (nwReader != null)
        {
            while (nwReader.Read())
            {
                T obj = new T();
                for (int i = 0; i < nwReader.FieldCount; i++)
                {
                    foreach (PropertyInfo info in typeFields)
                    {
                        // Because the class may have properties that are *not* being
                        // filled, match by name rather than indexing nwReader[info.Name].
                        if (info.Name == nwReader.GetName(i))
                        {
                            if (!nwReader[i].Equals(DBNull.Value))
                                info.SetValue(obj, nwReader[i], null);
                            break;
                        }
                    }
                }
                fdList.Add(obj);
            }
            nwReader.Close();
        }
        return fdList;
    }
    catch
    {
        conn.Close();
        throw;
    }
}
This is used in the context of my DAL, but the only things that you have to have in the DAL class are a holder for the QueryString, a SqlCommand object with an open Connection, and any parameters. The key is just to make sure that ExecuteReader will work when this is called. A typical use of this function by my BLL thus looks like:
return qry.Command("Select AttendDate, Count(*) as ClassAttendCount From ClassAttend") .Where("ClassID", classID) .ReturnList<AttendListDateModel>();
You can also implement support for anonymous classes like so:
public List<T> ReturnList<T>(T sample)
{
    try
    {
        List<T> fdList = new List<T>();
        myCommand.CommandText = QueryString;
        SqlDataReader nwReader = myCommand.ExecuteReader();
        var properties = TypeDescriptor.GetProperties(sample);
        if (nwReader != null)
        {
            while (nwReader.Read())
            {
                int objIdx = 0;
                object[] objArray = new object[properties.Count];
                for (int i = 0; i < nwReader.FieldCount; i++)
                {
                    foreach (PropertyDescriptor info in properties)
                    {
                        if (info.Name == nwReader.GetName(i))
                        {
                            objArray[objIdx++] = nwReader[info.Name];
                            break;
                        }
                    }
                }
                fdList.Add((T)Activator.CreateInstance(sample.GetType(), objArray));
            }
            nwReader.Close();
        }
        return fdList;
    }
    catch
    {
        conn.Close();
        throw;
    }
}
A call to this looks like:
var qList = qry.Command("Select QueryDesc, UID, StaffID From Query") .Where("SiteID", sessionObj.siteID) .ReturnList(new { QueryDesc = "", UID = 0, StaffID=0 });
Now qList is a generic list of dynamically-created class instances defined on the fly.
Let's say you have a function in your BLL that takes a pull-down list as an argument and fills it with data. Here is how you could fill the pull-down with the results retrieved above:
foreach (var queryObj in qList)
{
    pullDownList.Add(new ListItem(queryObj.QueryDesc, queryObj.UID.ToString()));
}
In short, we can define anonymous Business Model classes on the fly and then fill them just by passing some (on the fly) SQL to the DAL. Thus, the BLL is very easy to update in response to evolving needs in the UI.
One last note: If you are concerned that defining and passing around objects wastes memory, you shouldn't be: if you use a SqlDataReader to pull the data and place it into the objects that make up your list, you'll only have one in-memory copy (the list) as the reader iterates through in a read-only, forward-only fashion. Of course, if you use DataAdapter and Table classes (etc.) at your data access layer then you would be incurring needless overhead (which is why you shouldn't do it).
In general, I think it is better to send objects rather than DataTables. With objects, each layer knows what it is receiving (which objects, with what properties, etc.). You get compile-time safety with objects: you can't accidentally misspell a property name, and it enforces an inherent contract between the two tiers.
Joshua also brings up a good point: by using your own custom object, you also decouple the other tiers from the data tier. You can always populate your custom object from another data source and the other tiers will be none the wiser. With a SQL DataTable, this would probably not be so easy.
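To make that decoupling concrete (again with hypothetical names, purely as a sketch): if the upper tiers only ever see the model class, the source behind it can be swapped freely.

```csharp
using System.Collections.Generic;

// Hypothetical sketch: the UI and BLL only ever see PersonModel, so the
// backing data source can change without them noticing.
public class PersonModel
{
    public string Name { get; set; }
}

public interface IPersonSource
{
    List<PersonModel> LoadAll();
}

public class SqlPersonSource : IPersonSource
{
    public List<PersonModel> LoadAll()
    {
        return new List<PersonModel>(); // fill from the database in practice
    }
}

// Swap in without touching the BLL or UI, e.g. for tests or an import feed.
public class XmlPersonSource : IPersonSource
{
    public List<PersonModel> LoadAll()
    {
        return new List<PersonModel>(); // fill from an XML file in practice
    }
}
```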
Joel also made a good point: having your data layer aware of your business objects is not a good idea, for the same reason that making your business and UI layers aware of the specifics of your data layer is not.