I am currently developing a news feed Android app, and I'm trying to design it according to the principles of Clean Architecture.
In the data layer I am using the repository pattern as a facade for the different data sources: remote data from an API (https://newsapi.org/), local data from a DB (Realm or SQLite), as well as an in-memory cache.
In my domain layer I have defined some immutable model classes (Article, NewsSource, etc.) which are used by the domain layer as well as the presentation layer (no need for extra model classes in the presentation layer, in my opinion).
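For reference, such a domain model is just a plain Kotlin data class with no framework dependencies. A minimal sketch, limited to the fields used in the examples below:

    data class Article(
        val title: String,
        val urlToImage: String,
        val url: String
    )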
Does it make sense to use different model classes for the remote data source as well as for the local data source?
E.g. the remote data source uses Retrofit to make the API calls, and the models need to be annotated so that Gson can parse them.
data class RemoteArticleModel(
    @SerializedName("title") val title: String,
    @SerializedName("urlToImage") val urlToImage: String,
    @SerializedName("url") val url: String
)
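For context, the Retrofit service those models would be parsed from might look roughly like this. The endpoint, the query parameters, and the RemoteArticleResponse wrapper are assumptions based on the newsapi.org docs, not part of the question:

    import com.google.gson.annotations.SerializedName
    import retrofit2.Call
    import retrofit2.http.GET
    import retrofit2.http.Query

    // newsapi.org wraps the article list in a response object
    data class RemoteArticleResponse(
        @SerializedName("articles") val articles: List<RemoteArticleModel>
    )

    interface NewsApiService {
        // Hypothetical endpoint; see https://newsapi.org/docs
        @GET("v2/top-headlines")
        fun getTopHeadlines(
            @Query("country") country: String,
            @Query("apiKey") apiKey: String
        ): Call<RemoteArticleResponse>
    }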
The models for the local data source may also have to fulfill a certain contract; for example, models in a Realm DB need to extend RealmObject.
open class Dog : RealmObject() {
    var name: String? = null

    @LinkingObjects("dog")
    val owners: RealmResults<Person>? = null
}
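Applied to the article example, a hypothetical RealmArticleModel could look like this. Realm requires an open class with mutable properties and a no-arg constructor, hence the default values; using url as the @PrimaryKey is an assumption:

    import io.realm.RealmObject
    import io.realm.annotations.PrimaryKey

    open class RealmArticleModel(
        @PrimaryKey var url: String = "",
        var title: String = "",
        var urlToImage: String = ""
    ) : RealmObject()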
Obviously, I don't want my domain models to be 'polluted' by any data-source-specific contract (annotations, RealmObject inheritance, etc.). So I thought it would make sense to use different models for the different data sources and have the repository handle the mapping between them.
E.g. we want to fetch all articles from the remote API, store them in the local DB, and return them to the domain layer.
The flow would be:
1. The remote data source makes an HTTP request to the news API and retrieves a list of RemoteArticleModels.
2. The repository maps these models to the domain-specific article model (Article).
3. The domain models are then mapped to DB models (e.g. RealmArticleModel) and inserted into the DB.
4. Finally, the list of Articles is returned to the caller.
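Sketched in Kotlin, that flow could look roughly like this. RemoteDataSource, LocalDataSource, and ArticleRepository are hypothetical names, and the code is synchronous for brevity (in practice the calls would be wrapped in RxJava or coroutines):

    // Hypothetical abstractions wrapping the Retrofit service and the Realm DB
    interface RemoteDataSource {
        fun getArticles(): List<RemoteArticleModel>
    }

    interface LocalDataSource {
        fun insertArticles(articles: List<RealmArticleModel>)
    }

    class ArticleRepository(
        private val remote: RemoteDataSource,
        private val local: LocalDataSource
    ) {
        fun fetchArticles(): List<Article> {
            // 1. Fetch the remote models from the API
            val remoteArticles = remote.getArticles()
            // 2. Map them to domain models
            val articles = remoteArticles.map { Article(it.title, it.urlToImage, it.url) }
            // 3. Map the domain models to DB models and persist them
            local.insertArticles(articles.map { RealmArticleModel(it.url, it.title, it.urlToImage) })
            // 4. Return the domain models to the caller
            return articles
        }
    }

This makes the three allocations per article visible: one RemoteArticleModel, one Article, and one RealmArticleModel.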
Two questions arise. The above example shows how many allocations this approach causes: for every article that is downloaded and inserted into the DB, three model objects are created along the way. Would that be overkill?
Also, I know that the data layer should use different model classes than the domain layer (an inner layer should know nothing about an outer layer). But how would that make sense in the above example? I would already have two different model classes for the two data sources. Adding a third one that's used as a 'mediator' model by the data layer/repository to handle the mapping to the other models (remote, local, domain) would add even more allocations.
So should the data layer know nothing about domain models and let the domain do the mapping from a data layer model to a domain layer model?
Should there be a generic model used only by the repository/data-layer?
Thanks, I really appreciate any help from more experienced developers :)
Data layer: the data layer classes are responsible for getting the data. There can be multiple implementations (the 'S' in SOLID, i.e. single responsibility), and the caller can provide a particular implementation to get its work done in a different way.
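A sketch of what those multiple implementations could look like; the interface and class names are illustrative, and the bodies are stubbed out:

    interface ArticleDataSource {
        fun getArticles(): List<Article>
    }

    class RemoteArticleDataSource : ArticleDataSource {
        override fun getArticles(): List<Article> =
            TODO("fetch RemoteArticleModels via Retrofit and map them to Article")
    }

    class LocalArticleDataSource : ArticleDataSource {
        override fun getArticles(): List<Article> =
            TODO("query RealmArticleModels and map them to Article")
    }

A caller can be handed either implementation behind the same interface.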
With Clean Architecture, the Domain and Application layers are at the centre of the design. This is known as the Core of the application. The Domain layer contains the enterprise logic and types, and the Application layer contains the business logic and types.
Entity: in Clean Architecture, an entity embodies the business logic. It is different from the entity in domain-driven design; the entity here corresponds roughly to the domain in DDD. Use cases: the layer around the domain is the use cases, which represent clients that use domain knowledge to fulfill specific needs.
The overriding principle you should follow is separation of concerns.
The persistence layer should have classes that only deal with the storing and retrieval of data, in this case the Realm classes.
The network layer should have classes that deal with the data from the server, in this case the Retrofit classes.
Moving data from any of those layers to your business layers requires you to map the persistence and network objects to your domain.
To answer your first question: the mapping insulates the different concerns from one another, separating the domain from the data layers. The data layer should not know the domain models. The domain requests data from the data layer; the data layer gets the data, passes it through a mapper, and thus returns the domain model.
To answer your second question: it would be a violation of the separation of concerns to have a generic model for your data layers if you get the data from different sources. The persistence models and the network models represent different parts of the system, and therefore should be represented by different models. The domain does not need to know this, so any data requested should be mapped to domain objects before crossing the boundary back into the domain.
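A minimal sketch of such a mapper, reusing the model classes from the question; the class and method names are illustrative:

    // Lives at the boundary between the data layer and the domain
    class ArticleMapper {
        fun fromRemote(remote: RemoteArticleModel): Article =
            Article(remote.title, remote.urlToImage, remote.url)

        fun fromRealm(db: RealmArticleModel): Article =
            Article(db.title, db.urlToImage, db.url)
    }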
Adding to @Brian's answer, you could also encapsulate the Data layer as the Clean Boilerplate suggests:
That way you have a common data model which is mapped to the domain model. I'm not really sure whether this just adds unnecessary code and layers, because the data and domain models will then probably look pretty much the same.
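For illustration, such a common data model might look roughly like this (ArticleEntity and toDomain are hypothetical names). It also shows why the two models end up nearly identical:

    // Data-layer entity shared by the remote, cache, and DB sources
    data class ArticleEntity(
        val title: String,
        val urlToImage: String,
        val url: String
    )

    // Mapped once more at the domain boundary
    fun ArticleEntity.toDomain(): Article = Article(title, urlToImage, url)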