How much abstraction is too much?



The point of abstractions is to factor out common properties from the specific ones, like in the mathematical operation:

ab + ac => a(b + c)

Now you compute the same thing with two operations (one addition and one multiplication) instead of three. This factoring made our expression simpler.

A typical example of an abstraction is the file system. Suppose you want your program to be able to write to many kinds of storage devices: pen drives, SD cards, hard drives, and so on.

If we didn't have a file system, we would need to implement the direct disk-writing logic, the pen-drive-writing logic, and the SD-card-writing logic separately. But all of these have something in common: they create files and directories. That common part can be factored out into an abstraction layer, which then provides an interface for each hardware vendor to implement the device-specific parts.
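As a minimal sketch of that idea (the names here are hypothetical, not any particular OS API), the common logic is written once against an interface, and each device supplies only its specific writing code:

    from abc import ABC, abstractmethod

    class StorageDevice(ABC):
        """The abstraction: what all storage devices have in common."""

        @abstractmethod
        def write_block(self, address: int, data: bytes) -> None:
            """Device-specific writing logic, supplied per device."""

    class SdCard(StorageDevice):
        def write_block(self, address: int, data: bytes) -> None:
            print(f"SD card: writing {len(data)} bytes at block {address}")

    class PenDrive(StorageDevice):
        def write_block(self, address: int, data: bytes) -> None:
            print(f"Pen drive: writing {len(data)} bytes at block {address}")

    def create_file(device: StorageDevice, data: bytes) -> None:
        """The common 'create a file' logic, written once."""
        device.write_block(0, data)

    create_file(SdCard(), b"hello")
    create_file(PenDrive(), b"hello")

Adding a new device type means writing one more class; create_file never changes. That is the a(b + c) shape in code.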

The more things share a common property, the more beneficial an abstraction can be:

ab + ac + ad + ae + af

to:

a(b + c + d + e + f)

This reduces the nine operations (five multiplications and four additions) to five (one multiplication and four additions).

Basically each good abstraction roughly halves the complexity of a system.

You always need at least two things sharing a common property to make an abstraction useful. Of course you can tear a single thing apart so that it looks like an abstraction, but that does not make it useful:

10 => 5 * 2

You cannot define the word "common" if you have only one entity.
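A hedged illustration of such a useless "abstraction" (hypothetical names): an interface with exactly one implementation factors nothing out, it only adds a layer.

    from abc import ABC, abstractmethod

    class GreeterInterface(ABC):
        """An interface that will only ever have one implementation."""

        @abstractmethod
        def greet(self, name: str) -> str: ...

    class DefaultGreeter(GreeterInterface):
        def greet(self, name: str) -> str:
            return f"Hello, {name}!"

    # Nothing was factored out: there is no second implementation whose
    # common behavior the interface captures. A plain function would do:
    def greet(name: str) -> str:
        return f"Hello, {name}!"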

So, to answer your question: you have enough abstractions when they make your system as simple as possible.

(In my examples, addition connects the parts of the system, while multiplication defines an abstract-concrete relationship.)


How little is too little?

When you keep working with "low level" elements on a routine basis and you constantly feel like you don't want to be doing this. Abstract 'em away.

So when is it too much?

When you can't make sense of bits and pieces of the code on a regular basis and have to debug down through to the previous layer. You feel this particular layer does not contribute anything; it's just an obstacle. Drop it.

Where's the sweet spot?

I like to apply the pragmatic approach. If you see a need for an abstraction and understand how it will improve your life, go for it. If you've heard there should "officially" be an extra layer of abstraction but you're not clear why, don't do it; research first. If somebody insists on abstracting something but cannot clearly explain what it will bring, tell them to go away.


So when is it too much? At what point do the empty layers and extra "might need" abstractions become overkill? How little is too little? Where's the sweet spot?

I don't think there is a definitive answer to these questions. Experience is needed to develop a feeling of what is "too much" and "too little". Maybe the usage of some metric or quality control tools can help, but it's hard to generalize. It mostly depends on each case.

Here are a few links that might inspire you in the quest of answers:

  • You ain't gonna need it
  • The use/reuse paradox
  • Project triangle: good, fast, cheap
  • All problems in computer science can be solved by another level of indirection (David Wheeler)

Development is all about finding the right balance between the various tensions that are present in any software engineering effort.


In theory, it should be a matter of simple math using only three (fairly simple) variables:

  • S = savings from use
  • C = cost of the extra abstractions
  • P = probability of use

If S * P > C, then the code is good. If S * P < C, then it's bad.
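For example, with illustrative numbers: if an abstraction would save ten hours of work when used (S = 10), has roughly a 30% chance of ever being used (P = 0.3), and costs five hours to build and carry around (C = 5), then S * P = 3 < 5 = C, and the abstraction isn't worth it.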

The reason that's purely theoretical, however, is that you generally can't guess at the probability of use or the savings you'll get from it. Worse, you usually can't guess, or even measure, the cost of its presence.

At least some people have drawn a conclusion from this. In XP and TDD circles, the standard mantra is "you ain't gonna need it" (YAGNI). Simply put, anything that doesn't directly contribute toward the code meeting its current requirements is considered a bad thing. In essence, they've concluded that the probability of use is so low that including such extra code is never justified.

Some of this comes back to "bottom up" versus "top down" development. I tend to think of bottom-up development as "library development" -- that is, instead of developing a specific application, you're really developing libraries for the kinds of things you'll need in the application. The thinking is that with a good enough library, you can develop almost any application of that general type relatively easily.

Quite a bit also depends on the size of the project. Huge projects that stay in use for decades justify a lot more long-term investment than smaller projects that are discarded and replaced much more quickly. This has obvious analogs in real life as well. You don't worry nearly as much about the fit, finish, or workmanship in a disposable razor you'll throw away in less than a week as you do in something like a new car that you'll be using for the next few years.


Simply put, there is too much abstraction if the code is difficult to understand.

Now, this isn't to say that you should hard-code everything just because that's the easiest code to write and read.

The easiest test is to put the code down for a few days, pick it back up, and ask yourself whether it still makes sense. An even better approach is to give it to someone else and see if they can make heads or tails of it.


The reality is that it depends on how well you can see into the future. You want to plan for the changes you can foresee without creating too many extra layers. If your design transfers data between systems, go ahead and create an interface and use the planned implementation as the default; for example, you use FTP to move files around today but know the standard will be message-based (or whatever) next year.
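A minimal sketch of that seam (hypothetical names; a real FTP client would sit behind something like ftplib): callers code against the interface now, and the implementation is swapped when the standard changes.

    from abc import ABC, abstractmethod

    class FileTransfer(ABC):
        """The planned-for seam: how files move between systems."""

        @abstractmethod
        def send(self, path: str, destination: str) -> None: ...

    class FtpTransfer(FileTransfer):
        """Today's default implementation."""
        def send(self, path: str, destination: str) -> None:
            print(f"FTP: uploading {path} to {destination}")

    class MessageTransfer(FileTransfer):
        """Next year's message-based implementation drops in here."""
        def send(self, path: str, destination: str) -> None:
            print(f"Queue: publishing {path} for {destination}")

    def nightly_export(transfer: FileTransfer) -> None:
        transfer.send("report.csv", "partner-system")

    nightly_export(FtpTransfer())      # today
    nightly_export(MessageTransfer())  # after the switch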

As for layers within the design, sometimes added layers make it easier to write smaller classes. It's OK to add conceptual layers if it means the concrete classes become straightforward.


See item (6a) of RFC 1925 and know that it is indeed true. The only problems you can't fix by adding abstraction layers are those caused by having too many abstraction layers. (In particular, every added layer of abstraction makes the whole thing harder to understand.)