
Should I avoid using Dependency Injection and IoC?

In my mid-size project I used static classes for repositories, services, etc., and it actually worked very well, even though most programmers would expect the opposite. My codebase was very compact, clean and easy to understand. Now I have tried to rewrite everything to use IoC (Inversion of Control), and I was absolutely disappointed. I have to manually initialize dozens of dependencies in every class, controller, etc., add more projects for interfaces, and so on. I really don't see any benefit in my project, and it seems to cause more problems than it solves. I found the following drawbacks in IoC/DI:

  • much bigger code size
  • ravioli-code instead of spaghetti-code
  • slower performance; I need to initialize all dependencies in the constructor even if the method I want to call has only one dependency
  • harder to understand when no IDE is used
  • some errors are pushed to run-time
  • adding an additional dependency (the DI framework itself)
  • new staff have to learn DI first in order to work with it
  • a lot of boilerplate code, which is bad for creative people (for example, copying constructor arguments to properties...)

We do not test the entire codebase, only certain methods, and we use a real database. So, should Dependency Injection be avoided when no mocking is required for testing?

Asked Sep 21 '16 by Mark Smith


2 Answers

The majority of your concerns seem to boil down to either misuse or misunderstanding.

  • much bigger code size

    This is usually a result of properly respecting both the Single Responsibility Principle and the Interface Segregation Principle. Is it drastically bigger? I suspect it's not as large as you claim. What it does do is boil classes down to specific functionality, rather than leaving "catch-all" classes that do anything and everything. In most cases this is a sign of healthy separation of concerns, not an issue.

  • ravioli-code instead of spaghetti-code

    Once again, this is most likely causing you to think in stacks instead of hard-to-see dependencies. I think this is a great benefit since it leads to proper abstraction and encapsulation.

  • slower performance

    Just use a fast container. My favorites are SimpleInjector and LightInject.

  • need to initialize all dependencies in the constructor even if the method I want to call has only one dependency

    Once again, this is a sign that you are violating the Single Responsibility Principle. This is a good thing because it is forcing you to logically think through your architecture rather than adding willy-nilly.

  • harder to understand when no IDE is used
  • some errors are pushed to run-time

    If you are STILL not using an IDE, shame on you. There's no good argument against using one on modern machines. In addition, some containers (SimpleInjector) will validate the configuration on first run if you so choose, and you can easily catch wiring mistakes with a simple unit test (see the sketch after this list).

  • adding an additional dependency (the DI framework itself)

    You have to pick and choose your battles. If the cost of learning a new framework is less than the cost of maintaining spaghetti code (and I suspect it will be), then the cost is justified.

  • new staff have to learn DI first in order to work with it

    If we shy away from new patterns, we never grow. I think of this as an opportunity to enrich and grow your team, not a way to hurt them. Besides, the alternative is learning the existing spaghetti code, which might be far more difficult than picking up an industry-wide pattern.

  • a lot of boilerplate code, which is bad for creative people (for example, copying constructor arguments to properties...)

    This is plain wrong. Mandatory dependencies should always be passed in via the constructor. Only optional dependencies should be set via properties, and that should only be done in very specific circumstances, since it often violates the Single Responsibility Principle.

  • We do not test the entire codebase, only certain methods, and we use a real database. So, should Dependency Injection be avoided when no mocking is required for testing?

    I think this might be the biggest misconception of all. Dependency Injection isn't JUST for making testing easier. It is so you can glance at the signature of a class constructor and IMMEDIATELY know what is required to make that class tick. This is impossible with static classes since classes can call both up and down the stack whenever they like without rhyme or reason. Your goal should be to add consistency, clarity, and distinction to your code. This is the single biggest reason to use DI and it is why I highly recommend you revisit it.
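
To tie these points together, here is a minimal C# sketch (all type names are hypothetical) showing constructor injection of a mandatory dependency and a SimpleInjector composition root; the container.Verify() call mentioned above will throw at startup, or inside a simple unit test, if a registration is missing.

    using System;
    using SimpleInjector;

    // A hypothetical abstraction and its implementation.
    public interface IOrderRepository
    {
        void Save(Order order);
    }

    public class SqlOrderRepository : IOrderRepository
    {
        public void Save(Order order) { /* talk to the database here */ }
    }

    public class Order { }

    // One glance at the constructor tells you exactly what this class needs.
    public class OrderService
    {
        private readonly IOrderRepository repository;

        public OrderService(IOrderRepository repository)
        {
            if (repository == null) throw new ArgumentNullException(nameof(repository));
            this.repository = repository;
        }

        public void PlaceOrder(Order order)
        {
            this.repository.Save(order);
        }
    }

    // The composition root: wire everything up once and verify the configuration.
    public static class Bootstrapper
    {
        public static Container BuildContainer()
        {
            var container = new Container();

            container.Register<IOrderRepository, SqlOrderRepository>();
            container.Register<OrderService>();

            // Throws if anything cannot be resolved; call this at startup
            // or from a unit test to catch wiring mistakes early.
            container.Verify();

            return container;
        }
    }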

Answered by David L


Although IoC/DI is not some silver bullet that works in all cases, it is possible that you didn't apply it correctly. The set of principles behind Dependency Injection takes time to master, or at least, it sure did for me. When applied right, it can bring (among others) the following benefits:

  • Improved testability
  • Improved flexibility
  • Improved maintainability
  • Improved parallel development

From your question, I can already extract some things that might have gone wrong in your case:

I have to manually initialize dozens of dependencies in every class

This implies that each class you create is responsible for creating the dependencies it requires. This is an anti-pattern known as Control Freak. A class should not new up its dependencies itself. You might even have applied the Service Locator anti-pattern, where a class requests its dependencies by calling the container (or an abstraction that represents the container) to get a particular dependency. A class should simply declare the dependencies it requires as constructor arguments.
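
To illustrate the difference (with hypothetical names), the first two classes below show the Control Freak and Service Locator anti-patterns; the third declares its dependency as a constructor argument, which is what you want.

    public interface IReportRepository { }
    public class SqlReportRepository : IReportRepository { }

    // Hypothetical static locator, shown only to illustrate the anti-pattern.
    public static class ServiceLocator
    {
        public static T GetInstance<T>() where T : class { return null; }
    }

    // Control Freak: the class news up its own dependency; the coupling is
    // invisible from the outside and the implementation cannot be replaced.
    public class ReportServiceControlFreak
    {
        private readonly IReportRepository repository = new SqlReportRepository();
    }

    // Service Locator: the class pulls its dependency out of a locator; the
    // dependency is hidden from the constructor signature.
    public class ReportServiceWithLocator
    {
        private readonly IReportRepository repository =
            ServiceLocator.GetInstance<IReportRepository>();
    }

    // Constructor injection: the dependency is declared up front and the
    // caller (or the container) decides which implementation to supply.
    public class ReportService
    {
        private readonly IReportRepository repository;

        public ReportService(IReportRepository repository)
        {
            this.repository = repository;
        }
    }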

dozens of dependencies

This statement implies that you are violating the Single Responsibility Principle. This is actually not tied to IoC/DI; your old code probably already violated the Single Responsibility Principle, which makes it hard for other developers to understand and maintain. It's often hard for the original author to see why others struggle with the code, since what you wrote yourself tends to fit nicely in your own head. Testing classes that violate the SRP is often even harder. As a rule of thumb, a class should have half a dozen dependencies at most.

add more projects for interfaces and so on

This implies that you are violating the Reused Abstraction Principle. In general, the majority of components/classes in your application should be covered by a dozen abstractions or so. For instance, all classes that implement some use case probably deserve one single (generic) abstraction. Classes that implement queries also deserve one abstraction. For the systems that I write, 80% to 95% of my components (the classes that contain the application's behavior) are covered by 5 to 12 (mostly generic) abstractions. Most of the time you don't need to create a new project solely for the interfaces; I usually place those interfaces in the root of the same project.
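
As a concrete example of such a generic abstraction (the IQueryHandler naming is just one common convention, not something mandated here), every query in the system can reuse a single interface, so no new interface or project is needed per feature:

    using System.Collections.Generic;

    // One generic abstraction covers every query in the application.
    public interface IQueryHandler<TQuery, TResult>
    {
        TResult Handle(TQuery query);
    }

    // A specific query (just a message describing what to fetch)...
    public class GetUnshippedOrdersQuery
    {
        public int PageIndex { get; set; }
    }

    public class OrderSummary { }

    // ...and its handler. New queries reuse the same abstraction instead of
    // introducing yet another interface.
    public class GetUnshippedOrdersQueryHandler
        : IQueryHandler<GetUnshippedOrdersQuery, IReadOnlyList<OrderSummary>>
    {
        public IReadOnlyList<OrderSummary> Handle(GetUnshippedOrdersQuery query)
        {
            // Query the database here.
            return new List<OrderSummary>();
        }
    }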

much bigger code size

The amount of code you write will initially not be very different. The practice of Dependency Injection, however, only works great when you apply SOLID as well, and SOLID promotes small, focused classes with a single responsibility. This means you end up with many small classes that are easy to understand and easy to compose into flexible systems. And don't forget: we shouldn't strive to write less code, but rather more maintainable code.

However, with a good SOLID design and the right abstractions in place, I have actually found myself writing much less code than before. For instance, certain cross-cutting concerns (logging, audit trailing, authorization, etc.) can be applied by writing just a few lines of code in the infrastructure layer of the application, instead of being spread out throughout the complete application. It has even allowed me to do things that weren't feasible before, because they used to require sweeping changes throughout the entire code base, which was so time consuming that management didn't allow it.
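
As a sketch of how that can look (reusing the hypothetical IQueryHandler abstraction from the previous snippet, and assuming SimpleInjector's batch-registration and RegisterDecorator APIs), one generic decorator adds logging around every handler, and a single registration line in the composition root applies it application-wide:

    using System;
    using SimpleInjector;

    public interface IQueryHandler<TQuery, TResult>
    {
        TResult Handle(TQuery query);
    }

    // One decorator adds the logging concern around *every* query handler.
    public class LoggingQueryHandlerDecorator<TQuery, TResult>
        : IQueryHandler<TQuery, TResult>
    {
        private readonly IQueryHandler<TQuery, TResult> decoratee;

        public LoggingQueryHandlerDecorator(IQueryHandler<TQuery, TResult> decoratee)
        {
            this.decoratee = decoratee;
        }

        public TResult Handle(TQuery query)
        {
            Console.WriteLine("Handling {0}", typeof(TQuery).Name);
            return this.decoratee.Handle(query);
        }
    }

    public static class CompositionRoot
    {
        public static Container Build()
        {
            var container = new Container();

            // Register all query handlers in this assembly against the open generic interface.
            container.Register(typeof(IQueryHandler<,>),
                new[] { typeof(CompositionRoot).Assembly });

            // One line in the infrastructure layer applies logging everywhere.
            container.RegisterDecorator(typeof(IQueryHandler<,>),
                typeof(LoggingQueryHandlerDecorator<,>));

            return container;
        }
    }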

ravioli-code instead of spaghetti-code

harder to understand when no IDE is used

This is kind of true. Dependency Injection promotes classes that are decoupled from one another. This can sometimes make it harder to browse a code base, since a class usually depends on an abstraction instead of a concrete class. In my experience, though, the flexibility that DI gives me far outweighs the cost of finding the implementation. With Visual Studio 2015 I can simply press CTRL + F12 to find the implementations of an interface, and if there is just one implementation, Visual Studio jumps right to it.

slower performance

This is not true. The performance doesn't have to be any different from working with a code base of only static method calls. You, however, chose to give your classes a Transient lifestyle, which means you new up instances all over the place. In my last applications I create all my classes just once per application, which gives roughly the same performance as having only static method calls, but with the benefit of the application being very flexible and maintainable. And note that even if you decide to new up complete graphs of objects for each (web) request, the performance cost will most likely be orders of magnitude lower than that of any I/O (database, file system and web service calls) you perform during that request, even with the slowest DI containers.
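
For instance, with SimpleInjector the lifestyle is a one-line decision in the composition root (the types reused here are the hypothetical ones from the earlier sketch); components registered as Lifestyle.Singleton are created once per application, so object creation disappears from the hot path:

    using SimpleInjector;

    public static class SingletonBootstrapper
    {
        // IOrderRepository, SqlOrderRepository and OrderService are the
        // hypothetical types from the earlier sketch.
        public static Container Build()
        {
            var container = new Container();

            // Each component is created exactly once per application, which makes
            // resolving it roughly as cheap as calling a static member.
            container.Register<IOrderRepository, SqlOrderRepository>(Lifestyle.Singleton);
            container.Register<OrderService>(Lifestyle.Singleton);

            container.Verify();
            return container;
        }
    }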

some errors are pushed to run-time

adding an additional dependency (the DI framework itself)

These issues both imply the use of a DI library. DI libraries do object composition at runtime, but a DI library is not a required tool for practicing Dependency Injection. Small applications can benefit from Dependency Injection without a tool; a practice called Pure DI. Your application might not benefit from using a DI container, but most applications do benefit from Dependency Injection (when used correctly) as a practice. Again: tools are optional, writing maintainable code isn't.
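
A minimal Pure DI sketch (hypothetical types): the entire object graph is composed by hand in a single place, no container involved, and a missing dependency shows up as a compile error instead of a runtime one.

    public interface IOrderRepository { void Save(Order order); }

    public class SqlOrderRepository : IOrderRepository
    {
        public void Save(Order order) { /* database access here */ }
    }

    public class Order { }

    public class OrderService
    {
        private readonly IOrderRepository repository;

        public OrderService(IOrderRepository repository)
        {
            this.repository = repository;
        }

        public void PlaceOrder(Order order)
        {
            this.repository.Save(order);
        }
    }

    public static class Program
    {
        public static void Main()
        {
            // The composition root: the only place that knows about concrete types.
            IOrderRepository repository = new SqlOrderRepository();
            var service = new OrderService(repository);

            service.PlaceOrder(new Order());
        }
    }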

But even if you use a DI library, some libraries have tools built in that allow you to verify and diagnose your configuration. They won't give you compile-time support, but they let you run this analysis either when the application starts up or from a unit test. This saves you from running a regression over the complete application just to verify that your container is wired correctly. My advice is to pick a DI container that helps you detect these configuration errors.

new staff have to learn DI first in order to work with it

This is kind of true, but Dependency Injection itself isn't actually hard to learn. What is actually hard to learn is applying the SOLID principles correctly, and you need to learn that anyway if you want to write applications that will be maintained by more than one developer over a considerable period of time. I would rather invest in teaching the developers on my team to write SOLID code than just let them crank out code; that would surely cause a maintenance hell later on.

a lot of boilerplate code

There is some boilerplate code when we look at code written in C# 6, but this isn't actually that bad, especially when you consider the advantages it gives. And future versions of C# will remove the boilerplate that is mainly caused by having to define constructors that take in arguments that are null-checked and assigned to private variables. C# 7 or 8 will surely fix this when record types and non-nullable reference types are introduced.
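
For reference, this is the kind of boilerplate meant here; in C# 6 a guarded constructor looks roughly like this (hypothetical class and interfaces), and nearly all of it is mechanical:

    using System;

    public interface IShipmentRepository { }
    public interface ILogger { }

    public class ShipmentService
    {
        private readonly IShipmentRepository repository;
        private readonly ILogger logger;

        // The boilerplate: every mandatory dependency is null-checked and
        // copied into a private read-only field.
        public ShipmentService(IShipmentRepository repository, ILogger logger)
        {
            if (repository == null) throw new ArgumentNullException(nameof(repository));
            if (logger == null) throw new ArgumentNullException(nameof(logger));

            this.repository = repository;
            this.logger = logger;
        }
    }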

which is bad for creative people

I'm sorry, but this argument is plain bullshit. I've seen it used over and over again as an excuse to write bad code by developers who didn't want to learn about design patterns, software principles and practices. Being creative is no excuse for writing code that no one else can understand or that is impossible to test. We need to apply accepted patterns and practices, and within those boundaries there is enough room to be creative while still writing good code. Writing code is not an art; it's a craft.

Like I said, DI is not appropriate in all cases, and the practices around it take time to master. I can advise you to read the book Dependency Injection in .NET by Mark Seemann; it will answer many of your questions and give you a good sense of how and when to apply DI, and when not to.

Answered by Steven