I'm just wondering what the best practice is for rewiring the bindings in a kernel.
I have a class with a kernel and a private class module with the default production bindings.
For tests I want to override these bindings so I can swap in my test doubles / mock objects.
Does calling

MyClass.Kernel.Load(new InlineModule(m => m.Bind<IDepend>().To<TestDoubleDepend>()))

override any existing bindings for IDepend?
I try to use the DI kernel directly in my code as little as possible, relying instead on constructor injection (or property injection in select cases, such as Attribute classes). Where I must use it, I go through an abstraction layer, so that the DI kernel object can be set, and therefore mocked, in unit tests.
For example:
using System;

public interface IDependencyResolver : IDisposable
{
    // Resolves the registered implementation of T.
    T GetImplementationOf<T>();
}

public static class DependencyResolver
{
    private static IDependencyResolver s_resolver;

    public static T GetImplementationOf<T>()
    {
        return s_resolver.GetImplementationOf<T>();
    }

    // Call at application startup (or in test setup) to supply the resolver.
    public static void RegisterResolver( IDependencyResolver resolver )
    {
        s_resolver = resolver;
    }

    public static void DisposeResolver()
    {
        s_resolver.Dispose();
    }
}
Using a pattern like this, you can set the IDependencyResolver from unit tests by calling RegisterResolver with a mock or fake implementation that returns whatever objects you want, without having to wire up full modules. It also has the secondary benefit of abstracting your code from a particular IoC container, should you choose to switch to a different one in the future.
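For instance, here is a minimal sketch of a fake resolver you might register from a test fixture; FakeDependencyResolver and TestDoubleDepend are illustrative names, not part of any library:

using System;
using System.Collections.Generic;

// Hypothetical test fake: hands back pre-registered instances by type.
public class FakeDependencyResolver : IDependencyResolver
{
    private readonly Dictionary<Type, object> _instances = new Dictionary<Type, object>();

    public void Register<T>( T instance )
    {
        _instances[typeof( T )] = instance;
    }

    public T GetImplementationOf<T>()
    {
        return (T)_instances[typeof( T )];
    }

    public void Dispose()
    {
        _instances.Clear();
    }
}

// In a test setup:
// var fake = new FakeDependencyResolver();
// fake.Register<IDepend>( new TestDoubleDepend() );
// DependencyResolver.RegisterResolver( fake );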
Naturally, you'd also want to add additional methods to IDependencyResolver as your needs dictate; I'm just including the basics here as an example. Yes, this would then require that you write a super simple wrapper around the Ninject kernel which implements IDependencyResolver as well.
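Something along these lines would do, as a rough sketch; I'm assuming Ninject's IKernel.Get<T>() here, and the exact namespace and overloads may vary with the Ninject version you're on:

using System;
using Ninject; // namespace may differ depending on Ninject version

// Thin adapter so the rest of the code depends on IDependencyResolver
// rather than on Ninject directly.
public class NinjectDependencyResolver : IDependencyResolver
{
    private readonly IKernel _kernel;

    public NinjectDependencyResolver( IKernel kernel )
    {
        _kernel = kernel;
    }

    public T GetImplementationOf<T>()
    {
        return _kernel.Get<T>();
    }

    public void Dispose()
    {
        _kernel.Dispose();
    }
}

// At application startup:
// DependencyResolver.RegisterResolver( new NinjectDependencyResolver( kernel ) );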
The reason you want to do this is that your unit tests should really only be testing one thing; by using your actual IoC container, you're exercising more than the one class under test, which can lead to false negatives that make your tests brittle and (more importantly) shake developers' faith in their accuracy over time. That can lead to test apathy and abandonment, since it becomes possible for tests to fail while the software still works correctly ("don't worry, that one always fails, it's not a big deal").