Dependency Injection is dead, long live Verbs!

Disclaimer

This post evaluates a new approach to wiring our code together while staying decoupled. The reader should keep in mind that the author has not tried this approach in a larger project, and as with any design pattern there may be problems that only become apparent when it is used in a real-world scenario.

Why another pattern?

There has been a recent surge of interest in writing code in a more functional style, and not only because it is more elegant and fun. Yet the most common patterns used to wire code together are still tailored to a purely object-oriented paradigm.

After reading this post, I was convinced that it is time to re-evaluate how we are doing things when it comes to managing our dependencies, which is currently very noun/object-centric.

It is up to us to leverage the opportunities that current languages and libraries offer in order to improve the architecture of our applications.

What is the pattern all about?

I will focus on the possibilities that open up once we use a language that allows passing functions around the same way we are used to with objects. As a result, we can decrease coupling even further than is possible with traditional Dependency Injection.

I also want to point the reader to a post by my colleague Daniel Moore, who explored and implemented a similar pattern with streams using MEF and Rx.

I will give two examples of how to implement this verb-centric pattern. One example will use CoffeeScript; the other will be in C# paired with MEF.

From here on, I will refer to objects and dependencies as nouns and to functions as verbs.

I don’t want to call anyone

Let’s say I have some controller that needs to send a message entered by the user. There will be someone in my arsenal of nouns that knows how to do that, so I will inject the Emailer or IEmailer or something along those lines.

Then I can tell the emailer to send messages via:

emailer.Send(msg)

Now let’s assume that I need to receive a confirmation message. Well, I just tell the emailer to give it to me:

reply = emailer.Receive()

All good, right? But do I really need to know who is taking care of things? Not really.
All I would have to know is what verb to use in order to get a certain thing done:

sendMessage(msg)
reply = receiveMessage()
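
To make this concrete in a typed language, here is a minimal sketch of such a controller in C#; the MessageController name and the string-based messages are invented for illustration, not taken from a real codebase:

using System;

public class MessageController
{
    private readonly Action<string> _sendMessage;
    private readonly Func<string> _receiveMessage;

    // The controller is handed only the verbs it needs; it never
    // learns who implements them.
    public MessageController(Action<string> sendMessage, Func<string> receiveMessage)
    {
        _sendMessage = sendMessage;
        _receiveMessage = receiveMessage;
    }

    public string SendAndConfirm(string msg)
    {
        _sendMessage(msg);
        return _receiveMessage();
    }
}

Whether those verbs are backed by an emailer, a server, or a test stub is decided entirely by whoever constructs the controller.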

What’s wrong with Dependency Injection?

What if we change responsibilities later? Let's say we still send messages via email, but now receive them directly from the server. At this point I have to change all the code that tells the emailer to receive the message. I would probably inject a server and change the message-receiving calls to something like:

reply = server.Receive()

Not a big deal, you think? What if I have to change it in 10 places? And don't forget: since I follow good practices and write tests for my code, I now also need to change who does what in my test setup.
Let's remind ourselves of a principle that the author of Code Complete elaborates on in chapter 7.5. It goes something like this:

If I have a record with 20 fields on it and a method that only uses 3 of them, I should pass the 3 fields in as parameters instead of passing the entire record.

It makes sense. The less anyone knows about anyone else, the less coupled they are, so we shouldn't spread more knowledge about how things are arranged than necessary.
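
To make the principle concrete, here is a minimal C# sketch; the Customer record and the discount rule are invented purely for illustration:

using System;

public class Customer
{
    public string Name;
    public DateTime SignupDate;
    public int OrderCount;
    // ... imagine 17 more fields here
}

public static class Pricing
{
    // Coupled: the method receives the whole record although it uses one field.
    public static decimal DiscountFor(Customer customer)
    {
        return customer.OrderCount > 10 ? 0.1m : 0m;
    }

    // Decoupled: the method only knows about the value it actually needs.
    public static decimal DiscountFor(int orderCount)
    {
        return orderCount > 10 ? 0.1m : 0m;
    }
}

The second overload can be tested and reused without ever constructing a Customer.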

Facebook is the best example of what happens when we don't follow this advice. Since they so very successfully spread the knowledge about who is related to whom, we are now somehow coupled to almost everyone in the world.

This is not desirable for software systems though.

To summarize:

Knowledge containment makes for better decoupling and this applies to everything needed by a part of a system.

Dependency Injection does not adhere to this, though, when it comes to telling a system how to do things. Instead, we inject the entire dependency that happens to have the verb we need. All the system wanted was to know how to do things, but instead we told it who knows how to do it as well.

As a result dependency injection spreads more information than necessary and that is what’s wrong with it.

What do I need to know?

In the above example, I needed to know what verb to use when sending a message, and that I have to give it a message to send. If I wanted to receive a message, I had to know the appropriate verb for that, and that I'm expecting a message to be returned to me. This boils down to the following generalization, which maps perfectly to our examples:

output(s)     verb   input(s)
              sendMessage(msg)
reply     =   receiveMessage()
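
In a statically typed language this generalization maps directly onto delegate types: verbs with inputs and no output become Action<...>, verbs that produce an output become Func<...>. Here is a minimal C# sketch (messages are plain strings here for simplicity); this is the same shape the MEF example later in this post relies on:

using System;

public static class VerbShapes
{
    public static void Main()
    {
        // inputs, no output: Action<...>
        Action<string> sendMessage = msg => Console.WriteLine("sending: " + msg);

        // output, no inputs: Func<...>
        Func<string> receiveMessage = () => "a reply";

        sendMessage("hello");
        string reply = receiveMessage();
        Console.WriteLine(reply);
    }
}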

The CoffeeScript Example

As explained in a previous post, I decided to re-implement the node-chat example in CoffeeScript in a BDD manner.

Once I had it all working using dependency injection, I decided to use verbs instead and see how it works out.

Using Dependency Injection

Originally the chat server init function looked like this:

exports.init = (fake = { }) ->

  sys = fake.sys or require "sys"
  router = fake.router or require("./lib/node-router")
  server = router.getServer()

  qs = fake.qs or require "querystring"
  url = fake.url or require "url"

  process = fake.process or global.process

As you can see, I allow the tests to fake certain dependencies. If a faked dependency was not present (e.g., in production), the appropriate node module was used.
In my tests, I injected the fakes like this:

  server_stub =
    gets: {}
    get: (id, callback) -> @gets[id]= callback; undefined
    listens_on: {}
    listen: (port, host) -> @listens_on= { port: port, host: host }

  @server_stub= server_stub
  @router_stub=
    getServer: -> server_stub
    staticHandler: (file) ->

  process_stub =
    memoryUsage: -> rss: mem_rss_stub
    env:
      PORT: port_stub

  @sut= require("./../server")
  @sut.init
    router: @router_stub
    sys: { puts: (msg) ->  } # stop sys.puts from cluttering up the test output
    process: process_stub

The bootstrapper was rather simple: it had to inject nothing, since the real modules were to be used in this case.

chat_server = require "./server"
chat_server.init()

Routing calls like

server.get req, res { ... }

were spread throughout the chat server.

Using Verbs

As a proof of concept I refactored the chat server to use verbs where possible, so that it knows as little as possible about where anything is coming from and how the world is arranged.
The chat server now learns from the injected config how to do things, but is not made aware of who is responsible for doing them.

exports.init = (config) ->

  throw "Need to pass in config object that has required verbs like log, route, etc." unless config?

  # States 
  env          =  config.env

  # Verbs
  memoryUsage  =  ()            -> config.memoryUsage()

  log          =  (msg)         -> config.log msg
  route_static =  (file)        -> config.route_static file
  route        =  (req, res)    -> config.route req, res
  listen       =  (port, host)  -> config.listen port, host

  # Modules
  qs           =  require "querystring"
  url          =  require "url"

As you can see, there is still one noun being passed (env), but it is only used to query the state of the environment.
I formatted the code in the Verbs section in a way that clearly shows which verbs are used and what their inputs are. Anyone who needs to initialize the chat server can thus easily see what it needs to know. You can think of it as a config section.

In my tests I can easily set things up since now I just have to pass in certain functions instead of having to build up fakes.

  @listens_on= {}
  routes = {}

  @server_get= (method, req = { }) ->
    res = @res_stub
    routes["/#{method}"] req, res
    res

  @sut= require("./../server")
  @sut.init
    route_static: (file) ->
    route: (id, callback) -> routes[id]= callback; undefined
    listen: (port, host) => @listens_on= port: port, host: host
    log:  (msg) -> # don't clutter up the test output
    memoryUsage: -> rss: mem_rss_stub
    env: PORT: port_stub

The calls to server.get req, res { ... } were replaced with route req, res { ... } calls and thus no knowledge of there even being a server is spread throughout the code anymore.
The bootstrapper takes on the responsibility of figuring out who does what in order to properly initialize the chat server.

sys = require "sys"
router = require("./lib/node-router")
server = router.getServer()

chat_server = require "./server"

chat_server.init

  # Nouns
  env:            process.env

  # Verbs
  memoryUsage:    process.memoryUsage
  log:            sys.puts
  route_static:   router.staticHandler
  route:          server.get
  listen:         server.listen

I also formatted it in a way that makes it read like a config section.
This is the only place where the knowledge of how things are arranged lives and thus changing things around later becomes easy.

Verbs are Aspect Oriented Programming friendly

When injecting verbs as described, I have much more control over how things get done, which allows me to quickly add or remove aspects concerning certain actions.
Let's say I want to log what routing calls the chat server registers, but I don't want to touch my router code.
All I have to do is introduce a more verbose routing function that logs information and then calls the original one, and pass that into init. Here is an abbreviated example:

verbose_route = (req, res) ->
  console.log "Routing request: ", req
  server.get req, res

chat_server.init
  # [...]
  # Verbs
  route:          verbose_route
  # [...]

Although this is a very simple example, it should give you an idea of how powerful this approach is and how much it simplifies extending your application.

Verbs and C# using MEF

I created a sample application that shows how to implement the same pattern in C# using MEF.

The main worker in the application is the runner:

[Export]
public class Runner
{
    private readonly Action<string> _print;
    private readonly Func<string> _read;
    private readonly Action<int> _shutDown;

    [ImportingConstructor]
    public Runner(
        [Import(Verbs.Print)] Action<string> print,
        [Import(Verbs.Read)] Func<string> read,
        [Import(Verbs.Shutdown)] Action<int> shutDown)
    {
        _print = print;
        _read = read;
        _shutDown = shutDown;
    }

    public void Run()
    {
        PrintStartupInfo();

        InteractWithUser();

        _shutDown(0);
    }

    private void InteractWithUser()
    {
        _print("Please enter your name: ");

        var name = _read();

        _print("Hello " + name);
        _print("Please press enter to shut down");

        _read();
    }

    private void PrintStartupInfo()
    {
        _print("The super verbs application has started.");
    }
}

As you can see, it gets the verbs injected as individual parameters instead of via a config. The Import statements tell MEF how to resolve the injected verbs (in case you need to read up on how this works, head on over here).

But where are these verbs actually coming from? Well, the Runner doesn't know, nor is it supposed to. To tell you the truth, it doesn't matter. We just need to make sure that someone is exporting them, so that MEF can resolve them.

It so happens that we have a UserInterface that knows how to read and write:

public class UserInterface
{
    [Export(Verbs.Print)]
    public void Print(string message)
    {
        Console.WriteLine(message);
    }

    [Export(Verbs.Read)]
    public string Read()
    {
        return Console.ReadLine();
    }
}

and the ApplicationManager knows how to shut down the application:

public class ApplicationManager
{
    private readonly Action<string> _print;

    [ImportingConstructor]
    public ApplicationManager([Import(Verbs.Print)] Action<string> print)
    {
        _print = print;
    }

    [Export(Verbs.Shutdown)]
    public void ShutDownApplication(int code)
    {
        _print("Shutting down ....");
        Environment.Exit(code);
    }
}

The Export statements use the same identifiers as the Import statements of the Runner. This allows MEF to hook everything together.
I could simply inline the strings for these identifiers, but I want to avoid introducing bugs due to typos.
Therefore I created constant identifiers in a Verbs class.

public static class Verbs
{
    public const string Print = "Verbs.Print";
    public const string Read = "Verbs.Read";
    public const string Shutdown = "Verbs.Shutdown";
}

Finally we need to tell MEF to wire things up. We do this in the Program.

public class Program
{
    private CompositionContainer _container;

    public static void Main(string[] args)
    {
        var p = new Program();
        p.Run();
    }

    public void Run()
    {
        Compose();

        var runner = _container.GetExport<Runner>().Value;
        runner.Run();
    }

    private void Compose()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        _container = new CompositionContainer(catalog);
        _container.ComposeParts(this);
    }
}

We create a container in the Compose() method in order to make MEF look up all of the exports in our assembly. Then we resolve the Runner, at which point MEF injects all of our exported verbs.
The full example is available here on the master branch.

Adding another Printer

In order to demonstrate how extensible our little application is, let's assume we want to add a Debugger that sends a timestamped version of the message to the output window whenever someone prints it.
At the same time we want to keep printing to the console.
We can do this without changing a single line of code inside our Runner – I promise!

We need to introduce a method that, when called, will aggregate and invoke every method that claims to know how to print (via the appropriate Export). We will use MEF's ImportMany feature to accomplish this task.
For simplicity, let's just slap this method onto the ApplicationManager; we can always move it later, since no one will be aware of where it lives.

public class ApplicationManager
{
    private readonly IEnumerable<Action<string>> _printToMany;

    [ImportingConstructor]
    public ApplicationManager(
        [ImportMany(Verbs.CompositePrint, AllowRecomposition = true)] 
        IEnumerable<Action<string>> printToMany)
    {
        _printToMany = printToMany;
    }

    [Export(Verbs.Shutdown)]
    public void ShutDownApplication(int code)
    {
        ApplicationPrint("Shutting down ....");
        Environment.Exit(code);
    }

    [Export(Verbs.Print)]
    public void ApplicationPrint(string msg)
    {
        foreach (var print in _printToMany)
        {
            print(msg);
        }
    }
}

It now exports its ApplicationPrint method under the Verbs.Print identifier that the Runner knows about. When invoked, it finds all print methods that were exported under the new Verbs.CompositePrint identifier and invokes them one after the other.
Since it exports itself under the same identifier that the UserInterface used to export its Print method previously, it ends up replacing it.

There are two things left to do:

First we need to update the Print method in our UserInterface to export itself as Verbs.CompositePrint (this is an extra verb we add to our Verbs class).

public class UserInterface
{
    [Export(Verbs.CompositePrint)]
    public void Print(string message)
    {
        Console.WriteLine(message);
    }

    [Export(Verbs.Read)]
    public string Read()
    {
        return Console.ReadLine();
    }
}

Second, we introduce the Debugger, which will export a print method with the same identifier.

public class Debugger
{
    [Export(Verbs.CompositePrint)]
    public void Print(string message)
    {
        Debug.WriteLine(DateTime.Now + " - " + message);
    }
}

As a result, whenever the Runner prints a message, it ends up calling the ApplicationPrint method, which in turn calls the print methods on the UserInterface and the Debugger with the passed message. As promised, the Runner didn't change and is totally oblivious to the new way that things are done now.

This version of the application is available here on the multiprint branch.

Comments
  1. #1 by Mauricio Scheffer on July 26, 2011 - 10:33 pm

    I really wonder if this should be called a “design pattern”; it’s just passing a function as a parameter, which functional programmers have been doing for decades. Moreover, functions that take functions as parameters already have a name: higher-order functions: http://en.wikipedia.org/wiki/Higher-order_function

    • #2 by Thorsten Lorenz on July 30, 2011 - 2:28 pm

      I see your point, but I don’t only examine the benefit of passing around functions here.
      I wanted to see how we could use this flexibility in order to wire up our systems in a more decoupled manner. It is basically like Dependency Injection, but on a much more granular level.

      Dependency Injection is considered a design pattern (Quote: “In technical terms, it is a design pattern that separates behavior from dependency resolution, thus decoupling highly dependent components …” – from http://en.wikipedia.org/wiki/Dependency_injection).
      Since what I am describing is expanding on it and is also a pattern that can be used to architect a system, I called it a design pattern as well.

      That being said, what we call it is not very important after all. What is more important is that we rethink how we can improve on doing things with the new tools/languages at our disposal.

  2. #3 by Nick. on July 29, 2011 - 4:38 pm

    Using functional programming constructs in our programs is certainly useful in places. But I am curious why dependency injection and simple object principles couldn’t achieve the same, indeed much simpler? Proxy/Decorator/Composite achieve all of the things you work through in this article, and seem simpler to me. IMO decoupling or loose coupling is a technique or principle that’s no different to something like OCP or SRP. I recently picked up a system built around attributes, and the static model was constructed at runtime – at compile time it was almost impossible to reason about the application – decoupling gone wild IMO. This technique here, while interesting (and I can see some uses), seems to introduce too many moving parts to achieve very little in reality.

    I always consider maintenance as one of the main driving forces of application design. Neat techniques, or “bag of tricks” are handled with care. I would say building a system like this would warrant some care.

    fwiw

    • #4 by Thorsten Lorenz on July 30, 2011 - 2:41 pm

      I agree with you. The more levels of indirection we build into our systems, the harder it gets to follow through.

      By the same token, it gives us so much more power and flexibility. Additionally, a developer who extends a part of a system may not always need to know where everything is coming from. It may just be enough if he knows what verbs to use in order to get the job done.

      Either way, I believe that the times when you could just F12 (ReSharper) yourself through your code are over. Programmers need to learn how to maintain extensible and decoupled systems.
      Making them less extensible for the sake of easier maintenance (which, by the way, is an oxymoron, since less extensibility means higher maintenance cost) is clearly not the way to go.

      We all know that with more power comes greater responsibility and the suggested pattern is no exception.
      Therefore, although I do not think that this is a “bag of tricks”, I agree that great care needs to be taken when it is applied to a system.

  3. #5 by chukked on August 1, 2011 - 8:17 pm

    Thanks for the effort, I like your article; it gave me a different angle to think about 🙂

  4. #6 by Christian on August 2, 2011 - 12:45 pm

    Your example is based on badly designed interfaces, since IEmailer doesn’t abstract enough.

    Another thing related to all this is Scala’s structural typing where you can define interfaces implicitly through specifying a required method/function signature.

  5. #7 by extravaganza on August 2, 2011 - 12:57 pm

    “If I have a record with 20 fields on it and a method that only uses 3 of them, I should pass the 3 fields in as parameters instead of passing the entire record.” – I don’t agree with this statement.
    What about 4 or 5 params, or maybe 11? Do you really think about creating a method with 11 params?

  6. #8 by anonymous coward on August 2, 2011 - 1:17 pm

    Cute idea, but has problems. First, the object may need to call a method that has the same name in two or more of its injected objects. Second, it reduces clarity. Sure, you could say sendMessage instead of emailer.sendMessage, but at each place you do that, you lose the clarity of seeing which object is doing the sending for you at the point where you actually do the sending.

    I also think you set up a straw man argument. You give the example of changing the ’emailer’ to a ‘server.’ However, if you initially give the object a name that reflects its interface, rather than its implementation (for example, call it a ‘messageService’ instead), then it doesn’t matter if you inject an emailer or a server, and you can leave the name as is and swap out implementations as needed.

  7. #9 by _Marreco_ on August 2, 2011 - 4:56 pm

    Nice article, congratulations. It’s a different approach and has its uses. But I have to disagree with the title. Dependency Injection is not intended to couple things, but to highly decouple them. Some aspects of DI have to be understood first. The concepts of “how to do” and “who knows how to do” can be achieved using a spec or API, like Java does. When I say “how to do” I think of interfaces, and the “who knows how to do” can be summarized with a factory/new design pattern. With DI I can inject instances into variables using an interface as a type. Who produces these instances, which implement the referenced interface, is up to the factory/producer implementation. As I can see, DI is beyond Verbs ’cause I can vary the “who knows how to do” depending on the context.

  8. #10 by Demis Bellot on August 2, 2011 - 5:40 pm

    This looks very much like a cross between the existing Facade pattern and the Interface Segregation Principle.

  9. #11 by Mark Seemann on August 2, 2011 - 6:14 pm

    The argument against DI presented here doesn’t relate to DI but to the use of Header Interfaces instead of Role Interfaces.

    Once you realize that a function pointer/delegate is just an anonymous Role Interface then it also follows that what is proposed here is still DI (nothing wrong with that, though). The problem with delegates (at least in the statically typed OO languages that I know of) is that they tend to lack unambiguous type. This is why, in the MEF example, attributes are required to distinguish the various Verbs.

    On the other hand, that’s only a hack to satisfy MEF. With Poor Man’s DI this approach can be composed just as easily as more ‘traditional’ DI – I’ve occasionally used this technique from time to time.

  10. #12 by Joseph Daigle on August 2, 2011 - 6:31 pm

    I think what you’re doing is rather interesting. The idea boils down to the fact that you’re sending a message somewhere. In pure object oriented programming, a method call is really nothing more than sending a message to some object. The name of the message is the name of the method, and the body is the parameters.

    If, via some infrastructure/framework, you then tie the message to some message handler in a 1-to-1 relationship, you could decouple the description of the message (which is just meta data) from the “who” or the code that handles the message.

    This is an example of the Command pattern or a more abstract version of the Request/Response pattern. With either pattern you could, for example, have some class which represents a particular message. In your code you construct an instance of this object and populate the required data. You then “send” this message via some infrastructure or framework. In the case of a Command, you typically don’t get or care about a response. In Request/Response you can wait for the Reply which comes in the form of another object.

    The underlying infrastructure and framework is responsible for marshaling the message to its handler.

    With all this in place, you’re left with one dependency: whatever infrastructure or framework component is responsible for “sending” messages. Call it a “message bus”.

    For an extremely advanced example of this which, even goes so far as to make the message endpoints distributed across services, check out http://www.nservicebus.com/.

  11. #13 by Luis Solano on August 2, 2011 - 6:38 pm

    Just my two cents:

    “What if we change responsibilities later? Let’s say we still send messages via email, but now receive them directly from the server. At this point I have to change all the code that tells the emailer to receive the message. I would probably inject a server and change the message-receiving calls to something like:

    reply = server.Receive()”

    Your client code is depending upon a concretion and not upon an abstraction. Your client code should depend on ‘something’ that sends messages and ONLY sends messages, and it shouldn’t care if it’s the ‘emailer’ or a server or whatever. This means that the client code is violating the Dependency Inversion Principle and the emailer is violating the Single Responsibility Principle because it sends AND receives emails.

    “To summarize:

    Knowledge containment makes for better decoupling and this applies to everything needed by a part of a system.

    Dependency Injection does not adhere to this though when it comes to telling a system how to do things. Instead we inject the entire dependency that happens to have the verb we need. All the system wanted was to know how to do things, but instead we told it who knows how to do it as well.

    As a result dependency injection spreads more information than necessary and that is what’s wrong with it.”

    I agree with the first sentence: the more decoupled the better. I think that the mistake here is to blame DI for spreading more information than necessary. DI spreads as much information as the dependencies have. If those dependencies fulfill the Single Responsibility Principle (or the Interface Segregation Principle, the same thing, but they needed the ‘I’ for SOLID) they will only spread the necessary amount of information (I’d rather say interface) to their clients.

    In conclusion, breaking your classes down and passing smaller dependencies with smaller interfaces will avoid those problems.

    – Luis

  12. #14 by Nicolas on August 2, 2011 - 8:23 pm

    A function and an interface with one method are the same. The latter is more verbose, but both have nearly the same functionality.

    I think Mark Seemann is right. Interfaces should be modeled around roles, not headers.

    Coupling some methods together in an interface has a strong meaning. If you take care to model by role and not by header, as Mark Seemann says, you’ll have a clearer design than with a bunch of functions.

    Using functions as parameters is obviously interesting if you only have one or two and want to be able to create lambdas on the fly (a little like anonymous classes inside Java code).

    If you need to wire more functionalities, regrouping them into roles will really help.

  13. #15 by rod on September 24, 2011 - 8:44 pm

    Very interesting article! Personally I’ve never found problems with DI, but I love the idea of binding and injecting functionality, rather than things that have that functionality (so coupling two things when you only want one).

    As in your disclaimer, I’m also suspicious how well this will work when an application starts to grow, but I’m curious enough to take it for a test drive, so I have knocked up a quick JS lib to do this…

    https://github.com/rodnaph/Binder

    … and am trying it out in a small app I’m building to see how well the ideas here play out. Fingers crossed.

    I agree with all the points about proper architecture removing the problems you mention are created by DI, but as I said I’ve never really found problems with it – but it would be nice if we can take it a step further.

    • #16 by Thorsten Lorenz on September 24, 2011 - 10:15 pm

      Thanks for the feedback. I’m very interested in how your test drive works out, especially once the project grows in size.
      I’m going to be watching your project.

      Please let us know how the concepts worked for you in practice.
