Archive for category CoffeeScript

Vim, CoffeeScript and the Node Repl

I was looking for a way to get IntelliSense-like features for JavaScript/CoffeeScript in Vim, but was disappointed by the “tags” plugins.

These plugins just give you some keywords about the language in question, but cannot really give you information about the object you are currently dealing with, because they never evaluate the code that created it.

Realistically, since JavaScript is a dynamic language, it would be hard for any tool to give proper support without actually running parts of your code. Fortunately we have repls for that. It’s the same idea as in Smalltalk (except there the repl is built into the IDE).

Usually when trying to figure things out, I run some code in the repl, and it can then tell me all kinds of information about objects that were created up to that point. This also includes all exports of modules – which is very useful.

Unfortunately, up until now it was a multi-step process to source my code in the repl in order to play with it.

I read this post, which explains how to set up a screen session in a unix terminal and then send text over the created socket. The author also implemented a Vim plugin called slime.vim. I forked it and added a CoffeeScript-specific feature which I will explain below.

Finally we can send code snippets from Vim directly to the repl without any cut and pasting overhead.

It is actually quite simple to set these things up.

  • Install slime.vim into your vim plugins folder
  • Open a terminal window and start a named screen e.g. “screen -S coffee”
  • Start a repl inside that screen e.g. coffee for a CoffeeScript repl
  • Open vim, select some code snippet in visual mode and press Ctrl-c twice
  • At the prompt give the screen name e.g. coffee and accept the given window name
  • The selected code should have been sent to the repl and you can now inspect the created objects

That’s it!

Unfortunately the coffee repl is not quite as nice as the node repl (especially when it comes to inspecting functions and properties of an object), which is why I extended the plugin a bit.

If you press Ctrl-C Ctrl-S after selecting a CoffeeScript code snippet, it will be compiled into JavaScript before it is sent to the screen. This means you can send your CoffeeScript code directly to a node repl and inspect things in there.

I have yet to figure out how to send a <Tab> signal over in order to trigger completion from inside Vim, so anyone with an idea, please comment!


Dependency Injection is dead, long live Verbs!

Disclaimer

This post evaluates a new approach to wiring our code together while staying decoupled. The reader should keep in mind that the author has not tried this approach in a larger project, and, as with any design pattern, there may be problems that only become apparent when it is used in a real world scenario.

Why another pattern?

There has been a recent interest in writing code in a more functional style – not only because it is more elegant and fun. Yet the most common patterns used to wire code together are tailored to a purely object-oriented paradigm.

After reading this post, I was convinced that it is time to re-evaluate how we are doing things when it comes to managing our dependencies, which currently is very Noun/object centric.

It is on us to leverage the opportunities that the current languages and libraries offer in order to improve the architecture of our applications.

What is the pattern all about?

I will focus on the possibilities that open up once we use a language that allows passing functions around the same way that we are used to with objects. As a result we can decrease coupling even more than is possible with traditional Dependency Injection.

I also want to point the reader to a post by my colleague Daniel Moore, who explored and implemented a similar pattern with streams using MEF and Rx.

I will give two examples of how to implement this verb centric pattern. One example will be using CoffeeScript. The other one will be in C# paired with MEF.

From here on I will refer to objects and dependencies as nouns and to functions as verbs.

I don’t want to call anyone

Let’s say I have some controller that needs to send a message entered by the user. There will be someone in my arsenal of nouns that knows how to do that, so I will inject the Emailer or IEmailer or something along those lines.

Then I can tell the emailer to send messages via:

emailer.Send(msg)

Now let’s assume that I need to receive a confirmation message. Well, I just tell the emailer to give it to me:

reply = emailer.Receive()

All good right? But do I really need to know who is taking care of things? Not really.
All I would have to know is what verb to use in order to get a certain thing done:

sendMessage(msg)
reply = receiveMessage()
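In plain JavaScript terms (a minimal sketch – makeController and the verb names are made up for illustration), the consumer receives only the verbs it needs, never the noun that owns them:

```javascript
// Hypothetical controller that only knows verbs, never the nouns behind them.
function makeController(sendMessage, receiveMessage) {
  return {
    converse(msg) {
      sendMessage(msg);        // no idea whether an emailer or a server handles this
      return receiveMessage(); // nor who produced the reply
    },
  };
}

// Wiring lives in one place; the controller never changes when it does.
const sent = [];
const controller = makeController(
  (msg) => sent.push(msg), // could just as well be emailer.send.bind(emailer)
  () => "confirmation"     // ...or server.receive.bind(server)
);

console.log(controller.converse("hello")); // "confirmation"
console.log(sent);                         // [ 'hello' ]
```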

What’s wrong with Dependency Injection?

What if we change responsibilities later? Let’s say we still send messages via email, but now receive them directly from the server. At this point I have to change all the code that tells the emailer to receive the message. I would probably inject a server and change the message receiving calls to something like:

reply = server.Receive()

Not a big deal, you think? What if I have to change it in 10 places? And don’t forget: since I follow good practices and write tests for my code, I now also need to change who does what in my test setup.
Let’s remind ourselves of a principle that the authors of Code Complete elaborate on in chapter 7.5. It goes something like this:

If I have a record with 20 fields and a method that only uses 3 of them, I should pass the 3 fields in as parameters instead of passing the entire record.

It makes sense: the less anyone knows about anyone else, the less coupled they are, so we shouldn’t spread more knowledge about how things are arranged than necessary.
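A tiny JavaScript sketch of that principle (the record and field names here are hypothetical):

```javascript
// A record with many fields...
const user = {
  id: 7,
  name: "Ada",
  email: "ada@example.com",
  city: "London",
  phone: "555-0100",
};

// Coupled: the function receives the whole record and silently
// depends on its entire shape, even though it uses two fields.
function greetingFromRecord(record) {
  return "Hello " + record.name + " <" + record.email + ">";
}

// Contained: the function only knows about what it actually uses.
function greeting(name, email) {
  return "Hello " + name + " <" + email + ">";
}

console.log(greeting(user.name, user.email)); // "Hello Ada <ada@example.com>"
```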

Facebook is the best example of what happens when we don’t follow this advice. Since they so very successfully spread the knowledge about who is related to whom, we are now somehow coupled to almost anyone in the world.

This is not desirable for software systems though.

To summarize:

Knowledge containment makes for better decoupling and this applies to everything needed by a part of a system.

Dependency Injection does not adhere to this, though, when it comes to telling a system how to do things. Instead we inject the entire dependency that happens to have the verb we need. All the system wanted was to know how to get something done, but instead we also told it who knows how to do it.

As a result dependency injection spreads more information than necessary and that is what’s wrong with it.

What do I need to know?

In the above example, I needed to know what verb to use when sending a message and that I have to give it a message to send. If I wanted to receive a message, I had to know the appropriate verb for that and that I’m expecting a message to be returned to me. This boils down to the following generalization which maps perfectly to our examples:

output(s)     verb   input(s)
              sendMessage(msg)
reply     =   receiveMessage()

The CoffeeScript Example

As explained in a previous post, I decided to re-implement the node-chat example in CoffeeScript in a BDD manner.

Once I had it all working using dependency injection, I decided to use verbs instead and see how it works out.

Using Dependency Injection

Originally the chat server init function looked like this:

exports.init = (fake = { }) ->

 sys = fake.sys or require "sys"
 router = fake.router or require("./lib/node-router")
 server = router.getServer()

 qs = fake.qs or require "querystring"
 url = fake.url or require "url"

 process = fake.process or global.process

As you can see, I allow the tests to fake certain dependencies. If a faked dependency was not present – e.g. when used in production – the appropriate node module was used.
In my tests, I injected the fakes like this:

  server_stub =
    gets: {}
    get: (id, callback) -> @gets[id]= callback; undefined
    listens_on: {}
    listen: (port, host) -> @listens_on= { port: port, host: host }

  @server_stub= server_stub
  @router_stub=
    getServer: -> server_stub
    staticHandler: (file) ->

  process_stub =
    memoryUsage: -> rss: mem_rss_stub
    env:
      PORT: port_stub

  @sut= require("./../server")
  @sut.init
    router: @router_stub
    sys: { puts: (msg) ->  } # stop sys.puts from cluttering up the test output
    process: process_stub

The bootstrapper was rather simple, as it had nothing to inject, since the real modules were to be used in this case.

chat_server = require "./server"
chat_server.init()

Routing calls like

server.get req, res { ... }

were spread throughout the chat server.

Using Verbs

As a proof of concept I refactored the chat server to use verbs where possible, so that it knows as little as possible about where anything is coming from and how the world is arranged.
The chat server now learns from the injected config how to do things, but is not made aware of who is responsible for doing them.

exports.init = (config) ->

  throw "Need to pass in config object that has required verbs like log, route, etc." unless config?

  # States 
  env          =  config.env

  # Verbs
  memoryUsage  =  ()            -> config.memoryUsage()

  log          =  (msg)         -> config.log msg
  route_static =  (file)        -> config.route_static file
  route        =  (req, res)    -> config.route req, res
  listen       =  (port, host)  -> config.listen port, host

  # Modules
  qs           =  require "querystring"
  url          =  require "url"

As you can see, there is still one noun being passed (env), but it is only used to query the state of the environment.
I formatted the code in the Verbs section in a way that clearly shows which verbs are used and what their inputs are. Anyone who needs to initialize the chat server can thus easily see what it needs to know. You can think of it as a config section.

In my tests I can easily set things up since now I just have to pass in certain functions instead of having to build up fakes.

  @listens_on= {}
  routes = {}

  @server_get= (method, req = { }) ->
    res = @res_stub
    routes["/#{method}"] req, res
    res

  @sut= require("./../server")
  @sut.init
    route_static: (file) ->
    route: (id, callback) -> routes[id]= callback; undefined
    listen: (port, host) => @listens_on= port: port, host: host
    log:  (msg) -> # don't clutter up the test output
    memoryUsage: -> rss: mem_rss_stub
    env: PORT: port_stub

The calls to server.get req, res { ... } were replaced with route req, res { ... } calls, so no knowledge of there even being a server is spread throughout the code anymore.
The bootstrapper takes on the responsibility of figuring out who does what in order to properly initialize the chat server.

sys = require "sys"
router = require("./lib/node-router")
server = router.getServer()

chat_server = require "./server"

chat_server.init

  # Nouns
  env:            process.env

  # Verbs
  memoryUsage:    process.memoryUsage
  log:            sys.puts
  route_static:   router.staticHandler
  route:          server.get
  listen:         server.listen

I also formatted it in a way that makes it read like a config section.
This is the only place where the knowledge of how things are arranged lives and thus changing things around later becomes easy.

Verbs are Aspect Oriented Programming friendly

When injecting verbs as described, I have much more control over how things get done, which allows me to quickly add or remove aspects concerning certain actions.
Let’s say I want to log which routing calls the chat server registers, but I don’t want to touch my router code.
All I have to do is introduce a more verbose routing function, which logs the information and then calls the original one, and pass that into init. Here is an abbreviated example:

verbose_route = (req, res) ->
  console.log "Routing request: ", req
  server.get req, res

chat_server.init
   [...]
  # Verbs
  route:          verbose_route
   [...]

Although this is a very simple example, it should give you an idea of how powerful this approach is and how much it simplifies extending your application.
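This wrapping can even be generalized into a higher-order function that adds a logging aspect to any verb (a sketch in plain JavaScript – withLogging is a made-up helper, not part of the original code):

```javascript
// Wrap any verb with a logging aspect without touching its implementation.
function withLogging(name, verb, log = console.log) {
  return (...args) => {
    log(`calling ${name}`);
    return verb(...args); // the original verb is invoked unchanged
  };
}

// Usage: decorate the route verb before handing it to init.
const logged = [];
const route = (req, res) => "routed";
const verboseRoute = withLogging("route", route, (line) => logged.push(line));

console.log(verboseRoute({ url: "/join" }, {})); // "routed"
console.log(logged);                             // [ 'calling route' ]
```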

Verbs and C# using MEF

I created a sample application that shows how to implement the same pattern in C# using MEF.

The main worker in the application is the runner:

[Export]
public class Runner
{
    private readonly Action<string> _print;
    private readonly Func<string> _read;
    private readonly Action<int> _shutDown;

    [ImportingConstructor]
    public Runner(
        [Import(Verbs.Print)] Action<string> print,
        [Import(Verbs.Read)] Func<string> read,
        [Import(Verbs.Shutdown)] Action<int> shutDown)
    {
        _print = print;
        _read = read;
        _shutDown = shutDown;
    }

    public void Run()
    {
        PrintStartupInfo();

        InteractWithUser();

        _shutDown(0);
    }

    private void InteractWithUser()
    {
        _print("Please enter your name: ");

        var name = _read();

        _print("Hello " + name);
        _print("Please press enter to shut down");

         _read();
    }

    private void PrintStartupInfo()
    {
        _print("The super verbs application has started.");
    }
}

As you can see, it gets the verbs injected as individual parameters instead of via a config. The Import statements tell MEF how to resolve the injected verbs (in case you need to read up on how this works, head on over here).

But where are these verbs actually coming from? Well, the Runner doesn’t know, nor is it supposed to. To tell you the truth, it doesn’t matter. We just need to make sure that someone is exporting them so that MEF can resolve them.

It so happens that we have a UserInterface that knows how to read and write:

public class UserInterface
{
    [Export(Verbs.Print)]
    public void Print(string message)
    {
        Console.WriteLine(message);
    }

    [Export(Verbs.Read)]
    public string Read()
    {
        return Console.ReadLine();
    }
}

and the ApplicationManager knows how to shut down the application:

public class ApplicationManager
{
    private readonly Action<string> _print;

    [ImportingConstructor]
    public ApplicationManager([Import(Verbs.Print)] Action<string> print)
    {
        _print = print;
    }

    [Export(Verbs.Shutdown)]
    public void ShutDownApplication(int code)
    {
        _print("Shutting down ....");
        Environment.Exit(code);
    }
}

The Export statements use the same identifiers as the Import statements of the Runner, which allows MEF to hook everything together.
I could simply inline the strings for these identifiers, but I want to avoid introducing bugs due to typos.
Therefore I created constant identifiers in a Verbs class.

public static class Verbs
{
    public const string Print = "Verbs.Print";
    public const string Read = "Verbs.Read";
    public const string Shutdown = "Verbs.Shutdown";
}

Finally we need to tell MEF to wire things up. We do this in the Program.

public class Program
{
    private CompositionContainer _container;

    public static void Main(string[] args)
    {
        var p = new Program();
        p.Run();
    }

    public void Run()
    {
        Compose();

        var runner = _container.GetExport<Runner>().Value;
        runner.Run();
    }

    private void Compose()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        _container = new CompositionContainer(catalog);
        _container.ComposeParts(this);
    }
}

We create a container in the Compose() method in order to make MEF look up all of the exports in our assembly. Then we resolve the Runner, at which point MEF injects all of our exported verbs.
The full example is available here on the master branch.

Adding another Printer

In order to demonstrate how extensible our little application is, let’s assume we want to add a Debugger that sends a timestamped version of the message to the output window whenever someone prints it.
At the same time we want to keep printing to the console.
We can do this without changing a single line of code inside our Runner – I promise!

We need to introduce a method that, when called, will aggregate and then invoke every method that claims to know how to print (via the appropriate Export). We will use MEF’s ImportMany feature to accomplish this task.
For simplicity, let’s just slap this method onto the ApplicationManager – we can always move it later, since no one will be aware of where it lives.

public class ApplicationManager
{
    private readonly IEnumerable<Action<string>> _printToMany;

    [ImportingConstructor]
    public ApplicationManager(
        [ImportMany(Verbs.CompositePrint, AllowRecomposition = true)] 
        IEnumerable<Action<string>> printToMany)
    {
        _printToMany = printToMany;
    }

    [Export(Verbs.Shutdown)]
    public void ShutDownApplication(int code)
    {
        ApplicationPrint("Shutting down ....");
        Environment.Exit(code);
    }

    [Export(Verbs.Print)]
    public void ApplicationPrint(string msg)
    {
        foreach (var print in _printToMany)
        {
            print(msg);
        }
    }
}

It now exports its ApplicationPrint method under the Verbs.Print identifier that the Runner knows about. When invoked, it finds all print methods that were exported under the new Verbs.CompositePrint identifier and invokes them one after the other.
Since it exports itself under the same identifier that the UserInterface previously used to export its Print method, it ends up replacing it.

There are two things left to do:

First we need to update the print method in our UserInterface to export itself under Verbs.CompositePrint (this is an extra verb we add to our Verbs class).

public class UserInterface
{
    [Export(Verbs.CompositePrint)]
    public void Print(string message)
    {
        Console.WriteLine(message);
    }

    [Export(Verbs.Read)]
    public string Read()
    {
        return Console.ReadLine();
    }
}

Secondly, we introduce the Debugger, which exports a print method with the same identifier.

public class Debugger
{
    [Export(Verbs.CompositePrint)]
    public void Print(string message)
    {
        Debug.WriteLine(DateTime.Now + " - " + message);
    }
}

As a result, whenever the Runner prints a message, it ends up calling the ApplicationPrint method, which in turn calls the print methods on the UserInterface and the Debugger with the passed message. As promised, the Runner didn’t change and is totally oblivious to the new way things are done now.
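For readers who prefer the CoffeeScript/JavaScript side of this post: the same composite-verb trick needs no container at all. A sketch with made-up names:

```javascript
// A composite verb: one print that fans out to every registered printer.
function makeCompositePrint(printers) {
  return (msg) => printers.forEach((print) => print(msg));
}

const consoleLines = []; // stands in for Console.WriteLine
const debugLines = [];   // stands in for Debug.WriteLine

const print = makeCompositePrint([
  (msg) => consoleLines.push(msg),
  (msg) => debugLines.push(new Date().toISOString() + " - " + msg),
]);

// The runner only ever sees the single print verb.
print("The super verbs application has started.");

console.log(consoleLines);      // [ 'The super verbs application has started.' ]
console.log(debugLines.length); // 1
```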

This version of the application is available here on the multiprint branch.
