This is my last post on Wordpress.

My blog has moved here.

The reasons for the move are:

  • I want everything to be part of my website
  • I want to be able to write posts faster, which my new blog enables me to do via:
    • markdown support
    • built in code highlighting
    • external snippet support

In case you're interested, check out the blogging engine I created to enable all that:

NodeJs powered Developer blOGging engine.



Performance Concerns for Nested JavaScript Functions

Since I dabbled quite a bit in functional languages like Haskell, I came to like nested functions very much.

Searching on the net, I came across many posts and stackoverflow answers claiming that this has a performance impact since a function nested inside another has to be recreated every time the outer function is called.

After talking to my colleagues, who assured me that this is not true – at least not for nodejs and chrome (both of which use the V8 JavaScript engine) – I decided to find out for myself.

Simple Test

(source on github)

var calls = 99999999;

function notNested() {
    var start = new Date().getTime();

    function foo() { return 0; }

    function bar() { foo(); }

    for (var i = 0; i < calls; i++) { bar(); }

    console.log('Unnested took %s ticks', new Date().getTime() - start);
}

function nested() {
    var start = new Date().getTime();

    function bar() {
        function foo() { return 0; }
        foo();
    }

    for (var i = 0; i < calls; i++) { bar(); }

    console.log('Nested took %s ticks', new Date().getTime() - start);
}

function nestedReturning() {
    var start = new Date().getTime();

    var bar = (function () {
        function foo() { return 0; }

        return function () { foo(); };
    })();

    for (var i = 0; i < calls; i++) { bar(); }

    console.log('Nested returning took %s ticks', new Date().getTime() - start);
}

notNested();
nested();
nestedReturning();




(Please keep in mind that this is in no way intended to be a proper performance test, but merely a sanity check)


Running this with nodejs yields the following result:

➝  node nested-functions.js 
Unnested took 1606 ticks
Nested took 2316 ticks
Nested returning took 1614 ticks

Here we can see that, in this case, nesting a function causes the code to run about 1.4x as slow, while ensuring that the inner function is only created once by returning it from a wrapper is as performant as not nesting.

Somewhat of a difference there, but what about the different browsers?

In order to make testing these across browsers easier, and thanks to Charlie Robbins's advice (see comments), I created a performance test.

For those who prefer a visualization of the numbers:

(shows operations per second, i.e., higher is better)

If you use a browser that’s not listed yet, head on over to see how your browser is doing.

As this data shows, there is a considerable impact when “carelessly” nesting functions.

How considerable depends on the browser: Firefox and IE seem to punish us the most, while Chrome handles nesting best and suffers only about a 30% slowdown.

Returning the inner function from a wrapper, and thus ensuring it only gets created once, pretty much fixes the performance hit in all browsers.


  • nest functions, but do so wisely (e.g., as outlined in the ‘nestedReturning’ example)
  • be aware however that this pattern will increase the memory footprint of your application since the returned function closes over the inner one and thus prevents it from being garbage collected
  • if you are looking for the lowest footprint, highest performance option and can live with slightly less nicely structured code, declare functions at outer scope as much as possible

Note: make sure to use revision 3 of the above performance tests, as the initial ones were faulty and tested how lazily browsers evaluate functions instead of what they were supposed to test.


Logging to Growl from Haskell running on Lion

I am currently developing Haskell apps on a MacBook Air with Lion and wanted to enjoy the luxury of growling log messages.

This is a short write up on how I got this to work.

First thing needed is GrowlNotify for Lion, and of course Growl for Lion itself.

Next step is getting hslogger which can be installed via:

cabal install hslogger

hslogger actually includes a GrowlLogHandler, but I couldn’t get it to work. My guess is, that it only works with older versions of Growl.

This assumption is confirmed by looking at the source: it basically tries to set up a socket connection to the Growl app and has a hard-coded port number that differs from the one the current version of Growl suggests.

Fortunately we have the growlnotify tool and don’t have to figure out how to talk to the Growl app ourselves. Instead we call growlnotify with a given message and it takes care of all that grunt work for us.

Here is the LogHandler that accomplishes the above:

module System.Log.Handler.GrowlNotifyHandler (growlNotifyHandler) where

import System.Log (Priority)
import System.Log.Handler (LogHandler(..))
import System.Log.Formatter (nullFormatter, LogFormatter)
import System.Cmd (rawSystem)

data GrowlNotifyHandler = GrowlNotifyHandler
    { priority :: Priority
    , formatter :: LogFormatter GrowlNotifyHandler
    , appName :: String
    }

instance LogHandler GrowlNotifyHandler where
    setLevel gnh p = gnh { priority = p }

    getLevel = priority

    setFormatter gh f = gh { formatter = f }
    getFormatter = formatter

    emit gnh (prio, msg) _ = do
        rawSystem "growlnotify" ["-m", (show prio) ++ "\n" ++ msg]
        return ()
    close gnh     = return ()

growlNotifyHandler :: String -> Priority -> GrowlNotifyHandler
growlNotifyHandler service priority = GrowlNotifyHandler priority nullFormatter service

Make sure to place this inside “System/Log/Handler” relative to the root of your app.

Here is an example that uses this log handler in order to pop up a log message in growl:

import System.Log.Logger
import System.Log.Handler.GrowlNotifyHandler

main = do
    updateGlobalLogger "Main.Logger" (setLevel DEBUG)
    let hdlr = growlNotifyHandler "Main.Logger" DEBUG
    updateGlobalLogger rootLoggerName (addHandler hdlr)

    debugM "Main.Logger" "This shows in a growl message with Terminal Icon"

Assuming all goes well, you should see this when running it:



Bookmarklet to turn off jQuery animations

Animations on websites are usually nice, but not always.

One example, where an animation gets annoying is github. They sport a slide animation when going back and forward while browsing the code.

I found this annoying in general, but it became unacceptable once I started using Safari on Lion OS when browsing github repos.

As you know Safari already has a sliding animation when going backward or forward in the browser using the two finger gesture on the trackpad.

With the added github animation, you see two sliding animations right after each other.

Turns out a simple bookmarklet can help.

Since github is using jQuery, turning off all jQuery animations does the trick.

Unfortunately WordPress keeps expanding bookmarklet URLs, so you’ll have to follow these steps in order to install it:

  • Copy the code snippet below
  • Create a new bookmark and name it something like StopFx
  • Edit the bookmark and paste the copied text into the URL/Address field
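A minimal version of such a snippet, relying on jQuery's global fx.off switch, could look like this (my sketch, not necessarily the original):

```javascript
// Bookmarklet form – paste this whole line into the bookmark's URL field:
//   javascript:jQuery.fx.off=true;void(0);

// The statement it runs, shown here against a stand-in object so the example
// is self-contained; on github the page's real jQuery global would be used:
var jQuery = { fx: { off: false } };
jQuery.fx.off = true; // all jQuery animations now jump straight to their end state
console.log(jQuery.fx.off); // true
```

Setting jQuery.fx.off to true makes every .animate(), .slideUp(), etc. complete immediately instead of animating, which is exactly what we want here.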

Once you arrive at a github repo invoke the bookmarklet and enjoy distraction free source code browsing.

Of course this will work for any website with annoying jQuery animations.

In case you are interested in creating your own Bookmarklets, you can find more information here.

As a final note: in order to edit bookmarks in Safari, you have to select “Show All Bookmarks” from the “Bookmarks” menu. I am adding this because that was not at all obvious to me.


Vim, CoffeeScript and the Node Repl

I was looking for a way to get intellisense-like features for JavaScript/CoffeeScript in Vim, but was disappointed by the “tags” plugins.

These plugins just give you some keywords about the language in question, but cannot really give you information about the object you are currently dealing with because they never evaluate the code that created it.

Realistically, since JavaScript is a dynamic language, it would be hard for any tool to give proper support without actually running parts of your code. Fortunately we have repls for that. It’s the same idea as in Smalltalk (except there the repl is built into the IDE).

Usually when trying to figure things out, I run some code in the repl, and it can then tell me all kinds of information about objects that were created up to that point. This also includes all exports of modules – which is very useful.

Unfortunately up until now it was a multi step process to source my code in the repl in order to play with it.

I read this post, which explains how to set up a screen session in a unix terminal and then send text over the created socket. The author also implemented a Vim plugin called slime.vim. I forked it and added a CoffeeScript specific feature, which I will explain below.

Finally we can send code snippets from Vim directly to the repl without any cut and pasting overhead.

It is actually quite simple to set these things up.

  • Install slime.vim into your vim plugins folder
  • Open a terminal window and start a named screen e.g. “screen -S coffee”
  • Start a repl inside that screen e.g. coffee for a CoffeeScript repl
  • Open vim, select some code snippet in visual mode and press Ctrl-c twice
  • At the prompt give the screen name e.g. coffee and accept the given window name
  • The selected code should have been sent to the repl and you can now inspect the created objects

That’s it!

Unfortunately the coffee repl is not quite as nice as the node repl (especially when it comes to inspecting functions and properties of an object), which is why I extended the plugin a bit.

If you press Ctrl-C Ctrl-S after selecting a CoffeeScript code snippet, it will be compiled into JavaScript before it is sent to the screen. This means you can directly send your CoffeeScript code to a node repl and inspect things in there.

I have yet to figure out how to send a <Tab> signal over in order to trigger completion from inside Vim, so anyone with an idea, please comment!


Dependency Injection is dead, long live Verbs!


This post evaluates a new approach to wiring our code together while staying decoupled. The reader should keep in mind that the author has not tried this approach in a larger project, and as with any design pattern there may be problems that only become apparent in a real world scenario.

Why another pattern?

There has been a recent interest in writing code in a more functional style – not only because it is more elegant and fun. Yet most common patterns that are used to wire the code together lend themselves very well to a pure object oriented paradigm.

After reading this post, I was convinced that it is time to re-evaluate how we are doing things when it comes to managing our dependencies, which currently is very Noun/object centric.

It is on us to leverage the opportunities that the current languages and libraries offer in order to improve the architecture of our applications.

What is the pattern all about?

I will focus on the possibilities that open up once we use a language that allows passing functions around the same way that we are used to with objects. As a result we can decrease coupling even more than is possible with traditional Dependency Injection.

I also want to point the reader to a post by my colleague Daniel Moore, who explored and implemented a similar pattern with streams using MEF and Rx.

I will give two examples of how to implement this verb centric pattern. One example will be using CoffeeScript. The other will be in C# paired with MEF.

From here on I will refer to objects and dependencies as nouns and to functions as verbs.

I don’t want to call anyone

Let’s say I have some controller that needs to send a message entered by the user. There will be someone in my arsenal of nouns that knows how to do that, so I will inject the Emailer or IEmailer or something along those lines.

Then I can tell the emailer to send messages via:

emailer.Send(message)
Now let’s assume that I need to receive a confirmation message. Well, I just tell the emailer to give it to me:

reply = emailer.Receive()

All good right? But do I really need to know who is taking care of things? Not really.
All I would have to know is what verb to use in order to get a certain thing done:

reply = receiveMessage()

What’s wrong with Dependency Injection?

What if we change responsibilities later? Let’s say we still send messages via email, but now receive them directly from the server. At this point I have to change all the code that tells the emailer to receive the message. I would probably inject a server and change the message receiving calls to something like:

reply = server.Receive()

Not a big deal, you think? What if I have to change it in 10 places? And don’t forget: since I follow good practices and write tests for my code, I now also need to change who does what in my test setup.
Let’s remind ourselves of a principle that the author of Code Complete elaborates on in chapter 7.5. It goes something like this:

If I have a record with 20 fields and a method that only uses 3 of them, I should pass the 3 fields in as parameters instead of passing the entire record.

It makes sense. The less anyone knows about anyone else, the less coupled they are, so we shouldn’t spread more knowledge about how things are arranged than necessary.
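As a hypothetical JavaScript sketch of that principle (the record and field names are made up for illustration):

```javascript
// Coupled: the function receives the whole record and thus knows its shape.
function fullNameFromRecord(employee) {
  return employee.title + " " + employee.first + " " + employee.last;
}

// Decoupled: the function only knows about the three values it actually uses.
function fullName(title, first, last) {
  return title + " " + first + " " + last;
}

var employee = { first: "Ada", last: "Lovelace", title: "Ms.", department: "Engines" /* ...16 more fields */ };

// The caller unpacks; fullName never learns how the record is arranged.
console.log(fullName(employee.title, employee.first, employee.last)); // "Ms. Ada Lovelace"
```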

Facebook is the best example of what happens when we don’t follow this advice. Since they so very successfully spread the knowledge about who is related to whom, we are now somehow coupled to almost anyone in the world.

This is not desirable for software systems though.

To summarize:

Knowledge containment makes for better decoupling and this applies to everything needed by a part of a system.

Dependency Injection does not adhere to this, though, when it comes to telling a system how to do things. Instead we inject the entire dependency that happens to have the verb we need. All the system wanted was to know how to do things, but instead we also told it who knows how to do them.

As a result dependency injection spreads more information than necessary and that is what’s wrong with it.

What do I need to know?

In the above example, I needed to know what verb to use when sending a message and that I have to give it a message to send. If I wanted to receive a message, I had to know the appropriate verb for that and that I’m expecting a message to be returned to me. This boils down to the following generalization which maps perfectly to our examples:

output(s)     verb   input(s)
reply     =   receiveMessage()
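Translated into plain JavaScript, the consumer then only ever holds verb references; a sketch (makeController, receiveMessage and the emailer are hypothetical stand-ins, not from the chat server code below):

```javascript
// The controller is constructed with the verbs it needs and nothing else;
// it never learns which noun implements them.
function makeController(receiveMessage) {
  return {
    checkInbox: function () {
      var reply = receiveMessage(); // output = verb(inputs)
      return reply;
    }
  };
}

// Wiring happens in one place. Today the verb is backed by an emailer...
var emailer = { receive: function () { return "confirmation via email"; } };
var controller = makeController(function () { return emailer.receive(); });

console.log(controller.checkInbox()); // "confirmation via email"

// ...tomorrow it could be backed by a server object – the controller is untouched.
```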

The CoffeeScript Example

As explained in a previous post, I decided to re-implement the node-chat example in CoffeeScript in a BDD manner.

Once I had it all working using dependency injection, I decided to use verbs instead and see how it works out.

Using Dependency Injection

Originally the chat server init function looked like this:

exports.init = (fake = { }) ->

  sys = fake.sys or require "sys"
  router = fake.router or require("./lib/node-router")
  server = router.getServer()

  qs = fake.qs or require "querystring"
  url = fake.url or require "url"

  process = fake.process or global.process

As you can see, I allow the tests to fake certain dependencies. If a faked dependency was not present – e.g. when used in production – the appropriate node module was used.
In my tests, I injected the fakes like this:

  server_stub =
    gets: {}
    get: (id, callback) -> @gets[id] = callback; undefined
    listens_on: {}
    listen: (port, host) -> @listens_on = { port: port, host: host }

  @router_stub =
    getServer: -> server_stub
    staticHandler: (file) ->

  process_stub =
    memoryUsage: -> rss: mem_rss_stub
    env: PORT: port_stub

  @sut = require("./../server").init
    router: @router_stub
    sys: { puts: (msg) -> }  # stop sys.puts from cluttering up the test output
    process: process_stub

The bootstrapper was rather simple, as it had to inject nothing – the real modules were to be used in this case.

chat_server = require "./server"
chat_server.init()

Routing calls like

server.get req, res { ... }

were spread throughout the chat server.

Using Verbs

As a proof of concept I refactored the chat server to use verbs where possible and thus have it know the least about where anything is coming from and how the world is arranged.
The chat server now learns from the injected config how to do things, but is not made aware of who is responsible for doing them.

exports.init = (config) ->

  throw "Need to pass in config object that has required verbs like log, route, etc." unless config?

  # States 
  env          =  config.env

  # Verbs
  memoryUsage  =  ()            -> config.memoryUsage()

  log          =  (msg)         -> config.log msg
  route_static =  (file)        -> config.route_static file
  route        =  (req, res)    -> config.route req, res
  listen       =  (port, host)  -> config.listen port, host

  # Modules
  qs           =  require "querystring"
  url          =  require "url"

As you can see, there is still one noun being passed (env), but it is only used to query the state of the environment.
I formatted the code in the Verbs section in a way that clearly shows which verbs are used and what their inputs are. Anyone who needs to initialize the chat server can thus easily see what it needs to know. You can think of it as a config section.

In my tests I can easily set things up since now I just have to pass in certain functions instead of having to build up fakes.

  @listens_on= {}
  routes = {}

  @server_get= (method, req = { }) ->
    res = @res_stub
    routes["/#{method}"] req, res

  @sut = require("./../server").init
    route_static: (file) ->
    route: (id, callback) -> routes[id]= callback; undefined
    listen: (port, host) => @listens_on= port: port, host: host
    log:  (msg) -> # don't clutter up the test output
    memoryUsage: -> rss: mem_rss_stub
    env: PORT: port_stub

The calls to server.get req, res { ... } were replaced with route req, res { ... } calls and thus no knowledge of there even being a server is spread throughout the code anymore.
The bootstrapper takes on the responsibility of figuring out who does what in order to properly initialize the chat server.

sys = require "sys"
router = require("./lib/node-router")
server = router.getServer()

chat_server = require "./server"

chat_server.init
  # Nouns
  env:            process.env

  # Verbs
  memoryUsage:    process.memoryUsage
  log:            sys.puts
  route_static:   router.staticHandler
  route:          server.get
  listen:         server.listen

I also formatted it in a way that makes it read like a config section.
This is the only place where the knowledge of how things are arranged lives and thus changing things around later becomes easy.

Verbs are Aspect Oriented Programming friendly

When injecting verbs as described, I have much more control over how things get done, which allows me to quickly add/remove aspects concerning certain actions.
Let’s say I want to log what routing calls the chat server registers, but I don’t want to touch my router code.
All I have to do is introduce a more verbose routing function, which logs information and then calls the original one, and then pass that into init. Here is an abbreviated example:

verbose_route = (req, res) ->
  console.log "Routing request: ", req
  server.get req, res

  # Verbs
  route:          verbose_route

Although this is a very simple example, it should give you an idea of how powerful this approach is and how much it simplifies extending your application.

Verbs and C# using MEF

I created a sample application that shows how to implement the same pattern in C# using MEF.

The main worker in the application is the runner:

[Export]
public class Runner
{
    private readonly Action<string> _print;
    private readonly Func<string> _read;
    private readonly Action<int> _shutDown;

    [ImportingConstructor]
    public Runner(
        [Import(Verbs.Print)] Action<string> print,
        [Import(Verbs.Read)] Func<string> read,
        [Import(Verbs.Shutdown)] Action<int> shutDown)
    {
        _print = print;
        _read = read;
        _shutDown = shutDown;
    }

    public void Run()
    {
        PrintStartupInfo();
        InteractWithUser();
    }

    private void InteractWithUser()
    {
        _print("Please enter your name: ");

        var name = _read();

        _print("Hello " + name);
        _print("Please press enter to shut down");

        _read();
        _shutDown(0);
    }

    private void PrintStartupInfo()
    {
        _print("The super verbs application has started.");
    }
}

As you can see, it gets the verbs injected as single parameters instead of via a config. The Import statements tell MEF how to resolve the injected verbs (in case you need to read up on how this works, head on over here).

But where are these verbs actually coming from? Well, the Runner doesn’t know, nor is it supposed to. To tell you the truth, it doesn’t matter. We just need to make sure that someone is exporting them, so that MEF can resolve them.

It so happens, that we have a UserInterface that knows how to read and write:

public class UserInterface
{
    [Export(Verbs.Print)]
    public void Print(string message)
    {
        Console.Write(message);
    }

    [Export(Verbs.Read)]
    public string Read()
    {
        return Console.ReadLine();
    }
}

and the ApplicationManager knows how to shut down the application:

public class ApplicationManager
{
    private readonly Action<string> _print;

    [ImportingConstructor]
    public ApplicationManager([Import(Verbs.Print)] Action<string> print)
    {
        _print = print;
    }

    [Export(Verbs.Shutdown)]
    public void ShutDownApplication(int code)
    {
        _print("Shutting down ....");
    }
}

The Export statements use the same identifiers as the Import statements of the Runner. This allows MEF to hook everything together.
I could simply inline the strings for these identifiers, but want to avoid introducing bugs due to typos.
Therefore I created constant identifiers in a Verbs class.

public static class Verbs
{
    public const string Print = "Verbs.Print";
    public const string Read = "Verbs.Read";
    public const string Shutdown = "Verbs.Shutdown";
}

Finally we need to tell MEF to wire things up. We do this in the Program.

public class Program
{
    private CompositionContainer _container;

    public static void Main(string[] args)
    {
        var p = new Program();
        p.Run();
    }

    public void Run()
    {
        Compose();

        var runner = _container.GetExport<Runner>().Value;
        runner.Run();
    }

    private void Compose()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        _container = new CompositionContainer(catalog);
    }
}

We create a container in the Compose() method in order to make MEF lookup all of the exports in our assembly. Then we resolve the Runner at which point MEF injects all of our exported Verbs.
The full example is available here on the master branch.

Adding another Printer

In order to demonstrate how extensible our little application is, let’s assume we want to add a Debugger that sends a timestamped version of the message to the output window whenever someone prints.
At the same time we want to keep printing to the console.
We can do this without changing a single line of code inside our Runner – I promise!

We need to introduce a method that, when called, will aggregate and then invoke every method that claims to know how to print (via the appropriate Export). We will use MEF’s ImportMany feature to accomplish this.
For simplicity let’s just slap this method onto the ApplicationManager – we can always move it later, since no one will be aware of where it lives.

public class ApplicationManager
{
    private readonly IEnumerable<Action<string>> _printToMany;

    [ImportingConstructor]
    public ApplicationManager(
        [ImportMany(Verbs.CompositePrint, AllowRecomposition = true)]
        IEnumerable<Action<string>> printToMany)
    {
        _printToMany = printToMany;
    }

    [Export(Verbs.Shutdown)]
    public void ShutDownApplication(int code)
    {
        ApplicationPrint("Shutting down ....");
    }

    [Export(Verbs.Print)]
    public void ApplicationPrint(string msg)
    {
        foreach (var print in _printToMany)
        {
            print(msg);
        }
    }
}
It now exports its ApplicationPrint method under the Verbs.Print identifier that the Runner knows about. When invoked, it finds all print methods that were exported under the new Verbs.CompositePrint identifier and invokes them one after the other.
Since it exports itself under the same identifier that the UserInterface previously used to export its Print method, it ends up replacing it.

There are two things left to do:

First we need to update the print method in our UserInterface to export itself as Verbs.CompositePrint (this is an extra verb we add to our Verbs class).

public class UserInterface
{
    [Export(Verbs.CompositePrint)]
    public void Print(string message)
    {
        Console.Write(message);
    }

    [Export(Verbs.Read)]
    public string Read()
    {
        return Console.ReadLine();
    }
}

Secondly we now introduce the Debugger that will export a print method with the same identifier.

public class Debugger
{
    [Export(Verbs.CompositePrint)]
    public void Print(string message)
    {
        Debug.WriteLine(DateTime.Now + " - " + message);
    }
}

As a result whenever the Runner prints a message it will end up calling the ApplicationPrint method which in turn calls the print methods on the UserInterface and the Debugger with the passed message. As promised, the Runner didn’t change and is totally oblivious to the new way that things are done now.
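The same composite-verb idea can be sketched in a few lines of JavaScript, independent of MEF (the names here are hypothetical):

```javascript
// A composite print verb: fan one call out to every registered printer.
var printers = [];
function compositePrint(msg) {
  printers.forEach(function (print) { print(msg); });
}

// Register two printers; the caller of compositePrint knows about neither.
var seen = [];
printers.push(function (msg) { seen.push("console: " + msg); });
printers.push(function (msg) { seen.push("debug: " + msg); });

compositePrint("hello");
console.log(seen); // [ 'console: hello', 'debug: hello' ]
```

Adding a third printer is a registration change only; no caller is touched, which is the extensibility point MEF's ImportMany gives us above.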

This version of the application is available here on the multiprint branch.


How to make node.js, CoffeeScript and Jasmine play nice with Vim

In order to see how much fun it could be to develop a web application BDD style, I decided to re-implement the node-chat sample application.

I started by implementing the chat server. The original runs on node.js, so making my chat server run on node.js as well was an easy decision. Why I used CoffeeScript over plain JavaScript, and Jasmine for testing, will be explained below.

Although I will blog about the resulting implementation and specs separately, the impatient can go here to have a look.


Why CoffeeScript?

Everyone loves nice and quiet, and for a lot of people that extends to their work environment; ergo we don’t like noisy code.

I believe in the idea of cleanly formatted code making it more readable, and that these visual clues are enough for a parser to deduce what I am trying to say. Curly braces make code less readable IMO and are not really necessary.

CoffeeScript takes this to heart.

It is a language that generates Javascript code. The generated code is as readable as Javascript allows, but at least I don’t have to look at it all day.

It borrows ideas from a number of languages and combines the best of each into a super succinct and expressive language. Ruby developers will feel right at home and some things like list comprehensions are reminiscent of Haskell.

Like Haskell and Python, CoffeeScript is whitespace sensitive and thus doesn’t need any extra information about the code structure aka noise.
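As a small illustration, a CoffeeScript one-liner such as square = (x) -> x * x compiles to JavaScript roughly like this (a sketch of the compiler output, not verbatim):

```javascript
// Roughly what the CoffeeScript compiler emits for: square = (x) -> x * x
var square;

square = function(x) {
  return x * x;
};

console.log(square(4)); // 16
```

All the braces, var declaration, and explicit return are inferred from indentation and the -> arrow.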

CoffeeScript and Vim

In order to quickly compile and then run my coffee scripts with node, I looked for nice Vim integration and found it here.

Although the site explains how to use it, I will point out the usage scenarios that I found most important.


Set up Vim to auto-compile coffee files when they are saved via:

  autocmd BufWritePost *.coffee silent CoffeeMake! -b | cwindow

This is super useful as now I can save my file and run it right after via node.

If there are any compilation errors, they will be shown in a separate window, which disappears only after the errors were resolved and the file is saved again.

Leave off the “!” in order to have the cursor jump to the line of the error automatically.


The CoffeeCompile feature is probably most important if, like me, you are new to both CoffeeScript and Javascript.

It allows compiling the entire Vim buffer or the selected text only and shows the resulting Javascript right inside Vim.

I used it a lot when I wasn’t sure if the CoffeeScript would do what I expected and to just get an idea of what it would look like in Javascript.

It is so easy to select a few lines of code, run CoffeeCompile, and watch and learn, as the screenshot shows.


Jasmine is a BDD framework for Javascript, which of course means you can author your tests in CoffeeScript as well. In order to test code running in node.js, you will need jasmine-node. Install it via

npm install -g jasmine-node

For my project I placed my specification files inside the /spec folder and could immediately run them via

jasmine-node spec

Well, almost. The site warns:

your specifications must have ‘spec’ in the filename or jasmine-node won’t find them!

My file was called server_specs.js (after the CoffeeScript compile step) and Jasmine still didn’t find it. Turns out that renaming it to server_spec.js did the trick.

So be aware:

In order for Jasmine to find your specification files they need to end with spec.js.

Why I didn’t use vows

This is a good time to mention that I gave vows a try first because it seemed to be so very much in vo(w)gue.

Its strong point for some, and weak point for others like me, is that it runs all tests in parallel.

This is surely a good idea, but it makes re-using things like stubs very hard, especially if you use a lot of child contexts like I do. Tests start to affect each other in weird ways and it is very hard to keep them separate. As far as I understand, anything that is to be truly isolated needs to be returned by the “topic”, and this becomes a nuisance once you have a lot of things that are affected by setting up a context.

I dabbled with it for a few hours until I decided the hoops I had to jump through weren’t worth it – after all if I wanted to jump through hoops in order to test my code, I could just stick with static languages, right?

I found that I wasn’t the only one who feels that way. When I looked around for alternatives, I found this on the nodeunit site:

While running tests in parallel seems like a good idea for speeding up your test suite, in practice I’ve found it means writing much more complicated tests. Because of node’s module cache, running tests in parallel means mocking and stubbing is pretty much impossible.

I also think that running tests in parallel is more important if you are dealing with long running integration tests. IMO in that case it is better to separate these from the faster running unit tests and run them only once a day or so.

Jasmine and Vim

Although it could be perfectly sufficient to just save your file after you make some change and then switch to the terminal in order to run the tests, it definitely gets in the way of the red – green – refactor workflow.

I prefer to just have to hit one shortcut key in order to save/compile the code and run my tests. This allows me to keep my focus on what I am trying to accomplish.

You can certainly run jasmine-node from inside Vim with a simple command, but with mixed results:

The weird looking numbers are color codes sent to the terminal which the simple Vim terminal interprets in its own ways.

Turns out that jasmine-node, although it sports a --colors option, turns colors on for you even if you don’t specify it. Fortunately there is a way to turn them off explicitly by passing --noColor:

That’s better!

The only thing left to do is to hook saving all files and running the tests up to a shortcut by adding the following to our .vimrc (the first line ensures our leader key is a comma):

  let mapleader=","
  map <leader>m :wa \|! jasmine-node spec --noColor <CR>

In this case hitting ,m will save all my open files and then run my tests.

Clean Test Output

In order to keep my test result output from being cluttered up with log messages and such, I replaced node’s sys module – normally obtained via sys = require "sys" – with a stub that does nothing when sys.puts is called.

Just to give an idea of how simple this is, here is a quick example:

1. When creating the system under test, we inject the dependencies we want to stub – including sys

server.init
  router: @router_stub
  sys: { puts: (msg) ->  } # stop sys.puts from cluttering up the test output
  process: process_stub

2. The system under test then uses the injected dependencies if present or the real ones otherwise

server.init = (fake = { }) ->

  sys = fake.sys or require "sys"
  router = fake.router or require("./lib/node-router")
  server = router.getServer()

  process = fake.process or global.process

More details will be explained in a later post.
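The same fallback pattern translates directly to plain JavaScript. Here is a minimal sketch (the names and the default puts implementation are illustrative, not the actual server code):

```javascript
// init() accepts an optional object of fakes and falls back to the
// real dependency for anything not supplied.
function init(fake) {
    fake = fake || {};
    var sys = fake.sys || { puts: function (msg) { console.log(msg); } };
    var proc = fake.process || global.process;
    return { sys: sys, process: proc };
}

// In a test, inject a silent sys stub so puts() no longer writes
// to the console; here we capture the messages instead.
var captured = [];
var server = init({ sys: { puts: function (msg) { captured.push(msg); } } });
server.sys.puts("hello");
```

Because the fakes are plain objects, no mocking library is needed at all.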



Reading Source Code on iPad with Vim syntax highlighting

The best way to learn new languages/technologies is to read some good sample source code.

Incidentally there is an abundance of source code, but only so little time. I thought it would be nice to read some on my iPad while I’m on the train. Of course it would be even better if I could have it syntax colored.

Fortunately, as I mentioned in my previous post, there is an easy command in Vim to convert any text to html, including the colors:

:TOhtml

does the trick.

So here is what you need to do:

  1.  Get yourself an iPad reader that supports html – I use GoodReader
  2.  Convert the source code you would like to read into html using your Vim editor and save it to a file
  3.  Upload the saved html file to your iPad reader (ideally using something quick and convenient like GoodReader’s WiFi transfer)

1 Comment

How to make your code samples in your blogs look like they look in Vim

Four simple Steps

  1. Open the code sample in your favorite editor (that would be vim)
  2. Execute :TOhtml (this generates a complete html file including a <style> section that contains the needed CSS and a <body> section that contains the HTML version of your code)
  3. Copy the generated CSS into the CSS file of your blog (in wordpress you’d get there via Appearance/Edit CSS)
  4. Include the generated HTML body in your blog

I haven’t tried this, but alternatively you could just copy the entire html (including the <style>) into your blog and thus combine steps 3 and 4.

The disadvantage of that approach is that in this case you’d have to include the CSS part in every blog that has a code sample.

If you don’t want to upgrade

In case you don’t want to pay the $15/year for the Custom CSS upgrade, scrap steps 3 and 4 and instead do the following in order to inline the CSS.

  1. Copy the code generated in step 2 from above to the clipboard
  2. Go to
  3. Select Paste HTML as the source
  4. Paste your code into the text field
  5. In the Options check everything except Don’t remove <style> and <link> elements
  6. Hit Submit and copy the HTML results
  7. Include the copied HTML in your blog


Having fun with JavaScript bookmarklets to determine loaded libraries

Lately I have been looking at ways to make writing JavaScript a little more fun.
Actually the first step was to write CoffeeScript instead and let it compile to JavaScript for me.

I’m also very interested in the libraries that make a lot of things, like cross-browser issues, easier. So a lot of times when I see a site I like, I wonder which libraries it is using under the hood. I wanted a faster way to find out than ‘view page source’.
So this became my first little JavaScript task.

I went to jsFiddle to spike how it could be done (unfortunately I couldn’t use jQuery here, since I would have needed to reference it, which would obviously have obscured the results).

After a few attempts I had my little JavaScript function ready. Since I couldn’t rely on any map function, it uses a lot of for loops and isn’t too pretty, but some people may find it useful:

(function showLoadedLibs() {
    var names = ['jquery', 'mootools', 'backbone', 'prototype', 'yui', 'simpleyui', 'glow', 'dojo', 'modernizr', 'processing', 'ext-core', 'raphael', 'right'];
    var scripts = document.getElementsByTagName("script");
    var loadedScriptNames = "";
    for (var i = 0; i < scripts.length; i++) {
        var src = scripts[i].src;
        for (var n = 0; n < names.length; n++) {
            var lib = new RegExp(names[n] + '[^/]*[.]js').exec(src);
            if (lib) {
                loadedScriptNames += " " + lib + ",";
            }
        }
    }
    alert("\nLoaded libraries: " + loadedScriptNames.substr(0, loadedScriptNames.length - 1) + "\n\nLooked for: " + names);
}());

In order to use it as a bookmarklet, I went to the Bookmarklet Crunchinator to ‘crunch’ the above code.

The result looks like this:

javascript:(function(){(function showLoadedLibs(){var names=['jquery','mootools','backbone','prototype','yui','simpleyui','glow','dojo','modernizr','processing','ext-core','raphael','right'];var scripts=document.getElementsByTagName("script");var loadedScriptNames="";for(var i=0;i<scripts.length;i++){src=scripts[i].src;for(var n=0;n<names.length;n++){var lib=new RegExp(names[n]+'[^/]*[.]js').exec(src);if(lib){loadedScriptNames+=" "+lib+",";}}}alert("\nLoaded libraries: "+loadedScriptNames.substr(0,loadedScriptNames.length-1)+"\n\nLooked for: "+names);}());})();

Finally, I added it to my bookmark bar – the above ‘crunched’ code becomes the URL of the bookmark.

All I have to do now is click this bookmark, and I am informed about any known libraries that are loaded on the page I am currently on.
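The detection hinges on one regular expression per library name: name + '[^/]*[.]js' matches the script’s filename, version suffix and all, because [^/]* cannot cross a path separator. A quick illustration (the URL here is invented):

```javascript
// Match a library name against a script src the way the bookmarklet does.
var src = "https://example.com/js/jquery-1.6.2.min.js"; // made-up URL
var lib = new RegExp('jquery' + '[^/]*[.]js').exec(src);

// exec() returns null on no match, or a match array whose first
// element is the whole matched filename.
console.log(lib && lib[0]); // → "jquery-1.6.2.min.js"
```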

