Playing with Node.js

By Nacho Martín on 17 March 2011

I have been playing around with node.js lately. The pet project I did to learn it (available on github) is quite simple, but useful to me: I have a growing collection of hundreds of images on my hard drive and I need a system to bookmark new ones without maintaining a nightmare of folders and files with strange names.

This app allows me, when I am browsing a website, to right-click on any image and save it to the app using a Firefox extension. Images are saved with a title and some tags, and then on the app's website I can see the collection, optionally filtered by tags. The resulting look is quite similar to Tumblr. At the moment it doesn't support different users/collections. As simple as that.

I used redis as database, express as framework, and jade as templating engine. Well, enough preamble. Let's see some code. I am not going to comment every single line; you can access the repository for that. I will only cover some parts that I think can give you a feel for node.js, express and redis. Also, I want to write down my impressions on different topics I have faced in this project, so that I think a bit more about them and hopefully help other coders somewhat.

Retrieving a file and downloading it

  //retrieve it
  var theurl = http.createClient(80, host);
  var requestUrl = urlfile;
  console.log("Downloading file: " + filename);
  var request = theurl.request('GET', requestUrl, {"host": host});

  var dlprogress = 0;

  request.addListener('response', function (response) {
    var downloadfile = fs.createWriteStream("public/images/"+filename, {'flags': 'a'});
    console.log("File size " + filename + ": " + response.headers['content-length'] + " bytes.");
    response.addListener('data', function (chunk) {
      dlprogress += chunk.length;
      downloadfile.write(chunk, 'binary');
    });
    //download finished
    response.addListener('end', function () {
      downloadfile.end();
      ///... store images, make thumbnails... do more stuff
      console.log("Finished downloading " + filename);
    });
  });
  request.end();
This was quite cool. Being mainly a PHP developer, I like how this works. It is event-driven, so you can have a callback fire while the download is in progress and write code to be executed when it finishes. I really like this, and I will try to keep it in mind if I have to write, for example, an app that has to process videos or other big files: you can do the heavy work without making the user wait until you are done, which would be terrible. You can do things really quickly using this.

The nesting "problem"

Since node.js is event-driven, you very quickly get used to nesting, and sometimes over-nesting, code. For instance, when you want to retrieve an image object from the database, you do something like:

  client.get( 'image:'+id, function( err, data ) {
    //check if there is no data
    var obj = JSON.parse( data.toString() );
    res.render('single.jade', {
      locals: {
        image: obj,
        tags: allTags
      }
    });
  });

So, the function 'get' accepts as an argument the callback that will be called with the result of that 'get'. Meanwhile, the code below the 'get' keeps executing. Therefore, instead of writing sequentially what you want to do with the data, you have to place that code in the callback. This pattern is used constantly, and you soon have callbacks nested inside other callbacks, so you need to restructure the code to keep it manageable. I am still a novice at this art ;)

But sometimes nesting callbacks is not enough. For instance: what happens if you want to retrieve a list of images in a for loop and send the result to the view? You cannot render inside the callback of each get, because at that point you don't have the full list: you must wait until the last object is retrieved before showing the list. It took me a while to figure out how to overcome this.

        var count = data.length;
        for (var i = 0; i < data.length; i++) {
          var id = data[i].toString();
          client.get( 'image:'+id, function( err, dataIm ) {
            var obj = JSON.parse( dataIm.toString() );
            imagArr.push( obj );
            count--;
            if (count == 0){
              res.render('index.jade', {
                locals: {
                  images: imagArr.reverse(),
                  tags: allTags,
                  pagination: pagination
                }
              });
            }
          });
        }

Here we keep a 'count' variable that lets us know when the last object has been retrieved. When that happens, we render the view with the result. It is how the language is designed. It is quite possible that there is a more elegant way to solve this, but I haven't found it and this works. I am not sure I feel comfortable with it. Please use the comments to suggest a better solution.
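For what it's worth, the counting can be pulled out into a small reusable helper. This is a sketch under the assumption of node-style (err, data) callbacks; `getAll` and `fakeGet` are names I made up, not code from the repo. Storing each result at its original index (instead of push) also keeps the output order deterministic even if callbacks fire out of order.

```javascript
// Hypothetical helper (names are mine, not the repo's): run one async get
// per id, collect the results in order, and call done once after the last one.
function getAll(ids, get, done) {
  var results = [];
  var remaining = ids.length;
  if (remaining === 0) return done(null, results);
  ids.forEach(function (id, i) {
    get(id, function (err, data) {
      if (err) return done(err);
      results[i] = data;      // index, not push: keeps the original order
      remaining--;
      if (remaining === 0) done(null, results);
    });
  });
}

// Stand-in for client.get, answering on the next tick.
function fakeGet(id, cb) {
  process.nextTick(function () { cb(null, 'image:' + id); });
}

getAll([1, 2, 3], fakeGet, function (err, images) {
  console.log(images); // [ 'image:1', 'image:2', 'image:3' ]
});
```

With a real redis client you would pass a wrapper around `client.get` instead of `fakeGet`, and render the view inside the final callback.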

About Redis

I had never used Redis before, and I am very glad I have tried it. It is not a Swiss Army knife, but it is very good at what it was designed for: a key-value data store. I have used Mongo before to store recipes on trecetas.com, and it is not so different from using SQL: you store documents and you can write complex queries against your database to retrieve them. Redis, on the other hand, makes you think in a very different way. In this app I used redis to keep key-value pairs for the images, the key being 'image:id' and the value a JSON object with the filename, the title and the description. I also keep a list of all the images' ids in the key 'images', a similar list for each tag under 'tag:id', and a 'tags' list with all the tags.

So Redis is basically, for me, a pool where I can store keys and values. But most importantly, if you decide to use Redis in a project, you are forced to think differently from how you would in SQL or Mongo. You are not expected to just store the tags inside the JSON value of an 'image:id' key, because then you could not query by them. Instead, you build a new list for each tag. If I wanted multi-user support, I would have to maintain a new list with the images for each user.
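To make the key layout concrete, here is a sketch of the commands the app would issue when saving an image. The helper name `saveImageCommands` and the exact fields are my own illustration, not the repo's actual code, and in practice you would avoid pushing duplicate tags into the 'tags' list.

```javascript
// Hypothetical illustration of the key schema described above (the helper
// name and fields are assumptions, not the repo's actual code).
function saveImageCommands(id, image) {
  var cmds = [
    ['SET',   'image:' + id, JSON.stringify(image)], // the JSON value
    ['RPUSH', 'images', String(id)]                  // global list of ids
  ];
  image.tags.forEach(function (tag) {
    cmds.push(['RPUSH', 'tag:' + tag, String(id)]);  // one list per tag
    cmds.push(['RPUSH', 'tags', tag]);               // list of all tags
  });
  return cmds;
}

var cmds = saveImageCommands(7, {
  filename: 'cat.jpg', title: 'A cat', description: 'Sleeping', tags: ['cats']
});
console.log(cmds.length); // 4
```

With the node_redis client each of these maps to a call like `client.set(key, value)` or `client.rpush(key, value)`.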

So, as I said, it is not a Swiss Army knife, but it is good to have a tool like this at hand in case you need lightning-fast key-value storage.


About Express

I used the express framework for this project. If you are, like me, used to complex frameworks like Symfony, Rails or Django, and you want to try node.js, you will find that express is not the same kind of thing (node.js is a much younger platform!). It is much simpler, and you will have to extend it to build something more complex than this pet project of mine. That said, the things the framework does, it does well. You get a simple but nice routing system based on regular expressions, logging, support for several templating engines, and you can use things like 'less' with no trouble.
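As an aside, the regex-based routing idea can be sketched in a few lines. This is my simplified illustration of how ':param'-style patterns compile to regular expressions, not express's actual implementation:

```javascript
// Simplified sketch of ':param'-style routing compiled to a regexp
// (the idea behind express routes, not its actual implementation).
function compileRoute(pattern) {
  var keys = [];
  var source = pattern.replace(/:(\w+)/g, function (_, key) {
    keys.push(key);
    return '([^/]+)';
  });
  return { regexp: new RegExp('^' + source + '$'), keys: keys };
}

function matchRoute(route, path) {
  var m = route.regexp.exec(path);
  if (!m) return null;
  var params = {};
  route.keys.forEach(function (key, i) { params[key] = m[i + 1]; });
  return params;
}

var route = compileRoute('/tag/:tag');
console.log(matchRoute(route, '/tag/cats')); // { tag: 'cats' }
console.log(matchRoute(route, '/about'));    // null
```

In express itself you just write something like `app.get('/tag/:tag', ...)` and read the value from `req.params.tag`.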

Being as lightweight as it is, it leaves you with a range of choices for your specific needs: a way to keep your server running after a crash, ORMs, templating engines, and so on. If you come, like me, from Symfony, where almost everything you need is chosen for you, it is quite refreshing to play with different small modules and combine them.

To sum up

I am by no means a guru, but if you want the opinion of some guy who is not looking for perfect beauty in programming nor for the perfect tool, here it goes: when I was programming in node.js I felt refreshed. It is very nice to work in a young environment, and the community seems to be working on a lot of nifty tools that give you ideas even if you go back to your favourite framework next week. It took me a while to figure out how to deal with asynchronous programming, and I feel it can make some problems harder to solve. That means it would be difficult to convince my work colleagues to adopt node, but on the other hand it has some clear advantages (speed is one of them) that make it very worthwhile to play with.

It also motivated me to read the book and watch the videos about Javascript by Douglas Crockford, which are really (really!) worthwhile.

Written by @nacmartin
