Node is much more than JavaScript on the Server.

That's Node's gimmick though, right? You can now write both your server and your client in JavaScript.

No longer will you toil away, learning and bouncing between multiple environments, painfully switching contexts and sweating blood and tears whenever you are forced to move from the client to the server or vice versa.

Hopefully you picked up on my sarcasm.

The fact that Node uses JavaScript is but a tertiary point (a benefit to some, a downside to others). It most definitely is a contributor to its success, but so much so that the primary point and purpose of Node is often missed. If that point is already obvious and apparent to you, you can probably stop reading.

Saying that you like Node because you can write JavaScript on the server side is like saying you like red Lotus Elises because they are painted red. Not everyone likes red, and even if everyone did, the paint is hardly the most important, or most objective, measure of what makes the car "good".

Node is fundamentally different from the dynamic language platform status quo. It was born from an obsession with efficiency. A dissatisfaction with the classic 1-to-1 relationship between threads and connections drove its ability to handle a significantly higher level of concurrency for web applications written in dynamic languages.

Blocking and the I/O Problem

The classic bottleneck of most applications, web or otherwise, is I/O.

The application is regularly transferring information to and from a database, the file system, the network, or a combination of these. Generally, transferring this information is far slower than the speed at which the CPU can operate on it in memory. Saturating the CPU with productive work is likely not your problem.

The second problem is the way we have historically handled the first. When we reason about computers, we think sequentially. That's how computers think, right? You provide a set of instructions and they run in sequence from top to bottom. Your first program generally looks something like this:

// print
console.log("Enter your name: ");

// block execution to retrieve data
// keeping the process tied up awaiting the
// data's return
var name = console.read();

// manipulate data and report
console.log("Hello " + name + "!");

But what if you could do this in a non-blocking way?

// print
console.log("Enter your name: ");

// make a request for the data, but allow the process to
// be free to do other things, providing a bit of code
// to run when the data is retrieved.
console.read(

  // When we are notified we have a
  // response, manipulate data and report
  function(err, name) {
    console.log("Hello " + name + "!");
  }
);
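
The console.read used above is an imaginary API, kept simple for illustration. A minimal sketch of the same interaction using Node's built-in readline module (callback-based, like most of core) might look like this:

// ask a question and register a callback to run once the answer arrives
var readline = require("readline");

var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question("Enter your name: ", function (name) {
  // runs only once the user has responded; the process was free
  // to do other work in the meantime
  console.log("Hello " + name + "!");
  rl.close();
});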

Making your process block and wait around for millions of clock cycles is a waste of time and resources. To handle additional requests, you must rely on a process manager of some kind to keep multiple processes around. As the web moves toward being a more interactive medium, we are continually reducing the size of each request while multiplying the number of requests generated by every single client. It becomes obvious why the 1-to-1 relationship between a request and a process is hard to scale.
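
To make that concrete, here is a minimal sketch of a single-process Node HTTP server. The slow I/O is simulated with setTimeout standing in for a database or network call, and the port and delay are illustrative placeholders:

var http = require("http");

http.createServer(function (req, res) {
  // simulate a slow I/O operation without blocking the process;
  // while this timer waits, the same process keeps accepting
  // and serving other requests
  setTimeout(function () {
    res.end("Done\n");
  }, 1000);
}).listen(8080);

One process, no pool of workers sitting blocked while they wait on I/O.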

Historically, we have taught ourselves to reason about our programs more easily in the first style, but I would argue that the second style is equally intuitive, if not more so, for a human being.

Thinking like a Human

As a human being you understand the asynchronous nature of information flow. You intuitively know that it's just not efficient or practical to stop everything you are doing every time you need to retrieve or communicate information.

As a developer, every time you ask a manager or stakeholder of a project for clarification, do you simply sit and wait quietly for a reply? No, of course you don't. You move on to the next thing that you can make progress with, juggling a handful of important tasks. Once the information comes back, you continue with the related task.

For a more rudimentary anecdote: you are driving in your car with a friend of yours and you need to know the weather report for tomorrow. Would you slam on the brakes and stop traffic to turn to your friend and ask what the weather will be like tomorrow? Waiting, still and silent (except for the honking behind you), until he looks it up for you? Of course you wouldn't; you would continue driving and having other conversations while he retrieves the information for you.

In this way, Node's non-blocking/asynchronous model need not be as foreign as it might seem at first glance.

Why not threads?

Threads are hard, heavy, and generally overkill.

They do solve the rudimentary problem of parallelism, but they introduce a whole new set of problems around thread-safety and mutability. They are generally complex and difficult for dynamic languages to implement, if implemented at all.

In Node, everything but your code runs in parallel.

Everything that can be handled in parallel for you, is. But because your code never runs in parallel and always runs in a single thread, issues of thread-safety and mutability fall away. This provides many of the benefits of parallelism for the common use case without burdening the developer with anywhere near the additional complexity.
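
As a sketch (the file names a.txt and b.txt are placeholders), the two reads below are carried out in parallel by Node's I/O layer, while the callbacks that touch the shared counter run one at a time on the single JavaScript thread, so no locking is needed:

var fs = require("fs");

var remaining = 2;

function done(err, data) {
  // callbacks never run at the same time as each other,
  // so this shared counter needs no lock
  remaining--;
  if (remaining === 0) {
    console.log("both files read");
  }
}

// both reads are dispatched immediately and proceed in parallel
fs.readFile("a.txt", done);
fs.readFile("b.txt", done);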

Again, I would argue that the way Node handles parallelism mirrors how we as humans process thoughts: not in parallel, but with highly efficient multitasking.

Language X has a non-blocking server too...

Non-blocking I/O is not a new thing, but what makes Node different is that the platform is built entirely around this principle. Non-blocking is philosophically adopted as the rule, not the exception. So when you go to grab that Node MySQL library, you can expect it to be fully compatible with your non-blocking application. Generally, the same cannot be said of event-driven frameworks written in other dynamic languages.
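
For example, here is roughly what that looks like with the mysql driver from npm (the connection details are placeholders, and the exact API may differ between versions):

var mysql = require("mysql");

var connection = mysql.createConnection({
  host: "localhost",
  user: "me",            // placeholder credentials
  password: "secret",
  database: "mydb"
});

// the query is non-blocking: the process is free to serve other
// work until the callback fires with the rows
connection.query("SELECT 1 + 1 AS two", function (err, rows) {
  if (err) throw err;
  console.log(rows[0].two);
  connection.end();
});

The driver speaks the same callback dialect as the rest of the platform, which is exactly the point.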