Why is this NodeJS 2x faster than native C?

For the sake of a presentation at work, I wanted to compare the performance of NodeJS to C. Here is what I wrote:

Node.js (for.js):

var d = 0.0,
    start = new Date().getTime();

for (var i = 0; i < 100000000; i++)
{
  d += i >> 1;
}

var end = new Date().getTime();

console.log(d);
console.log(end - start);

C (for.c):

#include <stdio.h>
#include <time.h>

int main () {
  clock_t start = clock();

  long d = 0;

  for (long i = 0; i < 100000000; i++)
  {
    d += i >> 1;    
  }

  clock_t end = clock();
  clock_t elapsed = (end - start) / (CLOCKS_PER_SEC / 1000);  /* elapsed time in milliseconds */

  printf("%ld\n", d);
  printf("%lu\n", elapsed);
}

Using GCC I compiled my for.c and ran it:

gcc for.c
./a.out

Results:

2499999950000000
198

Then I tried it in NodeJS:

node for.js

Results:

2499999950000000
116

After running both programs numerous times, I found this held true every time. If I switched for.c to use a double instead of a long in the loop (sketched below), the time C took was even longer!
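For reference, the double variant looked roughly like this (the accumulator switches from long to double; everything else is the same):

#include <stdio.h>
#include <time.h>

int main () {
  clock_t start = clock();

  double d = 0.0;  /* accumulator as double instead of long */

  for (long i = 0; i < 100000000; i++)
  {
    d += i >> 1;
  }

  clock_t end = clock();
  clock_t elapsed = (end - start) / (CLOCKS_PER_SEC / 1000);  /* elapsed time in milliseconds */

  printf("%f\n", d);
  printf("%ld\n", (long) elapsed);
}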

Not trying to start a flame war, but why is Node.js (116 ms) so much faster than native C (198 ms) at performing the same operation? Is Node.js applying an optimization that GCC does not do out of the box?
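In case it helps anyone dig into this, I believe the assembly GCC generates can be dumped for comparison like this (I have not analyzed it myself yet; for-O2.s is just an arbitrary output name):

gcc -S for.c
gcc -O2 -S -o for-O2.s for.c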

EDIT:

Per a suggestion in the comments, I ran gcc -Wall -O2 for.c. The result improved to C taking 29 ms. This raises the question: how is it that GCC's default settings are not as optimized as a JavaScript compiler's? Also, what exactly are -Wall and -O2 doing? I'm really curious about the details of what is going on here.
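For completeness, the exact commands for the optimized run were:

gcc -Wall -O2 for.c
./a.out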