
node.js response.write(data) taking long time when size of data is small

Tags:

node.js

I've noticed a strange behavior of the performance of the following code in node.js. When the size of content is 1.4KB, the response time of the request is roughly 16ms. However, when the size of content is only 988 bytes, the response time of the request is strangely much longer, roughly 200ms:

response.writeHead(200, {"Content-Type": "application/json"});
response.write(JSON.stringify(content, null, 0));
response.end();

This does not seem intuitive. Looking at Firebug's Net tab, the entire increase comes from the "Receiving" phase ("Waiting" is 16ms in both cases).

I've made the following change to fix it so that both cases have 16ms response time:

response.writeHead(200, {"Content-Type": "application/json"});
response.end(JSON.stringify(content, null, 0));

I've looked through the node.js docs but so far haven't found anything related. My guess is that this is related to buffering, but could node.js preempt between write() and end()?

Update:

This was tested on v0.10.1 on Linux.

I tried to peek into the source and have identified the difference between the two paths. The first version makes two Socket.write calls:

writeHead(...)
write(chunk)
  chunk = Buffer.byteLength(chunk).toString(16) + CRLF + chunk + CRLF;
  ret = this._send(chunk);
    this._writeRaw(chunk);
      this.connection.write(chunk);
end()
  ret = this._send('0\r\n' + this._trailer + '\r\n'); // Last chunk.
    this._writeRaw(chunk);
      this.connection.write(chunk);

The second (good) version makes just one Socket.write call:

writeHead(...)
end(chunk)
  var l = Buffer.byteLength(chunk).toString(16);
  ret = this.connection.write(this._header + l + CRLF +
                              chunk + '\r\n0\r\n' +
                              this._trailer + '\r\n', encoding);

I'm still not sure why the first version performs poorly with smaller response sizes.
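
The framing traced above can be sketched as plain string assembly (trailers omitted for simplicity): each write() emits `<hex length>\r\n<data>\r\n`, and end() appends the terminating `0\r\n\r\n`:

```javascript
// Sketch of the chunked framing from the traced code paths above.
const CRLF = '\r\n';

function frameChunk(chunk) {
  // Each write() becomes: <hex byte length>\r\n<data>\r\n
  return Buffer.byteLength(chunk).toString(16) + CRLF + chunk + CRLF;
}

const body = JSON.stringify({ ok: true });

// First version: two payloads handed to the socket separately.
const firstWrite = frameChunk(body);
const lastWrite = '0' + CRLF + CRLF;

// Second version: one concatenated payload, hence one socket write.
const single = frameChunk(body) + lastWrite;

console.log(firstWrite + lastWrite === single); // true
```

The bytes on the wire are identical either way; only the number of socket writes differs.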

asked May 24 '13 by bryantsai


1 Answer

Short answer:

you can explicitly set the Content-Length header. This reduces the response time from around 200ms to around 20ms.

var body = JSON.stringify(content, null, 0);
response.writeHead(200, {
    "Content-Type": "application/json",
    "Content-Length": Buffer.byteLength(body) // byte length, not body.length
});
response.write(body);
response.end();

Facts:

After a few experiments, I found that if the content is small enough to fit in a single MTU (in my case, less than 1310 bytes), the response time is around 200ms. For any content larger than that, the response time is roughly 20ms.

Then I used Wireshark to capture packets on the server side. Below is a typical result:

For small content:

  • [0000ms] response.write(content)
  • [0200ms] received the ACK packet from the client
  • [0201ms] response.end()

For larger content:

  • [0000ms] response.write(content) // the first MTU is sent
  • [0001ms] the second MTU is sent
  • [0070ms] received the ACK packet from the client
  • [0071ms] response.end()

Possible Explanation:

If the Content-Length header is not set, the data is transferred in "chunked" mode. In chunked mode, neither the server nor the client knows the exact length of the data, so the client waits a while (200ms) to see whether any further packets follow.

However, this explanation raises another question: why, in the larger-content case, did the client not wait the full 200ms (per the capture above, it waited only around 70ms)?

answered Oct 15 '22 by Calvin Zhang