I've noticed some strange performance behavior in the following Node.js code. When the size of content is 1.4 KB, the response time of the request is roughly 16 ms. However, when content is only 988 bytes, the response time is strangely much longer, roughly 200 ms:
response.writeHead(200, {"Content-Type": "application/json"});
response.write(JSON.stringify(content, null, 0));
response.end();
This does not seem intuitive. Looking at Firebug's Net tab, the entire difference comes from the "Receiving" phase ("Waiting" is 16 ms in both cases).
I've made the following change, which fixes it so that both cases have a 16 ms response time:
response.writeHead(200, {"Content-Type": "application/json"});
response.end(JSON.stringify(content, null, 0));
I've looked through the Node.js docs but so far haven't found related info. My guess is that this is related to buffering, but could Node.js preempt between write() and end()?
Update:
This was tested on v0.10.1 on Linux.
I tried to peek into the source and have identified the difference between the two paths. The first version makes two Socket.write calls:
writeHead(...)
write(chunk)
    chunk = Buffer.byteLength(chunk).toString(16) + CRLF + chunk + CRLF;
    ret = this._send(chunk);
        this._writeRaw(chunk);
            this.connection.write(chunk);
end()
    ret = this._send('0\r\n' + this._trailer + '\r\n'); // Last chunk.
        this._writeRaw(chunk);
            this.connection.write(chunk);
The second, good version makes just one Socket.write call:
writeHead(...)
end(chunk)
    var l = Buffer.byteLength(chunk).toString(16);
    ret = this.connection.write(this._header + l + CRLF +
                                chunk + '\r\n0\r\n' +
                                this._trailer + '\r\n', encoding);
Still not sure what makes the first version perform poorly with smaller response sizes.
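The framing the two paths apply can be sketched in isolation. This is a simplified reimplementation for illustration only; frameChunk and lastChunk are my own names, not Node internals, though the string building mirrors the source lines quoted above:

```javascript
// Sketch of the chunked-transfer framing applied by the first path.
// frameChunk/lastChunk are illustrative names, not Node internals.
var CRLF = '\r\n';

function frameChunk(chunk) {
  // hex byte length, CRLF, payload, CRLF -- as in write(chunk) above
  return Buffer.byteLength(chunk).toString(16) + CRLF + chunk + CRLF;
}

function lastChunk(trailer) {
  // '0\r\n' + trailer + '\r\n' -- the terminating chunk sent by end()
  return '0' + CRLF + (trailer || '') + CRLF;
}

// The first version issues two separate connection.write() calls:
//   connection.write(frameChunk(body)); // data chunk
//   connection.write(lastChunk(''));    // terminator
console.log(JSON.stringify(frameChunk('hello'))); // "5\r\nhello\r\n"
console.log(JSON.stringify(lastChunk('')));       // "0\r\n\r\n"
```

Because the terminator goes out in a second write, it can end up in a second TCP packet, whereas the one-write path sends header, data chunk, and terminator together.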
Short answer:
You can explicitly set the Content-Length header. It will reduce the response time from around 200 ms to 20 ms.
var body = JSON.stringify(content, null, 0);
response.writeHead(200, {
    "Content-Type": "application/json",
    "Content-Length": Buffer.byteLength(body) // byte length, not character count
});
response.write(body);
response.end();
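One detail worth noting when setting the header: Content-Length is a byte count, so Buffer.byteLength(body) is the safe choice rather than body.length, which counts JavaScript characters. The two differ for non-ASCII payloads:

```javascript
// Content-Length must be the byte length of the body, not the
// number of JavaScript characters; they diverge for non-ASCII text.
var ascii = JSON.stringify({ msg: 'hello' });
var utf8  = JSON.stringify({ msg: 'h\u00e9llo' }); // 'é' is 2 bytes in UTF-8

console.log(ascii.length, Buffer.byteLength(ascii)); // equal for ASCII
console.log(utf8.length, Buffer.byteLength(utf8));   // byte count is larger
```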
Facts:
After a few experiments, I found that if content is small enough for a single MTU to carry (in my case, less than 1310 bytes), the response time is around 200 ms. However, for any content larger than that, the response time is roughly 20 ms.
Then I used Wireshark to capture the server side's network packets. Below is a typical result (the capture screenshots are omitted here):
For small content:
response.write(content)
response.end()
For larger content:
response.write(content) // The first MTU is sent
response.end()
Possible Explanation:
If the Content-Length header is not set, the data is transferred in "chunked" mode. In chunked mode, neither the server nor the client knows the exact length of the data, so the client waits a while (200 ms) to see if there are any following packets.
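The difference between the two modes can be illustrated by building the raw responses by hand. This is a simplified sketch (real responses carry more headers), but it shows why a Content-Length client knows exactly when to stop reading while a chunked client must wait for the terminating "0\r\n\r\n":

```javascript
var body = '{"ok":true}';

// With Content-Length, the client knows exactly where the body ends:
var fixed =
  'HTTP/1.1 200 OK\r\n' +
  'Content-Type: application/json\r\n' +
  'Content-Length: ' + Buffer.byteLength(body) + '\r\n' +
  '\r\n' + body;

// In chunked mode, the end is only signalled by the zero-length chunk,
// so the client must keep reading until it sees "0\r\n\r\n":
var chunked =
  'HTTP/1.1 200 OK\r\n' +
  'Content-Type: application/json\r\n' +
  'Transfer-Encoding: chunked\r\n' +
  '\r\n' +
  Buffer.byteLength(body).toString(16) + '\r\n' + body + '\r\n' +
  '0\r\n\r\n';

console.log(fixed.length, chunked.length);
```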
However, this explanation raises another question: why, in the larger-content case, did the client not wait for 200 ms (instead, it waited only around 50 ms)?