nodejs tcp socket sends multiple data events

I'm trying to demonstrate the simplest way an HTTP server works by coding one with Node's net (TCP) module. I've done this several times before, but today I'm facing unexpected behavior: the data event from the socket object is fired either once or multiple times, seemingly at random, for the same request, and I'm wondering why and how to fix it properly.

Please note that I know I should process the data as a stream, which I do in my second demonstration. The point is to increase complexity step by step to make the presentation easier to follow.

Here's the server. As you can see, it's straightforward and easy to grasp.

const net = require('net')

const response = `HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Foo: Bar

foobar
`

net.createServer(socket => {
  socket.on('data', buffer => {
    console.log('----- socket data', Date.now())
    console.log(buffer.toString())
    socket.write(response)
    socket.end()
    console.log('-----')
  })

  socket.on('end', () => console.log('----- socket end.'))
  socket.on('close', () => console.log('----- socket close.', '\n'))
}).listen(2000)

To test my server, I just point any web browser at http://localhost:2000 and get the response; but when sending the following payload (from the browser's JavaScript console), the data event is sometimes triggered twice, which results in a bug because the write/end sequence cannot run a second time.

var xhr = new XMLHttpRequest();
xhr.open("POST", "/");
xhr.setRequestHeader("Content-Type", "application/json");
xhr.send(JSON.stringify({ foo: "bar" }));

Here is a snap of the logs from the server, if anything could help :

----- socket data 1479133993862
POST / HTTP/1.1
Host: localhost:2000
Connection: keep-alive
Content-Length: 13
Pragma: no-cache
Cache-Control: no-cache
Origin: http://localhost:2000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36
Content-Type: application/json
Accept: */*
Referer: http://localhost:2000/
Accept-Encoding: gzip, deflate, br
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,id;q=0.2,ms;q=0.2,ko;q=0.2

{"foo":"bar"}
-----
----- socket end.
----- socket close.

----- socket data 1479133994515
POST / HTTP/1.1
Host: localhost:2000
Connection: keep-alive
Content-Length: 13
Pragma: no-cache
Cache-Control: no-cache
Origin: http://localhost:2000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36
Content-Type: application/json
Accept: */*
Referer: http://localhost:2000/
Accept-Encoding: gzip, deflate, br
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,id;q=0.2,ms;q=0.2,ko;q=0.2

{"foo":"bar"}
-----
----- socket end.
----- socket close.

----- socket data 1479133995166
POST / HTTP/1.1
Host: localhost:2000
Connection: keep-alive
Content-Length: 13
Pragma: no-cache
Cache-Control: no-cache
Origin: http://localhost:2000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36
Content-Type: application/json
Accept: */*
Referer: http://localhost:2000/
Accept-Encoding: gzip, deflate, br
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,id;q=0.2,ms;q=0.2,ko;q=0.2


-----
----- socket data 1479133995167
{"foo":"bar"}
events.js:154
      throw er; // Unhandled 'error' event
      ^

Error: write after end
    at writeAfterEnd (_stream_writable.js:167:12)
    at Socket.Writable.write (_stream_writable.js:212:5)
    at Socket.write (net.js:624:40)
    at Socket.<anonymous> (/Users/julien/Temp/foo.js:14:12)
    at emitOne (events.js:90:13)
    at Socket.emit (events.js:182:7)
    at readableAddChunk (_stream_readable.js:153:18)
    at Socket.Readable.push (_stream_readable.js:111:10)
    at TCP.onread (net.js:529:20)

As you can see, the first two requests are fine, but the third one is split into two distinct parts: the headers of the request arrive in one data event and the body in another.

I discussed this with a few developers and we suspect it could be related to the TCP stack of my OS, which is macOS Sierra, if that matters.

I couldn't see any other way to patch it than accumulating the buffers into a variable declared in an outer scope, then using an ugly timer trick that ends up being something like a cancellable setImmediate.

let timer = null
let data = ''

socket.on('data', buffer => {
  data += buffer.toString()

  // wait one more millisecond for possible further chunks before handling
  // the request; process() is the request handler, defined elsewhere
  clearTimeout(timer)
  timer = setTimeout(() => process(socket, data), 1)
})

The question is simple: I know this fix is very wrong in many ways, but I can't see another one without using streams or the http package. Could you enlighten me?

asked Nov 14 '16 by y_nk

1 Answer

That's just how TCP works. TCP is a stream of bytes; there are no packets with boundaries (or even requests) at the application layer. A write call for n bytes on one end of the connection could result in up to n one-byte read calls on the other end. You have to be prepared for each read to yield an arbitrary number of bytes (up to the buffer size given to the read call; however, since Node.js pushes the data to you, you can't influence that). If you need packets at the application level, you have to handle that yourself, e.g. by writing length-prefixed packets to the stream.
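
A minimal sketch of what such length-prefixed framing could look like in Node. Everything here (the 4-byte big-endian length header, the port number, the variable names) is an arbitrary illustration, not something taken from the question:

const net = require('net')

net.createServer(socket => {
  // accumulate raw bytes across however many 'data' events TCP delivers
  let pending = Buffer.alloc(0)

  socket.on('data', chunk => {
    pending = Buffer.concat([pending, chunk])

    // one 'data' event may hold a partial frame, exactly one frame, or several
    while (pending.length >= 4) {
      const length = pending.readUInt32BE(0)
      if (pending.length < 4 + length) break // wait for the rest of the frame
      const message = pending.slice(4, 4 + length)
      pending = pending.slice(4 + length)
      console.log('complete message:', message.toString())
    }
  })
}).listen(2001)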

However, HTTP does not need its own notion of packets, since the HTTP protocol already defines where the headers and the body end.
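
As an illustration rather than a real HTTP parser, the server from the question could be reworked to accumulate chunks until the blank line that terminates the headers has arrived, and then until Content-Length bytes of body have been received, and only reply at that point. The simplified response string below is my own placeholder, not the one from the question:

const net = require('net')

const response = 'HTTP/1.1 200 OK\r\nContent-Length: 7\r\n\r\nfoobar\n'

net.createServer(socket => {
  let data = ''

  socket.on('data', buffer => {
    data += buffer.toString()

    // wait until the blank line ending the headers has arrived
    const headerEnd = data.indexOf('\r\n\r\n')
    if (headerEnd === -1) return

    // then wait until Content-Length bytes of body have arrived
    // (counting characters here, which is fine for this ASCII demo)
    const match = data.match(/Content-Length:\s*(\d+)/i)
    const bodyLength = match ? parseInt(match[1], 10) : 0
    if (data.length - (headerEnd + 4) < bodyLength) return

    console.log('----- complete request\n' + data)
    socket.write(response)
    socket.end()
  })
}).listen(2000)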

answered Nov 09 '22 by Matthias247