Since everything should be https, I enabled it, and noticed how much slower https is compared to http.
I have an Ubuntu/YAWS server in Dallas. I start YAWS using "yaws --daemon --nodebug".
If I do
time curl -i https://share.spreadsheetconverter.com/echo/
and
time curl -i http://share.spreadsheetconverter.com/echo/
from the server itself, https takes about 100ms and http 20ms, i.e. the difference is 80ms.
When I try from Sweden, https takes 1400ms and http 350ms. These figures seem plausible, given the latency across the Atlantic.
However, now to the strange thing.
I also have a Windows/IIS server in Dallas.
If I compare a simple http GET request on both servers, the https penalty is much bigger for the YAWS server than for IIS. (I have also tested Tomcat, and it behaves similarly to IIS.)
It also seems to be latency-dependent: the farther you are from the server, the bigger the difference between IIS and YAWS.
When I run the same tests against the IIS server in Dallas from Sweden, https takes 1000ms and http is the same as for YAWS, i.e. IIS is about 400ms faster at https than YAWS. It is almost as if YAWS makes an extra network round trip.
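A back-of-the-envelope model makes the "extra round trip" suspicion plausible. The numbers below (RTT, round-trip counts) are illustrative assumptions, not measurements from the question:

```python
# Back-of-the-envelope model: each handshake round trip costs one full RTT,
# so one extra round trip adds roughly the transatlantic RTT to the total.
def handshake_time_ms(rtt_ms: float, round_trips: int, server_ms: float = 0.0) -> float:
    """Estimate total time as round_trips full RTTs plus server-side work."""
    return rtt_ms * round_trips + server_ms

RTT_SWEDEN_DALLAS = 150.0  # assumed transatlantic RTT in ms, for illustration

# TCP connect (1 RTT) + TLS handshake (2 RTTs) + HTTP request (1 RTT)
iis_like = handshake_time_ms(RTT_SWEDEN_DALLAS, 4)
# Same, but the server spends one extra round trip during the handshake
yaws_like = handshake_time_ms(RTT_SWEDEN_DALLAS, 5)

print(iis_like, yaws_like, yaws_like - iis_like)  # → 600.0 750.0 150.0
```

Under this model the gap between the two servers is exactly one RTT, which also explains why the gap grows with distance and nearly vanishes inside the data center.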
I have also been experimenting with
http://tools.pingdom.com/fpt/
and extracted the SSL time it reports. Notice that the SSL time increases faster with distance for YAWS:
          | YAWS  | IIS
Dallas    | 79ms  | 75ms
New York  | 212ms | 87ms
Amsterdam | 503ms | 315ms
Ok, what should I do?
Update 2015-08-20
I updated to yaws 2.0, and yes, the performance difference is still there.
By using
curl -v --trace-time --trace-ascii echo.log https://share.spreadsheetconverter.com/echo/
and comparing it with
curl -v --trace-time --trace-ascii server1.log https://www.spreadsheetserver.com/server1/
Comparing all the rows, I see that we lose about 300ms at one single row.
This is how it looks when we talk to Yaws 2.0
17:37:54.606668 == Info: TLSv1.2, TLS handshake, Finished (20):
17:37:54.606692 => Send SSL data, 16 bytes (0x10)
0000: ......Jb.9...#.^
17:37:54.758726 == Info: TLSv1.2, TLS change cipher, Client hello (1):
17:37:54.758761 <= Recv SSL data, 1 bytes (0x1)
0000: .
17:37:55.107695 == Info: TLSv1.2, TLS handshake, Finished (20):
17:37:55.107726 <= Recv SSL data, 16 bytes (0x10)
0000: ..........Y.xV.!
17:37:55.107784 == Info: SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA
and this when I talk to IIS
17:40:25.247308 == Info: TLSv1.0, TLS handshake, Finished (20):
17:40:25.247329 => Send SSL data, 16 bytes (0x10)
0000: ........f4..qh:(
17:40:25.376893 == Info: TLSv1.0, TLS change cipher, Client hello (1):
17:40:25.376925 <= Recv SSL data, 1 bytes (0x1)
0000: .
17:40:25.377081 == Info: TLSv1.0, TLS handshake, Finished (20):
17:40:25.377103 <= Recv SSL data, 16 bytes (0x10)
0000: ....C..'.A,..'R.
17:40:25.377142 == Info: SSL connection using TLSv1.0 / AES128-SHA
For both Yaws and IIS, the first "Send SSL data" takes about 150ms.
For IIS, the two "Recv SSL data" lines follow immediately, with no delay.
In the Yaws case, however, we have to wait about 350ms for the first "Recv SSL data"; the next one then arrives immediately.
It is as if something is asynchronous in IIS but synchronous in Yaws. In IIS, the data to be received is combined with the ack of the Send, but in Yaws it takes two separate round trips.
All these round trips have to cross the Atlantic Ocean. If I instead run the test from within the same data center, the differences are much smaller.
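To see where the time goes without reading a full trace, curl's --write-out timers can split a request into phases; time_appconnect marks the moment the TLS handshake finished. A small sketch (the example URL is the one from the question):

```shell
#!/bin/sh
# Print per-phase timings for a single request.
# time_connect    = TCP connect done
# time_appconnect = TLS handshake done (0 for plain http/file URLs)
timing() {
  curl -s -o /dev/null -w \
'DNS:   %{time_namelookup}s
TCP:   %{time_connect}s
TLS:   %{time_appconnect}s
Total: %{time_total}s
' "$1"
}

# e.g.: timing https://share.spreadsheetconverter.com/echo/
```

If TLS minus TCP is roughly two RTTs on one server and three on the other, that confirms the extra-round-trip theory directly.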
Nginx or HAProxy can solve this problem: terminate the HTTPS traffic on a proxy node, and pass plain HTTP traffic on to the Erlang node. Moreover, keeping the Erlang node directly exposed to the internet without a proxy is not good practice.
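A minimal nginx TLS-termination sketch of that setup (the certificate paths and the backend port are placeholders, not taken from the question):

```nginx
# Terminate TLS here; Yaws serves plain HTTP on localhost only.
server {
    listen 443 ssl;
    server_name share.spreadsheetconverter.com;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;    # placeholder path

    location / {
        proxy_pass http://127.0.0.1:8080;  # assumed Yaws HTTP listener
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this, the TLS handshake is handled entirely by nginx's OpenSSL stack, and Yaws only ever sees fast local HTTP connections.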