Sorry, I'm an Erlang newbie and this may be a stupid question, but please help me solve this issue.
I have written an Erlang server to replace the Node.js one I was using, which ate all my memory, and I'm hoping Erlang is the way out. The server works properly under unit tests and internal testing, but hits high CPU usage in the stress test.
After trimming the code down, I found that the CPU burst was due to the TCP receive from clients:
receiveClientPacket(Sock) ->
    inet:setopts(Sock, [{active, once}, {buffer, ?CLIENTHEARTBEATSIZE}]),
    receive
        {tcp, Sock, Data} ->
            {ok, Data};
        {tcp_closed, Sock} ->
            {error, closed}
    after ?CLIENTRECCEIVETIMEOUT ->
        {error, timeout}
    end.
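One thing worth trying: the code above calls inet:setopts/2 with both options on every single receive, which issues a port command each time. Only {active, once} actually needs to be re-armed per message; the buffer size can be set once after accept. A minimal sketch of that split, reusing the macros from the question (init_client_socket/1 is a hypothetical setup helper called once per connection):

```erlang
%% Called once, right after gen_tcp:accept/1 returns the socket.
%% Setting the buffer here avoids repeating it on every receive.
init_client_socket(Sock) ->
    ok = inet:setopts(Sock, [{buffer, ?CLIENTHEARTBEATSIZE}]).

%% Per-message receive: only re-arm {active, once}.
receiveClientPacket(Sock) ->
    ok = inet:setopts(Sock, [{active, once}]),
    receive
        {tcp, Sock, Data} ->
            {ok, Data};
        {tcp_closed, Sock} ->
            {error, closed}
    after ?CLIENTRECCEIVETIMEOUT ->
        {error, timeout}
    end.
```

This is only a sketch under the assumption that the repeated setopts call is part of the overhead; profiling with etop or fprof would confirm where the CPU actually goes.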
I tried making the process sleep for 10 hours at the beginning of the function (so that it never called receive), and the CPU didn't burst at all. I therefore conclude that the CPU burst comes from the TCP receive. (Please correct me if I'm mistaken.)
Here is some information about my stress test:
I'm running the Erlang server on an Amazon Linux AMI (large instance, 64-bit). Is the burst caused by Linux? I have no idea how the system is using up the CPU. Or is it my poor code? (I believe so...)
In the real situation, our servers don't only receive ping/pong, but also messages, which is a lot more load... This is only the first step...
Millions of thanks to anyone who can save me.
Anita~*
~~~~~~~~~~~~~~~~~~~~~~~
Information about large instance (for reference):
This article about building a non-blocking TCP server using OTP principles might be an interesting read for you. You could also have a look at Ranch, which is used by Cowboy, the Erlang HTTP server; it can support a very large number of connections while keeping a low memory footprint.
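For reference, a Ranch protocol handler is a small module; the sketch below follows the echo example from Ranch's own documentation (Ranch 2.x API, where the handler calls ranch:handshake/1 and reads in passive mode via Transport:recv/3):

```erlang
-module(echo_protocol).
-behaviour(ranch_protocol).
-export([start_link/3, init/3]).

%% Ranch calls start_link/3 for every accepted connection.
start_link(Ref, Transport, Opts) ->
    Pid = spawn_link(?MODULE, init, [Ref, Transport, Opts]),
    {ok, Pid}.

init(Ref, Transport, _Opts) ->
    %% Take ownership of the socket from the acceptor.
    {ok, Socket} = ranch:handshake(Ref),
    loop(Socket, Transport).

loop(Socket, Transport) ->
    %% Passive-mode receive with a 30-second timeout; echoes data back.
    case Transport:recv(Socket, 0, 30000) of
        {ok, Data} ->
            Transport:send(Socket, Data),
            loop(Socket, Transport);
        _ ->
            ok = Transport:close(Socket)
    end.
```

Ranch then handles the acceptor pool and connection supervision for you, which is exactly the part that is hard to get right (and CPU-efficient) by hand.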