 

Erlang/OTP - Timing Applications

I am interested in benchmarking different parts of my program for speed. I have tried using info(statistics) and erlang:now().

I need to know, down to the microsecond, what the average speed is. I don't know why I am having trouble with a script I wrote.

The measurement should be able to start anywhere and end anywhere. I ran into a problem when I tried starting it on a process that may be running up to four times in parallel.

Is there anyone who already has a solution to this issue?

EDIT:

Willing to give a bounty if someone can provide a script to do it. It needs to follow the timing through multiple spawned processes. I cannot accept a function like timer, at least in the implementations I have seen; it only traverses one process, and even then major editing is necessary for a full test of a full program. Hope I made it clear enough.
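For reference, here is a reduced sketch of the kind of per-process timing I have been attempting (the module and function names are made up for illustration): each worker records erlang:now() before and after its work and reports the elapsed microseconds back to a collector, which averages them.

-module(bench_sketch).
-export([run/2]).

%% Spawn N copies of Fun, time each run with erlang:now()/timer:now_diff
%% (microsecond resolution) and return the average runtime in microseconds.
run(N, Fun) ->
    Parent = self(),
    [spawn(fun() ->
               T1 = erlang:now(),
               Fun(),
               T2 = erlang:now(),
               Parent ! {elapsed, timer:now_diff(T2, T1)}
           end) || _ <- lists:seq(1, N)],
    Times = [receive {elapsed, T} -> T end || _ <- lists:seq(1, N)],
    lists:sum(Times) / N.

This only measures wall-clock time around each worker; it does not follow the work when it crosses into other processes.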

asked Dec 01 '10 by BAR



1 Answer

Here's how to use eprof, likely the easiest solution for you:

First you need to start it, like most applications out there:

23> eprof:start().
{ok,<0.95.0>}

Eprof supports two profiling modes. You can call it and ask it to profile a certain function, but we can't use that here because other processes would mess everything up. We need to start the profiling manually and tell it when to stop (this is why you won't get an easy script, by the way).
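As a rough sketch (wrapping the call in a zero-arity fun is an assumption about how you would use it), the function-based mode would look like this; eprof profiles the fun and stops on its own when it returns:

eprof:profile(fun() -> trade_calls:main_ab() end).

The manual mode, which is what we actually use here, goes like this: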

24> eprof:start_profiling([self()]).
profiling

This tells eprof to profile everything that will be run and spawned from the shell. New processes will be included here. I will run some arbitrary multiprocessing function I have, which spawns about 4 processes communicating with each other for a few seconds:

25> trade_calls:main_ab().
Spawned Carl: <0.99.0>
Spawned Jim: <0.101.0>
<0.100.0>
Jim: asking user <0.99.0> for a trade
Carl: <0.101.0> asked for a trade negotiation
Carl: accepting negotiation
Jim: starting negotiation
... <snip> ...

We can now tell eprof to stop profiling once the function is done running.

26> eprof:stop_profiling().
profiling_stopped

And we want the logs. Eprof will print them to screen by default. You can ask it to also log to a file with eprof:log(File). Then you can tell it to analyze the results. We tell it to collapse the run time from all processes into a single table with the option total (see the manual for more options):

27> eprof:analyze(total).           
FUNCTION                                  CALLS      %  TIME  [uS / CALLS]
--------                                  -----    ---  ----  [----------]
io:o_request/3                               46   0.00     0  [      0.00]
io:columns/0                                  2   0.00     0  [      0.00]
io:columns/1                                  2   0.00     0  [      0.00]
io:format/1                                   4   0.00     0  [      0.00]
io:format/2                                  46   0.00     0  [      0.00]
io:request/2                                 48   0.00     0  [      0.00]
...
erlang:atom_to_list/1                         5   0.00     0  [      0.00]
io:format/3                                  46  16.67  1000  [     21.74]
erl_eval:bindings/1                           4  16.67  1000  [    250.00]
dict:store_bkt_val/3                        400  16.67  1000  [      2.50]
dict:store/3                                114  50.00  3000  [     26.32]

And you can see that most of the time (50%) is spent in dict:store/3. 16.67% is taken by outputting the result, and another 16.67% by erl_eval (this is the overhead you get from running short functions in the shell -- parsing them takes longer than running them).

You can then start digging in from there. That's the basics of profiling run times with Erlang. Handle it with care: eprof can put quite a load on a system, especially a production one, or when profiling functions that run for too long.
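If you want to keep the results around, a possible follow-up sequence (just a sketch; the file name is an arbitrary choice) is to also log subsequent analyses to a file, ask for a per-process breakdown instead of the collapsed total, and then shut the profiler down:

eprof:log("eprof.analysis"),   %% also write subsequent analyses to this file
eprof:analyze(procs),          %% per-process breakdown; 'total' collapses everything as above
eprof:stop().                  %% stop the eprof server when you are done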

answered Oct 15 '22 by I GIVE TERRIBLE ADVICE