
Making sense of the GHC profiler

I'm trying to make sense of the GHC profiler. I have a rather simple app that uses the wreq and lens-aeson libraries, and while learning about GHC profiling, I decided to play with it a bit.

Using different options (the time tool, +RTS -p -RTS, and +RTS -p -h) I got entirely different numbers for my memory usage. With all those numbers, I'm now completely lost trying to understand what is going on and how much memory the app actually uses.

This situation reminds me of the phrase by Arthur Bloch: "A man with a watch knows what time it is. A man with two watches is never sure."

Could you please suggest how I should read all those numbers, and what each of them means?

Here are the numbers:

time -l reports around 19M

# /usr/bin/time -l ./simple-wreq
...
        3.02 real         0.39 user         0.17 sys
  19070976  maximum resident set size
         0  average shared memory size
         0  average unshared data size
         0  average unshared stack size
     21040  page reclaims
         0  page faults
         0  swaps
         0  block input operations
         0  block output operations
        71  messages sent
        71  messages received
      2991  signals received
        43  voluntary context switches
      6490  involuntary context switches

Using the +RTS -p -RTS flag reports around 92M. Although it says "total alloc", it seems strange to me that a simple app like this one can allocate and release 91M.

# ./simple-wreq +RTS -p -RTS      
# cat simple-wreq.prof
        Fri Oct 14 15:08 2016 Time and Allocation Profiling Report  (Final)

           simple-wreq +RTS -N -p -RTS

        total time  =        0.07 secs   (69 ticks @ 1000 us, 1 processor)
        total alloc =  91,905,888 bytes  (excludes profiling overheads)

COST CENTRE                             MODULE                          %time %alloc

main.g                                  Main                             60.9   88.8
MAIN                                    MAIN                             24.6    2.5
decodeLenient/look                      Data.ByteString.Base64.Internal   5.8    2.6
decodeLenientWithTable/fill             Data.ByteString.Base64.Internal   2.9    0.1
decodeLenientWithTable.\.\.fill         Data.ByteString.Base64.Internal   1.4    0.0
decodeLenientWithTable.\.\.fill.\       Data.ByteString.Base64.Internal   1.4    0.1
decodeLenientWithTable.\.\.fill.\.\.\.\ Data.ByteString.Base64.Internal   1.4    3.3
decodeLenient                           Data.ByteString.Base64.Lazy       1.4    1.4


                                                                                                             individual     inherited
COST CENTRE                                              MODULE                            no.     entries  %time %alloc   %time %alloc

MAIN                                                     MAIN                              443           0   24.6    2.5   100.0  100.0
 main                                                    Main                              887           0    0.0    0.0    75.4   97.4
  main.g                                                 Main                              889           0   60.9   88.8    75.4   97.4
   object_                                               Data.Aeson.Parser.Internal        925           0    0.0    0.0     0.0    0.2
    jstring_                                             Data.Aeson.Parser.Internal        927          50    0.0    0.2     0.0    0.2
   unstream/resize                                       Data.Text.Internal.Fusion         923         600    0.0    0.3     0.0    0.3
   decodeLenient                                         Data.ByteString.Base64.Lazy       891           0    1.4    1.4    14.5    8.1
    decodeLenient                                        Data.ByteString.Base64            897         500    0.0    0.0    13.0    6.7
....

+RTS -p -h and hp2ps show me the following picture and two numbers: 114K in the header and something around 1.8Mb on the graph.

[heap profile graph produced by hp2ps: "Memory Profiling"]

And, just in case, here is the app:

module Main where

import Network.Wreq
import Control.Lens
import Data.Aeson.Lens
import Control.Monad

main :: IO ()
main = replicateM_ 10 g
  where 
    g = do
        r <- get "http://httpbin.org/get"
        print $ r ^. responseBody
                   . key "headers"
                   . key "User-Agent"
                   . _String

UPDATE 1: Thank you everyone for the incredibly good responses. As was suggested, I'm adding the +RTS -s output, so the full picture is available for everyone who reads this.

# ./simple-wreq +RTS -s
...
     128,875,432 bytes allocated in the heap
      32,414,616 bytes copied during GC
       2,394,888 bytes maximum residency (16 sample(s))
         355,192 bytes maximum slop
               7 MB total memory in use (0 MB lost due to fragmentation)

                                     Tot time (elapsed)  Avg pause  Max pause
  Gen  0       194 colls,     0 par    0.018s   0.022s     0.0001s    0.0022s
  Gen  1        16 colls,     0 par    0.027s   0.031s     0.0019s    0.0042s

UPDATE 2: The size of the executable:

# du -h simple-wreq
63M     simple-wreq
Asked Oct 14 '16 by Slabko


1 Answer

A man with a watch knows what time it is. A man with two watches is never sure.

Ah, but what do the two watches show? Are both meant to show the current time in UTC? Or is one of them supposed to show the time in UTC, and the other one the time at a certain point on Mars? As long as they are in sync, the second scenario wouldn't be a problem, right?

And that is exactly what is happening here. You are comparing different memory measurements:

  • the maximum residency
  • the total amount of allocated memory

The maximum residency is the highest amount of memory your program ever uses at a given time. That's 19MB. However, the total amount of allocated memory is a lot more, since that's how GHC works: it "allocates" memory for objects that are garbage collected, which is almost everything that's not unpacked.

Let us inspect a C example for this:

#include <stdlib.h>

int main() {
   int i;
   char * mem;

   for(i = 0; i < 5; ++i) {
      /* allocate 19 MB and release it again right away */
      mem = malloc(19 * 1000 * 1000);
      free(mem);
   }
   return 0;
}

Whenever we use malloc, we will allocate 19 megabytes of memory. However, we free the memory immediately after. The highest amount of memory we ever have at one point is therefore 19 megabytes (and a little bit more for the stack and the program itself).

However, in total we allocate 5 * 19M = 95M. Still, we could run our little program just fine with about 20 megabytes of RAM. That's the difference between total allocated memory and maximum residency. Note that the residency reported by time is always at least du <executable>, since the executable has to reside in memory too.
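The same effect can be sketched in Haskell. The little program below is only an illustration (it is not part of the question): each pass builds a large list, consumes it, and lets the garbage collector reclaim it, so the total allocation grows with every pass while the maximum residency stays small.

module Main where

import Control.Monad (forM_)
import Data.List (foldl')

main :: IO ()
main = forM_ [1 .. 5 :: Int] $ \i ->
    -- The list is produced lazily and consumed immediately by foldl',
    -- so only a constant amount of it is live at any moment, even though
    -- every pass allocates millions of cons cells in total.
    print (foldl' (+) 0 [1 .. 1000000 * i])

Running it with +RTS -s shows a large "bytes allocated in the heap" figure but a maximum residency of only a few kilobytes (and with -O2, list fusion may remove much of the allocation altogether).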

That being said, the easiest way to generate statistics is -s, which will show what the maximum residency was from the Haskell program's point of view. In your case it will be the 1.9M, the number in your heap profile (or double that amount due to profiling). And yeah, Haskell executables tend to get extremely large, since libraries are statically linked.
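To see the opposite situation in the -s output, here is another small sketch (again just an illustration, not code from the question) where the maximum residency ends up on the same order as the data that was allocated, because the whole list is kept live by a second consumer:

module Main where

import Data.List (foldl')

main :: IO ()
main = do
    let xs = [1 .. 1000000] :: [Int]
    print (foldl' (+) 0 xs)  -- forces the whole list
    print (length xs)        -- this second use keeps every cell live, so
                             -- the maximum residency is roughly the size
                             -- of the list, not a small constant

Here +RTS -s reports a maximum residency of tens of megabytes rather than a few kilobytes, because the spine of the list cannot be collected until the second traversal is done.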

Answered Oct 04 '22 by Zeta