
What caused my elapsed time to be much longer than my user time?

Tags:

performance

r

I am benchmarking some R statements (see details here) and have found that my elapsed time is much longer than my user time.

   user  system elapsed 
  7.910   7.750  53.916 

Could someone help me to understand what factors (R or hardware) determine the difference between user time and elapsed time, and how I can improve it? In case it helps: I am running data.table data manipulation on a Macbook Air 1.7Ghz i5 with 4GB RAM.

Update: My crude understanding is that user time is how long my CPU spends processing my job, while elapsed time is the length of time from when I submit a job until I get the data back. What else did my computer need to do after processing for 8 seconds?

Update: As suggested in the comments, I ran the procedure a couple of times on two data.tables: Y, with 104 columns (sorry, I keep adding columns as time goes by), and X, a subset of Y with only the 3 key columns. Below are the updated timings. Please note that I ran these two procedures consecutively, so the memory state should be similar.

 X <- Y[, list(Year, MemberID, Month)]

 system.time({
   X[, Month := -Month]
   setkey(X, Year, MemberID, Month)
   X[, Month := -Month]
 })
   user  system elapsed 
  3.490   0.031   3.519 

 system.time({
   Y[, Month := -Month]
   setkey(Y, Year, MemberID, Month)
   Y[, Month := -Month]
 })
   user  system elapsed 
  8.444   5.564  36.284 

Here are the sizes of the only two objects in my workspace (commas added):

object.size(X)
83,237,624 bytes

 object.size(Y)
2,449,521,080 bytes

Thank you

asked Dec 03 '12 by AdamNYC


1 Answer

User time is how many seconds the computer spent doing your calculations. System time is how much time the operating system spent responding to your program's requests. Elapsed time is the sum of those two, plus whatever "waiting around" your program and/or the OS had to do. It's important to note that these numbers are the aggregate of time spent. Your program might compute for 1 second, then wait on the OS for one second, then wait on disk for 3 seconds and repeat this cycle many times while it's running.
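
To see the difference in isolation, here is a minimal R illustration (a sketch, not your workload): time an expression that does nothing but wait. While R sleeps, the CPU does essentially nothing, so user and system stay near zero while elapsed keeps growing.

# Illustration only: waiting counts toward elapsed time but not user time,
# because the CPU is idle while R waits for the sleep to finish.
system.time(Sys.sleep(5))
# Expect user and system to be close to 0 and elapsed to be roughly 5 seconds.

In your case the waiting is almost certainly for the disk rather than a sleep, but it shows up in elapsed time the same way.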

Based on the fact that your program took as much system time as user time, it was doing something very IO intensive: reading from disk a lot or writing to disk a lot. RAM is pretty fast, usually a few hundred nanoseconds per access, so if everything fits in RAM elapsed time is usually only a little longer than user time. But a disk might take a few milliseconds to seek and even longer to return the data. That's slower by a factor of ten thousand or more.

We've determined that your processor was "doing stuff" for ~8 + ~8 = ~ 16 seconds. What was it doing for the other ~54 - ~16 = ~38 seconds? Waiting for the hard drive to send it the data it asked for.

UPDATE1:

Matthew has made some excellent points: I'm making assumptions that I probably shouldn't be making. Adam, if you'd care to publish a list of all the columns in your table (their data types are all we need), we can get a better idea of what's going on.

I just cooked up a little do-nothing program to validate my assumption that time not spent in userspace and kernel space is likely spent waiting for IO.

#include <stdio.h>
/* Pure userspace arithmetic: no system calls, so essentially all of the
   elapsed time should be reported as "user" time. */
int main()
{
    int i;
    for(i = 0; i < 1000000000; i++)
    {
        int j, k, l, m;
        j = 10;
        k = i;
        l = j + k;
        m = j + k - i + l;
    }
    return 0;
}

When I run the resulting program and time it I see something like this:

mike@computer:~$ time ./waste_user
real    0m4.670s
user    0m4.660s
sys 0m0.000s
mike@computer:~$ 

As you can see by inspection, the program does no real work, and it doesn't ask the kernel to do anything beyond loading it into RAM and starting it running. So nearly ALL the "real" time is spent as "user" time.

Now a kernel-heavy do-nothing program (with fewer iterations to keep the running time reasonable):

#include <stdio.h>
/* Kernel-heavy: the bytes all come from /dev/urandom, so the kernel does the
   work and most of the time shows up as "sys". */
int main()
{
    FILE * random;
    random = fopen("/dev/urandom", "r");
    int i;
    for(i = 0; i < 10000000; i++)
    {
        fgetc(random);
    }
    return 0;
}

When I run that one, I see something more like this:

mike@computer:~$ time ./waste_sys
real    0m1.138s
user    0m0.090s
sys     0m1.040s
mike@computer:~$ 

Again it's easy to see by inspection that the program does little more than ask the kernel to give it random bytes. /dev/urandom is a non-blocking source of entropy. What does that mean? The kernel uses a pseudo-random number generator to quickly generate "random" values for our little test program. That means the kernel has to do some computation but it can return very quickly. So this program mostly waits for the kernel to compute for it, and we can see that reflected in the fact that almost all the time is spent on sys.

Now we're going to make one little change. Instead of reading from /dev/urandom, which is non-blocking, we'll read from /dev/random, which is blocking. What does that mean? The kernel doesn't do much computing for it; rather, it waits around for events on your computer that the kernel developers have empirically determined to be random. (We'll also do far fewer iterations, since this takes much longer.)

#include <stdio.h>
/* IO-bound: /dev/random blocks until the kernel has gathered enough entropy,
   so the program mostly just waits and accumulates neither user nor sys time. */
int main()
{
    FILE * random;
    random = fopen("/dev/random", "r");
    int i;
    for(i = 0; i < 100; i++)
    {
        fgetc(random);
    }
    return 0;
}

And when I run and time this version of the program, here's what I see:

mike@computer:~$ time ./waste_io
real    0m41.451s
user    0m0.000s
sys     0m0.000s
mike@computer:~$ 

It took 41 seconds to run, but an immeasurably small amount of time was spent in user and sys. Why is that? All the time was spent in the kernel, but not doing active computation. The kernel was just waiting for stuff to happen. Once enough entropy was collected, the kernel would wake back up and send the data back to the program. (Note that it might take much less or much more time to run on your computer, depending on what else is going on.) I argue that the difference in time between user+sys and real is IO.

So what does all this mean? It doesn't prove that my answer is right because there could be other explanations for why you're seeing the behavior that you are. But it does demonstrate the differences between user compute time, kernel compute time and what I'm claiming is time spent doing IO.

Here's my source for the difference between /dev/urandom and /dev/random: http://en.wikipedia.org/wiki//dev/random

UPDATE2:

I thought I would try to address Matthew's suggestion that perhaps L2 cache misses are at the root of the problem. The Core i7 has a 64-byte cache line. I don't know how much you know about caches, so I'll provide some details. When you ask for a value from memory, the CPU doesn't fetch just that one value; it fetches the whole 64 bytes around it. That means that if you're accessing memory in a very predictable pattern -- say array[0], array[1], array[2], and so on -- it takes a while to get value 0, but then 1, 2, 3, 4... are much faster. Until you get to the next cache line, that is. If this were an array of ints, element 0 would be slow, 1..15 would be fast, 16 would be slow, 17..31 would be fast, and so on.

http://software.intel.com/en-us/forums/topic/296674

In order to test this out, I've made two programs. They both contain an array of structs with 1024*1024 elements. In one case the struct holds a single double; in the other it holds 8 doubles. A double is 8 bytes long, so in the second program we're accessing memory in the worst possible way for the cache, while the first program gets to use the cache nicely.

#include <stdio.h>
#include <stdlib.h>
#define MANY_MEGS 1048576
/* Each struct is only 8 bytes, so 8 consecutive elements share one 64-byte
   cache line: the sequential scan below is almost all cache hits. */
typedef struct {
    double a;
} PartialLine;
int main()
{
    int i, j;
    PartialLine* many_lines;
    int total_bytes = MANY_MEGS * sizeof(PartialLine);
    printf("Striding through %d total bytes, %zu bytes at a time\n", total_bytes, sizeof(PartialLine));
    many_lines = (PartialLine*) malloc(total_bytes);
    PartialLine line;
    double x;
    for(i = 0; i < 300; i++)
    {
        for(j = 0; j < MANY_MEGS; j++)
        {
            line = many_lines[j];
            x = line.a;
        }
    }
    return 0;
}

When I run this program I see this output:

mike@computer:~$ time ./cache_hits
Striding through 8388608 total bytes, 8 bytes at a time
real    0m3.194s
user    0m3.140s
sys     0m0.016s
mike@computer:~$

Here's the program with the big structs; each one takes up 64 bytes of memory, not 8.

#include <stdio.h>
#include <stdlib.h>
#define MANY_MEGS 1048576
/* Each struct is 64 bytes -- a whole cache line -- so reading .a from each
   element touches a new cache line every time: mostly cache misses. */
typedef struct {
    double a, b, c, d, e, f, g, h;
} WholeLine;
int main()
{
    int i, j;
    WholeLine* many_lines;
    int total_bytes = MANY_MEGS * sizeof(WholeLine);
    printf("Striding through %d total bytes, %zu bytes at a time\n", total_bytes, sizeof(WholeLine));
    many_lines = (WholeLine*) malloc(total_bytes);
    WholeLine line;
    double x;
    for(i = 0; i < 300; i++)
    {
        for(j = 0; j < MANY_MEGS; j++)
        {
            line = many_lines[j];
            x = line.a;
        }
    }
    return 0;
}

And when I run it, I see this:

mike@computer:~$ time ./cache_misses
Striding through 67108864 total bytes, 64 bytes at a time
real    0m14.367s
user    0m14.245s
sys     0m0.088s
mike@computer:~$ 

The second program -- the one designed to have cache misses -- took roughly five times as long to run for the exact same number of memory accesses.

Also worth noting is that in both cases, all the time was spent in user, not sys. That means the OS counts the time your program spends waiting for data against your program, not against the operating system. Given these two examples, I think it's unlikely that cache misses are what's making your elapsed time substantially longer than your user time.

UPDATE3:

I just saw your update showing that the really slimmed-down table ran about 10x faster than the regular-sized one. That too indicates to me that (as another Matthew also said) you're running out of RAM.

Once your program tries to use more memory than your computer actually has installed, it starts swapping to disk. This is better than your program crashing, but it's much slower than RAM and can cause substantial slowdowns.
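
On the R side, a quick way to check how close you are to that point (just a sketch, assuming base R plus the data.table package you are already using) is to look at how much memory R is holding:

# How much memory R has currently allocated (see the "used"/Mb columns).
gc()

# data.table's helper: lists every data.table in the session with its size in MB.
library(data.table)
tables()

With Y at roughly 2.4GB on a 4GB machine, any operation that needs a working copy can easily push the total past physical RAM and into swap.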

I'll try and put together an example that shows swap problems tomorrow.

UPDATE4:

Okay, here's an example program which is very similar to the previous one, but now the struct is 4096 bytes instead of 8. In total this program will use 2GB of memory rather than 64MB. I also change things up a bit and make sure I access memory randomly instead of element-by-element, so that the kernel can't get smart and start anticipating my program's needs. The caches are driven by hardware (and rely on fairly simple heuristics), but it's entirely possible that kswapd, the kernel swap daemon, is substantially smarter than the cache.

#include <stdio.h>
#include <stdlib.h>
/* Each struct is one 4096-byte page. The working set is about 2GB, touched in
   a random order, so neither the hardware cache nor the kernel can predict
   which page is needed next. */
typedef struct {
    double numbers[512];
} WholePage;
int main()
{
    int memory_ops = 1024*1024;
    int total_memory = memory_ops / 2;
    int num_chunks = 8;
    int chunk_bytes = total_memory / num_chunks * sizeof(WholePage);
    int i, j, k, l;
    printf("Bouncing through %d MB, %zu bytes at a time\n", chunk_bytes/1024*num_chunks/1024, sizeof(WholePage));
    WholePage* many_pages[num_chunks];
    for(i = 0; i < num_chunks; i++)
    {
        many_pages[i] = (WholePage*) malloc(chunk_bytes);
        if(many_pages[i] == 0){ exit(1); }    /* bail out if an allocation fails */
    }
    WholePage* page_list;
    WholePage* page;
    double x;
    for(i = 0; i < 300*memory_ops; i++)
    {
        j = rand() % num_chunks;
        k = rand() % (total_memory / num_chunks);
        l = rand() % 512;
        page_list = many_pages[j];
        page = page_list + k;
        x = page->numbers[l];
    }
    return 0;
}

Going from the program I called cache_hits to cache_misses, we saw memory use increase 8x and execution time increase roughly 5x. What do you expect to see when we run this program? It uses 32x as much memory as cache_misses but performs the same number of memory accesses.

mike@computer:~$ time ./page_misses
Bouncing through 2048 MB, 4096 bytes at a time
real    2m1.327s
user    1m56.483s
sys     0m0.588s
mike@computer:~$ 

It took 8x as long as cache_misses and 40x as long as cache_hits. And this is on a computer with 4GB of RAM. I used 50% of my RAM in this program versus 1.5% for cache_misses and 0.2% for cache_hits. It got substantially slower even though it wasn't using up ALL the RAM my computer has. It was enough to be significant.

I hope this is a decent primer on how to diagnose why a program is running slowly.

answered Oct 20 '22 by Mike Sandford