How to produce the CPU cache effect in C and Java?

In Ulrich Drepper's paper What Every Programmer Should Know About Memory, part 3 (CPU Caches), he shows a graph of the relationship between "working set" size and the CPU cycles consumed per operation (in this case, sequential reading). There are two jumps in the graph which indicate the sizes of the L1 and L2 caches. I wrote my own program to reproduce the effect in C. It simply reads an int[] array sequentially from head to tail, and I've tried different sizes of the array (from 1KB to 1MB). When I plot the data, there is no jump; the graph is a straight line.

My questions are:

  1. Is there something wrong with my method? What is the right way to produce the CPU cache effect (to see the jumps)?
  2. I was thinking: if it is a sequential read, then it should operate like this. Reading the first element is a cache miss, but the rest of the accesses within the same cache line (64 bytes) will be hits. With the help of prefetching, the latency of reading the next cache line will be hidden. The CPU will keep reading data into the L1 cache; even when the working set size is over the L1 cache size, it will evict the least recently used lines and continue prefetching. So most of the cache misses will be hidden: the time spent fetching data from L2 is hidden behind the reading activity, meaning they operate at the same time, and the associativity (8-way in my case) helps hide the latency of reading data from L2. So the behaviour of my program should be right, or am I missing something?
  3. Is it possible to get the same effect in Java?

By the way, I am doing this on Linux.


Edit 1

Thanks for Stephen C's suggestion; here is some additional information. This is my code:

int *arrayInt;

void initInt(long len) {
    int i;
    arrayInt = (int *)malloc(len * sizeof(int));
    memset(arrayInt, 0, len * sizeof(int));
}

long sreadInt(long len) {
    long i;
    int sum = 0;
    struct timespec tsStart, tsEnd;

    initInt(len);

    clock_gettime(CLOCK_REALTIME, &tsStart);
    for(i = 0; i < len; i++) {
        sum += arrayInt[i];
    }
    clock_gettime(CLOCK_REALTIME, &tsEnd);
    free(arrayInt);
    // include tv_sec in the difference, otherwise the result is wrong
    // whenever the measurement crosses a second boundary
    return ((tsEnd.tv_sec - tsStart.tv_sec) * 1000000000L
            + (tsEnd.tv_nsec - tsStart.tv_nsec)) / len;
}

In the main() function, I've tried array sizes from 1KB to 100MB, and it's still the same: the average time consumed per element is 2 nanoseconds. I think that time is the access time of L1d.

My cache size:

L1d == 32k

L2 == 256k

L3 == 6144k


EDIT 2

I've changed my code to use a linked list.

// element type
struct l {
    struct l *n;
    long int pad[NPAD]; // the NPAD could be changed, in my case I set it to 1
};

struct l *array;
long globalSum;

// for init the array
void init(long len) {
    long i, j;

    struct l *ptr;

    array = (struct l*)malloc(sizeof(struct l));
    ptr = array;
    for(j = 0; j < NPAD; j++) {
        ptr->pad[j] = j;
    }
    ptr->n = NULL;

    for(i = 1; i < len; i++) {
        ptr->n = (struct l*)malloc(sizeof(struct l));
        ptr = ptr->n;
        for(j = 0; j < NPAD; j++) {
            ptr->pad[j] = i + j;
        }
        ptr->n = NULL;
    }

}

// for free the array when operation is done
void release() {
    struct l *ptr = array;
    struct l *tmp = NULL;
    while(ptr) {
        tmp = ptr;
        ptr = ptr->n;
        free(tmp);
    }
}

double sread(long len) {
    int i;
    long sum = 0;

    struct l *ptr;
    struct timespec tsStart, tsEnd;


    init(len);

    ptr = array;

    clock_gettime(CLOCK_REALTIME, &tsStart);
    while(ptr) {
        for(i = 0; i < NPAD; i++) {
            sum += ptr->pad[i];
        }
        ptr = ptr->n;
    }
    clock_gettime(CLOCK_REALTIME, &tsEnd);

    release();

    globalSum += sum;

    // include tv_sec so runs crossing a second boundary are timed correctly
    return ((double)(tsEnd.tv_sec - tsStart.tv_sec) * 1e9
            + (double)(tsEnd.tv_nsec - tsStart.tv_nsec)) / (double)len;
}

Finally, I printf out globalSum in order to avoid compiler optimization. As you can see, it is still a sequential read. I've tried array sizes up to 500MB, and the average time per element is approximately 4 nanoseconds (perhaps because it has to access both the data 'pad' and the pointer 'n', two accesses), the same as with a 1KB array. So I think it is because cache optimizations like prefetching hide the latency very well, am I right? I will try random access and post the result later.


EDIT 3

I've tried random access on the linked list. This is the result (plot: randomly accessing a linked list):

The first red line marks my L1 cache size, the second my L2. So we can see a little jump there. And sometimes the latency is still hidden well.

dawnstar asked Sep 22 '12



2 Answers

This answer isn't an answer, but more of a set of notes.

First, the CPU tends to operate on cache lines, not on individual bytes/words/dwords. This means that if you sequentially read/write an array of integers then the first access to a cache line may cause a cache miss but subsequent accesses to different integers in that same cache line won't. For 64-byte cache lines and 4-byte integers this means that you'd only get a cache miss once for every 16 accesses; which will dilute the results.

Second, the CPU has a "hardware pre-fetcher." If it detects that cache lines are being read sequentially, the hardware pre-fetcher will automatically pre-fetch cache lines it predicts will be needed next (in an attempt to fetch them into cache before they're needed).

Third, the CPU does other things (like "out of order execution") to hide fetch costs. The time difference (between cache hit and cache miss) that you can measure is the time that the CPU couldn't hide and not the total cost of the fetch.

These three things combined mean that, for sequentially reading an array of integers, it's likely that the CPU pre-fetches the next cache line while you're doing 16 reads from the previous cache line, so any cache miss costs won't be noticeable and may be entirely hidden. To prevent this, you'd want to "randomly" access each cache line once, to maximise the performance difference measured between "working set fits in cache/s" and "working set doesn't fit in cache/s".

Finally, there are other factors that may influence measurements. For example, for an OS that uses paging (e.g. Linux and almost all other modern OSs) there's a whole layer of caching above all this (TLBs/Translation Look-aside Buffers), and TLB misses start once the working set gets beyond a certain size; this should be visible as a fourth "step" in the graph. There's also interference from the kernel (IRQs, page faults, task switches, multiple CPUs, etc.), which might be visible as random noise in the graph (unless tests are repeated often and outliers discarded). There are also artifacts of the cache design (cache associativity) that can reduce the effectiveness of the cache in ways that depend on the physical addresses allocated by the kernel, which might be seen as the "steps" in the graph shifting to different places.

Brendan answered Oct 02 '22


Is there something wrong with my method?

Possibly, but without seeing your actual code that cannot be answered.

  • Your description of what your code is doing does not say whether you are reading the array once or many times.

  • The array may not be big enough ... depending on your hardware. (Don't some modern chips have a 3rd level cache of a few megabytes?)

  • In the Java case in particular you have to do lots of things the right way to implement a meaningful micro-benchmark.


In the C case:

  • You might try adjusting the C compiler's optimization switches.

  • Since your code is accessing the array serially, the compiler might be able to order the instructions so that the CPU can keep up, or the CPU might be optimistically prefetching or doing wide fetches. You could try reading the array elements in a less predictable order.

  • It is even possible that the compiler has optimized the loop away entirely, because the result of the loop calculation is not used for anything.

(According to this Q&A - How much time does it take to fetch one word from memory?, a fetch from L2 cache is ~7 nanoseconds and a fetch from main memory is ~100 nanoseconds. But you are getting ~2 nanoseconds. Something clever has to be going on here to make it run as fast as you are observing.)

Stephen C answered Oct 02 '22