 

GPU shared memory size is very small - what can I do about it?

The size of shared memory ("local memory" in OpenCL terms) is only 16 KiB on most of today's nVIDIA GPUs.
I have an application in which I need to create an array of 10,000 integers. The amount of memory needed to fit 10,000 integers is 10,000 * 4 B = 40 KB - more than fits in shared memory.

  • How can I work around this?
  • Is there any GPU that has more than 16 KiB of shared memory?
rana asked Feb 13 '11

People also ask

Why is GPU memory so small?

It's because of the ultra-high bandwidth the memory has to provide. You cannot just add more memory chips, because the increased electrical capacitance (and decreased resistance) would spoil the timing on the memory bus.

Can I change shared GPU memory?

Once you reach the BIOS menu, look for a menu similar to Graphics Settings, Video Settings or VGA Share Memory Size. You can typically find it under the Advanced menu. Then, up the Pre-Allocated VRAM to whichever option suits you best. Save the configuration and restart your computer.

Why is shared GPU memory not used?

It just means that there are some jobs that need the CPU to do some work as well as the GPU. If shared memory was up at 6GB as well as or instead of the dedicated GPU then you might have cause to be concerned, but at such a tiny amount it is not doing any significant amount of work in that area.

Where does GPU get shared memory?

Shared GPU memory is “sourced” and taken from your System RAM – it's not physical, but virtual – basically just an allocation or reserved area on your System RAM; this memory can then be used as VRAM (Video-RAM) once your dedicated GPU memory (if you have any) is full.


2 Answers

Think of shared memory as explicitly managed cache. You will need to store your array in global memory and cache parts of it in shared memory as needed, either by making multiple passes or some other scheme which minimises the number of loads and stores to/from global memory.

How you implement this will depend on your algorithm - if you can give some details of what it is exactly that you are trying to implement you may get some more concrete suggestions.

One last point - be aware that shared memory is shared between all threads in a block, so you have far less than 16 KiB per thread, unless you have a single data structure which is common to all threads in a block.
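The "explicitly managed cache" approach above can be sketched in CUDA roughly as follows. This is a minimal, hypothetical example (a block-wise sum reduction over the 10,000-element array); the kernel name, tile size, and reduction are illustrative assumptions, not the asker's actual algorithm. Each block repeatedly stages a small tile of the global array into shared memory, operates on it there, then moves on:

```cuda
// Sketch: stream a large global array through a small shared-memory tile.
// Assumes the kernel is launched with blockDim.x == TILE threads per block.
#include <cuda_runtime.h>
#include <stdio.h>

#define TILE 256   // 256 ints = 1 KB of shared memory per block

__global__ void sumKernel(const int *in, int n, int *out)
{
    __shared__ int tile[TILE];              // explicitly managed "cache"
    int tid = threadIdx.x;
    int acc = 0;

    // Each block walks the array in TILE-sized chunks (grid-stride loop),
    // so only TILE ints are ever resident in shared memory at once.
    for (int base = blockIdx.x * TILE; base < n; base += gridDim.x * TILE) {
        int idx = base + tid;
        tile[tid] = (idx < n) ? in[idx] : 0;   // global -> shared
        __syncthreads();
        // ... work on tile[] here instead of global memory ...
        acc += tile[tid];
        __syncthreads();                       // before reusing tile[]
    }

    // Standard block-level reduction in shared memory.
    tile[tid] = acc;
    __syncthreads();
    for (int s = TILE / 2; s > 0; s >>= 1) {
        if (tid < s) tile[tid] += tile[tid + s];
        __syncthreads();
    }
    if (tid == 0) atomicAdd(out, tile[0]);     // out must be zeroed first
}
```

The key point is that the shared-memory footprint is fixed at TILE ints regardless of how large the input array is; the 10,000 integers stay in global memory and only pass through shared memory a tile at a time.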

Paul R answered Jan 30 '23

All compute capability 2.0 and greater devices (most in the last year or two) have 48 KB of available shared memory per multiprocessor. That being said, Paul's answer is correct in that you likely will not want to load all 10K integers into a single multiprocessor.
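You can check the shared-memory limit of your own device at runtime with the CUDA runtime API; a minimal query looks like this (device 0 assumed):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0
    printf("%s: compute capability %d.%d, %zu bytes shared memory per block\n",
           prop.name, prop.major, prop.minor, prop.sharedMemPerBlock);
    return 0;
}
```

On a compute capability 2.0+ device this reports 49152 bytes (48 KB) per block, versus 16384 bytes (16 KB) on the 1.x hardware the question describes.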

Devin Lane answered Jan 30 '23