How is 3D texture memory cached?

Tags:

cuda

I have an application where 96% of the time is spent in 3D texture memory interpolation reads (red points in the diagram below).

My kernels are designed to do ~1000 memory reads along a line that crosses the texture memory arbitrarily, one thread per line (blue lines). These lines are densely packed, very close to each other, and travel in almost parallel directions.

The image below shows the concept of what I am talking about. Imagine the image is a single "slice" from the 3D texture memory, e.g. z=24, and that it is repeated for all z.

[Figure: a 2D slice of the 3D texture; red points mark the interpolated reads along the densely packed, almost parallel blue lines]

At the moment, I am executing the threads just one line after the other, but I realized that I might benefit from texture memory locality if I run adjacent lines in the same block, reducing the time spent on memory reads.
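For reference, here is a minimal sketch of what my kernels do (the kernel name, line geometry, and accumulation are simplified for illustration; this is not the actual kernel), using the texture-reference API available in CUDA 7.5:

texture<float, cudaTextureType3D, cudaReadModeElementType> tex; // bound to the 3D volume elsewhere

__global__ void sampleLines(float *out, float3 start, float3 step,
                            int nSamples, int nLines)
{
    int line = blockIdx.x * blockDim.x + threadIdx.x; // one thread per line
    if (line >= nLines) return;

    // Adjacent threads start on adjacent, almost parallel lines.
    float3 p = make_float3(start.x, start.y + line, start.z);
    float acc = 0.0f;
    for (int i = 0; i < nSamples; ++i) {   // ~1000 reads per line
        acc += tex3D(tex, p.x, p.y, p.z);  // hardware-interpolated fetch
        p.x += step.x; p.y += step.y; p.z += step.z;
    }
    out[line] = acc;
}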

My questions are

  • If I have a 3D texture with linear interpolation, how could I benefit most from the data locality? By running adjacent lines in the same block in 2D, or adjacent lines in 3D (3D neighbors, or just neighbors per slice)?

  • How "big" is the cache (or how can I check this in the specs)? Does it load e.g. the asked voxel and +-50 around it in every direction? This will directly relate with the amount of neighboring lines I'd put in each block!

  • How does interpolation apply to the texture memory cache? Is the interpolation also performed in the cache, or does the fact that it is interpolated reduce memory latency because it needs to be done in the texture memory itself?


Working on an NVIDIA Tesla K40 with CUDA 7.5, if it helps.

asked Mar 11 '16 by Ander Biguri



1 Answer

As this question is getting old, and no answers seem to exist for some of the things I asked, I will give a benchmark-based answer, drawn from my research while building the TIGRE toolbox. You can get the source code in the GitHub repo.

As the answer is based on the toolbox's specific application, computed tomography, my results are not necessarily true for all applications using texture memory. Additionally, my GPU (see above) is quite a decent one, so your mileage may vary on different hardware.


The specifics

It is important to note that this is a Cone Beam Computed Tomography application. This means that:

  • The lines are more or less uniformly distributed across the image, covering most of it.
  • The lines are more or less parallel to adjacent lines, and will predominantly lie in a single plane. E.g. they are always more or less horizontal, never vertical.
  • The sample rate along the lines is the same, meaning that adjacent lines sample their next points very close to each other.

All this information is important for memory locality.

Additionally, as said in the question, 96% of the kernel time is spent reading memory, so it is safe to assume that the variation in the reported kernel times is due to changes in memory read speed.


The questions

If I have a 3D texture with linear interpolation, how could I benefit most from the data locality? By running adjacent lines in the same block in 2D, or adjacent lines in 3D (3D neighbors, or just neighbors per slice)?

Once one gets a bit more experienced with texture memory, one sees that the straightforward answer is: run as many adjacent lines together as possible. The closer the memory reads are to each other in image index, the better.

For tomography, this effectively means running square blocks of detector pixels, packing the rays (blue lines in the original image) together.
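As a sketch of what this looks like (the kernel name and detector parameters here are hypothetical, not TIGRE's actual code): each thread handles one detector pixel, so a square 2D thread block covers a square patch of adjacent rays.

__global__ void projectRays(float *sino, int nU, int nV)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x; // detector column
    int v = blockIdx.y * blockDim.y + threadIdx.y; // detector row
    if (u >= nU || v >= nV) return;
    // ... march along the ray for pixel (u,v), reading the 3D texture ...
}

// Host side, inside the launch function: square blocks keep each block's
// rays spatially adjacent, so their texture reads stay close in the cache.
dim3 block(8, 8); // the block-size choice is benchmarked below
dim3 grid((nU + block.x - 1) / block.x,
          (nV + block.y - 1) / block.y);
projectRays<<<grid, block>>>(d_sino, nU, nV);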

How "big" is the cache (or how can I check this in the specs)? Does it load e.g. the asked voxel and +-50 around it in every direction? This will directly relate with the amount of neighboring lines I'd put in each block!

While it is impossible to say for sure, I empirically found that running smaller blocks is better. My results, for a 512^3 image with 512^2 rays and a sample rate of ~2 samples/voxel, by block size:

32x32 -> [18~25] ms
16x16 -> [14~18] ms
8x8   -> [11~14] ms
4x4   -> [25~29] ms

The block sizes are effectively the sizes of the squares of adjacent rays that are computed together. E.g. 32x32 means that 1024 X-rays are computed in parallel, adjacent to each other in a 32x32 square. As the exact same operations are performed along each line, the samples are taken across a roughly 32x32 patch of the image at a time, covering approximately 32x32x1 indexes.
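For reference, timings like the ones above can be gathered with standard CUDA event timing (a sketch, reusing the hypothetical projectRays kernel from before):

float timeKernel(dim3 grid, dim3 block, float *d_sino, int nU, int nV)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    projectRays<<<grid, block>>>(d_sino, nU, nV);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop); // kernel time in milliseconds
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}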

It is predictable that, at some point, shrinking the blocks further would make the speed drop again, but this happens at a (at least for me) surprisingly small size. I think this hints that the texture cache loads relatively small chunks of data from the image.

These results show additional information not asked for in the original question: what happens, speed-wise, with out-of-bounds samples. As adding any if condition to the kernel would significantly slow it down, I programmed the kernel to start sampling at a point on the line that is guaranteed to be outside the image, and to stop at a similar point. This was done by creating a fictitious "sphere" around the image and always taking the same number of samples, independent of the angle between the image and the lines themselves.

If you look at the kernel times I have shown, you will notice that all of them lie in the range [t, ~sqrt(2)*t], and I have checked that the longer times indeed occur when the angle between the lines and the image is a multiple of 45 degrees, where more samples fall inside the image (texture); a line crossing a square region diagonally at 45 degrees is sqrt(2) times longer than an axis-aligned one, hence the factor.

This means that sampling outside the image indexes (e.g. tex3D(tex, -5, -5, -5)) is computationally free: no time is spent reading out of bounds. It is better to read a lot of out-of-bounds points than to check whether each point falls inside the image, as the if condition would slow the kernel down while sampling out of bounds has zero cost.
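A sketch of this branch-free scheme (illustrative only; the real kernels compute a per-ray entry point on the bounding sphere): with cudaAddressModeBorder set on the texture, tex3D() returns 0 for out-of-bounds coordinates, so every ray can take the same fixed number of steps with no bounds check inside the loop.

texture<float, cudaTextureType3D, cudaReadModeElementType> tex;

void setupTexture()
{
    // Out-of-bounds reads return 0, so no per-sample bounds check is needed.
    tex.addressMode[0] = cudaAddressModeBorder;
    tex.addressMode[1] = cudaAddressModeBorder;
    tex.addressMode[2] = cudaAddressModeBorder;
}

__global__ void marchFixedLength(float *out, float3 entry, float3 step,
                                 int nSteps, int nRays)
{
    int ray = blockIdx.x * blockDim.x + threadIdx.x;
    if (ray >= nRays) return;

    // Per-ray entry point omitted for brevity; it is always outside the image.
    float3 p = entry;
    float acc = 0.0f;
    for (int i = 0; i < nSteps; ++i) {    // identical count for every ray
        acc += tex3D(tex, p.x, p.y, p.z); // zero outside the volume
        p.x += step.x; p.y += step.y; p.z += step.z;
    }
    out[ray] = acc;
}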

How does interpolation apply to the texture memory cache? Is the interpolation also performed in the cache, or does the fact that it is interpolated reduce memory latency because it needs to be done in the texture memory itself?

To test this, I ran the same code with linear interpolation (cudaFilterModeLinear) and nearest neighbor interpolation (cudaFilterModePoint). As expected, there is a speed improvement when nearest neighbor interpolation is used. For 8x8 blocks with the previously mentioned image sizes, on my PC:

Linear  ->  [11~14] ms
Nearest ->  [ 9~10] ms

The speedup is not massive, but it is significant. This hints, as expected, that the time the cache takes to interpolate the data is measurable, so one needs to be aware of it when designing applications.
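Switching between the two modes is a one-line change on the texture reference, set before binding it (a sketch using the texture-reference API):

texture<float, cudaTextureType3D, cudaReadModeElementType> tex;

void setFilterMode(bool interpolate)
{
    // Hardware trilinear interpolation vs. plain nearest-neighbor lookup.
    tex.filterMode = interpolate ? cudaFilterModeLinear
                                 : cudaFilterModePoint;
    tex.normalized = false; // tex3D() takes unnormalized voxel coordinates
}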

answered Sep 22 '22 by Ander Biguri