 

GLSL - Does a dot product really only cost one cycle?

Tags: gpgpu, glsl, shader

I've come across several situations where the claim is made that doing a dot product in GLSL will end up being run in one cycle. For example:

Vertex and fragment processors operate on four-vectors, performing four-component instructions such as additions, multiplications, multiply-accumulates, or dot products in a single cycle.

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter35.html

I've also seen a claim in comments somewhere that:

    dot(value, vec4(.25))

would be a more efficient way to average four values, compared to:

    (x + y + z + w) / 4.0

Again, the claim was that dot(vec4, vec4) would run in one cycle.

I see that ARB says that dot product (DP3 and DP4) and cross product (XPD) are single instructions, but does that mean that those are just as computationally expensive as doing a vec4 add? Is there basically some hardware implementation, along the lines of multiply-accumulate on steroids, in play here? I can see how something like that is useful in computer graphics, but doing in one cycle what could be quite a few instructions on their own sounds like a lot.

ultramiraculous asked May 25 '12 23:05


1 Answer

The question cannot be answered in any definitive way as a whole. How long any operation takes in hardware is not just hardware-specific, it is also code-specific. That is, the surrounding code can completely hide the cost of an operation, or it can make it take longer.

In general, you should not assume that a dot product is single-cycle.

However, there are certain aspects that can certainly be answered:

I've also seen a claim in comments somewhere that:

    dot(value, vec4(.25))

would be a more efficient way to average four values, compared to:

    (x + y + z + w) / 4.0

I would expect this to be kinda true, so long as x, y, z, and w are in fact different float values rather than members of the same vec4 (that is, they're not value.x, value.y, etc). If they are elements of the same vector, I would say that any decent optimizing compiler should compile both of these to the same set of instructions. A good peephole optimizer should catch patterns like this.

I say that it is "kinda true", because it depends on the hardware. The dot-product version should at the very least not be slower. And again, if they are elements of the same vector, the optimizer should handle it.
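To make that concrete, here is a minimal sketch (the function names, and the assumption that the four values live in a single vec4, are mine): both functions compute the same average, and a decent optimizer is free to emit identical code for them.

    // Sum the components explicitly, then divide by four.
    float averageBySum(vec4 value)
    {
        return (value.x + value.y + value.z + value.w) / 4.0;
    }

    // Express the same reduction as a dot product with vec4(0.25).
    float averageByDot(vec4 value)
    {
        return dot(value, vec4(0.25));
    }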

I see that ARB says that dot product (DP3 and DP4) and cross product (XPD) are single instructions, but does that mean that those are just as computationally expensive as doing a vec4 add?

You should not assume that ARB assembly has any relation to the actual hardware machine instruction code.

Is there basically some hardware implementation, along the lines of multiply-accumulate on steroids, in play here?

If you want to talk about hardware, it's very hardware-specific. Once upon a time, there was specialized dot-product hardware. This was in the days of so-called "DOT3 bumpmapping" and the early DX8-era of shaders.

However, in order to speed up general operations, they had to take that sort of thing out. So on most modern hardware (anything Radeon HD-class or NVIDIA 8xxx-series or better; so-called DX10 or DX11 hardware), dot products do pretty much what they say they do: each multiply/add takes up a cycle.
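Conceptually, that means a vec4 dot product on such scalar hardware expands to roughly the following (a sketch of the idea, not what any particular compiler literally emits):

    // dot(a, b) as one multiply followed by three multiply-adds:
    // about four ALU operations, not a single magic instruction.
    float d = a.x * b.x;
    d += a.y * b.y;
    d += a.z * b.z;
    d += a.w * b.w;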

However, this hardware also allows for a lot of parallelism, so you could have 4 separate vec4 dot products happening simultaneously. Each one would take 4 cycles. But, as long as the results of these operations are not used in the others, they can all execute in parallel. And therefore, the four of them total would take 4 cycles.
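For illustration, the four dot products in the sketch below have no data dependencies on one another, so hardware with enough parallel ALUs can overlap them; the variable names are made up, and the actual scheduling is entirely up to the compiler and hardware.

    // Independent dot products: none consumes another's result,
    // so they can be scheduled to overlap.
    float a = dot(n0, l0);
    float b = dot(n1, l1);
    float c = dot(n2, l2);
    float d = dot(n3, l3);

    // This sum depends on all four results, so it has to wait for them.
    float total = a + b + c + d;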

So again, it's very complicated. And hardware-dependent.

Your best bet is to start with something that is reasonable. Then learn about the hardware you're trying to code towards, and work from there.

Nicol Bolas answered Oct 24 '22 05:10