
Is there a way to use clz() in a Vulkan compute shader?

I'm interested in implementing a particular algorithm in a set of Vulkan compute shaders. The algorithm uses a clz() (count leading zeros) function at one point. I expect my NVIDIA GPU offers hardware support for it; CUDA apparently has a clz instruction, and clz() is part of OpenCL 1.2 as well. So I don't want to write my own clz(). Is there any way for me to call this function the way CUDA or OpenCL code would?

I suppose I could try compiling an OpenCL kernel to SPIR-V and using that in Vulkan, but I doubt Vulkan would be very happy about that...?

Another thought I've had is that maybe I could translate a very simple OpenCL kernel containing a clz() call to SPIR-V assembly, do the same with my GLSL shader, and then manually hack the clz() call, as it appears in the kernel assembly code, into the shader's assembly code. But I don't really know anything about the details of SPIR-V, or about any limits Vulkan may place on what sorts of SPIR-V instructions a compute shader may use, so I have hardly any idea about whether that could actually work.

asked Feb 07 '23 by mjwach

1 Answer

Vulkan-bound SPIR-V has access to the GLSL extended instruction set, which includes FindUMSB, a function that returns the index of the most-significant set bit. You can use it to emulate clz on a 32-bit value by computing 31 - FindUMSB(x). If the hardware has an explicit clz instruction, it's possible the compiler will fold the subtraction away and replace the whole expression with the native clz.
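To illustrate, here is a minimal GLSL compute-shader sketch of that approach. The clz32 helper, the buffer layout, and the workgroup size are illustrative assumptions, not part of the original answer; the relevant point is that GLSL's findMSB() is what maps to the FindUMSB extended instruction when compiled for Vulkan:

    #version 450
    layout(local_size_x = 64) in;

    layout(std430, binding = 0) buffer Data {
        uint values[];
    };

    // Emulated clz for 32-bit values. findMSB(uint) compiles to the SPIR-V
    // FindUMSB extended instruction; findMSB(0u) returns -1, so this yields
    // 32 for an input of zero, matching OpenCL's clz() behaviour.
    uint clz32(uint x)
    {
        return uint(31 - findMSB(x));
    }

    void main()
    {
        uint i = gl_GlobalInvocationID.x;
        values[i] = clz32(values[i]);
    }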

answered Mar 03 '23 by Nicol Bolas