I have an image that I generate programmatically, and I want to send it as a texture to a compute shader. I generate the image by calculating each of the RGBA components as UInt8 values, combining them into a UInt32, and storing that in the image's buffer. I do this with the following piece of code:
guard let cgContext = CGContext(data: nil,
                                width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bytesPerRow: 0,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: RGBA32.bitmapInfo) else {
    print("Unable to create CGContext")
    return
}

guard let buffer = cgContext.data else {
    print("Unable to create textures")
    return
}

let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

let heightFloat = Float(height)
let widthFloat = Float(width)
for i in 0 ..< height {
    let latitude = Float(i + 1) / heightFloat
    for j in 0 ..< width {
        let longitude = Float(j + 1) / widthFloat
        let x = UInt8(((sin(longitude * Float.pi * 2) * cos(latitude * Float.pi) + 1) / 2) * 255)
        let y = UInt8(((sin(longitude * Float.pi * 2) * sin(latitude * Float.pi) + 1) / 2) * 255)
        let z = UInt8(((cos(latitude * Float.pi) + 1) / 2) * 255)
        let offset = width * i + j
        pixelBuffer[offset] = RGBA32(red: x, green: y, blue: z, alpha: 255)
    }
}

let coordinateConversionImage = cgContext.makeImage()
where RGBA32 is a little struct that does the shifting and creates the UInt32 value. The image turns out fine: I can convert it to a UIImage and save it to my photo library.
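For reference, a minimal RGBA32 could look something like this (the exact packing and bitmapInfo below are one plausible layout, not necessarily the original):

import CoreGraphics

struct RGBA32 {
    // All four 8-bit components packed into one 32-bit value.
    private var color: UInt32

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        // On the little-endian platforms Metal runs on, this places the bytes
        // in memory in R, G, B, A order, matching the bitmapInfo below.
        color = UInt32(red) | (UInt32(green) << 8) | (UInt32(blue) << 16) | (UInt32(alpha) << 24)
    }

    // 8 bits per component, RGB with alpha stored last.
    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
}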
The problem arises when I try to send this image as a texture to a compute shader. Below is my shader code:
kernel void updateEnvironmentMap(texture2d<uint, access::read> currentFrameTexture [[texture(0)]],
                                 texture2d<uint, access::read> coordinateConversionTexture [[texture(1)]],
                                 texture2d<uint, access::write> environmentMap [[texture(2)]],
                                 uint2 gid [[thread_position_in_grid]])
{
    const uint4 pixel = {255, 127, 63, 255};
    environmentMap.write(pixel, gid);
}
The problem with this code is that the type of my textures is uint, which is 32 bits, and I want to generate the 32-bit pixels the same way I do on the CPU, by packing four 8-bit values together. However, I can't seem to do that in Metal, as there is no byte type that I can just shift together to make up a uint32. So, my question is: what is the correct way to handle 2D textures and set 32-bit pixels in a Metal compute shader?
Bonus question: I've also seen example shader code with texture2d<float, access::read> as the input texture type. I'm assuming it represents a value between 0.0 and 1.0, but what advantage does that have over an unsigned int with values between 0 and 255?
Edit: To clarify, the output texture of the shader, environmentMap, has exactly the same properties (width, height, pixelFormat, etc.) as the input textures. What I find counterintuitive is that we set a uint4 as a pixel, which means it's composed of four 32-bit values, whereas each pixel should be 32 bits. With the current code, {255, 127, 63, 255} produces exactly the same result as {2550, 127, 63, 255}, meaning the values somehow get clamped to 0-255 before being written to the output texture, which I find extremely counterintuitive.
There's a bit more magic at play than you seem to be familiar with, so I'll try to elucidate.
First of all, by design, there is a loose connection between the storage format of textures in Metal and the type you get when you read or sample them. You can have a texture in .bgra8Unorm format that, when sampled through a texture bound as texture2d<float, access::sample>, will give you a float4 with its components in RGBA order. The conversion from those packed bytes to the float vector with swizzled components follows the well-documented conversion rules in the Metal Shading Language Specification.
It is also the case that, when writing to a texture whose storage is (for example) 8 bits per component, values will be clamped to fit in the underlying storage format. This is further affected by whether or not the texture is a norm type: if the format name contains norm, the stored values are interpreted as specifying a value between 0 and 1; otherwise, the values you read are not normalized.
An example: if a texture is .bgra8Unorm and a given pixel contains the byte values [0, 64, 128, 255], then sampling it in a shader that requests float components will give you [0.5, 0.25, 0, 1.0]. By contrast, if the format is .rgba8Uint, you will get [0, 64, 128, 255]. The storage format of the texture has a prevailing effect on how its contents get interpreted upon sampling.
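To make the pairing concrete, here's a rough host-side sketch (the sizes and usage flags below are placeholders, not taken from your code) of the two storage formats and the shader component type each one implies:

import Metal

let device = MTLCreateSystemDefaultDevice()!

// 8-bit normalized storage: bind as texture2d<float, ...> in the shader;
// reads come back in the 0.0...1.0 range and written floats are clamped to it.
let normDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                              width: 256,
                                                              height: 256,
                                                              mipmapped: false)
normDescriptor.usage = [.shaderRead, .shaderWrite]
let normTexture = device.makeTexture(descriptor: normDescriptor)

// 8-bit unsigned integer storage: bind as texture2d<uint, ...> in the shader;
// reads come back as 0...255 and written values are clamped to that range.
let uintDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Uint,
                                                              width: 256,
                                                              height: 256,
                                                              mipmapped: false)
uintDescriptor.usage = [.shaderRead, .shaderWrite]
let uintTexture = device.makeTexture(descriptor: uintDescriptor)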
I assume that the pixel format of your texture is something like .rgba8Unorm. If that's the case, you can achieve what you want by writing your kernel like this:
kernel void updateEnvironmentMap(texture2d<float, access::read> currentFrameTexture [[texture(0)]],
                                 texture2d<float, access::read> coordinateConversionTexture [[texture(1)]],
                                 texture2d<float, access::write> environmentMap [[texture(2)]],
                                 uint2 gid [[thread_position_in_grid]])
{
    const float4 pixel(255, 127, 63, 255);
    environmentMap.write(pixel * (1 / 255.0), gid);
}
By contrast, if your texture has a format of .rgba8Uint, you'll get the same effect by writing it like this:

kernel void updateEnvironmentMap(texture2d<uint, access::read> currentFrameTexture [[texture(0)]],
                                 texture2d<uint, access::read> coordinateConversionTexture [[texture(1)]],
                                 texture2d<uint, access::write> environmentMap [[texture(2)]],
                                 uint2 gid [[thread_position_in_grid]])
{
    const uint4 pixel(255, 127, 63, 255);
    environmentMap.write(pixel, gid);
}
I understand that this is a toy example, but I hope that with the foregoing information, you can figure out how to correctly store and sample values to achieve what you want.
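For completeness, here is one way the CPU-generated pixels could be uploaded into a .rgba8Unorm texture for the first kernel above to consume. The variable names and usage options are assumptions on my part, and this presumes the CGContext's bytes really are laid out as one byte each of R, G, B, A per pixel:

import CoreGraphics
import Metal

// Sketch only: assumes `device`, plus `cgContext`, `width`, and `height`
// from your code above, are already in scope.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                          width: width,
                                                          height: height,
                                                          mipmapped: false)
descriptor.usage = [.shaderRead, .shaderWrite]

guard let coordinateConversionTexture = device.makeTexture(descriptor: descriptor),
      let pixels = cgContext.data else {
    fatalError("Unable to create texture")
}

// Copy the CGContext's backing store into the texture. CGContext may round
// bytesPerRow up, so use the value it actually chose.
coordinateConversionTexture.replace(region: MTLRegionMake2D(0, 0, width, height),
                                    mipmapLevel: 0,
                                    withBytes: pixels,
                                    bytesPerRow: cgContext.bytesPerRow)

From there, the texture can be bound on the compute command encoder with setTexture(_:index:) like any other texture.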