I am working on a drawing application and I am noticing significant differences in textures loaded on a 32-bit iPad vs. a 64-bit iPad.
Here is the texture drawn on a 32-bit iPad:
Here is the texture drawn on a 64-bit iPad:
The 64-bit result is what I want, but it seems like maybe it is losing some data?
I create a default brush texture with this code:
UIGraphicsBeginImageContext(CGSizeMake(64, 64));
CGContextRef defBrushTextureContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(defBrushTextureContext);
size_t num_locations = 3;
CGFloat locations[3] = { 0.0, 0.8, 1.0 };
CGFloat components[12] = { 1.0, 1.0, 1.0, 1.0,
                           1.0, 1.0, 1.0, 1.0,
                           1.0, 1.0, 1.0, 0.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef myGradient = CGGradientCreateWithColorComponents (myColorspace, components, locations, num_locations);
CGPoint myCentrePoint = CGPointMake(32, 32);
float myRadius = 20;
CGGradientDrawingOptions options = kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation;
CGContextDrawRadialGradient(UIGraphicsGetCurrentContext(), myGradient,
                            myCentrePoint, 0, myCentrePoint, myRadius,
                            options);
CFRelease(myGradient);
CFRelease(myColorspace);
UIGraphicsPopContext();
[self setBrushTexture:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
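One variation I considered (just an assumption on my part, not a verified fix) is rendering the gradient into an explicitly configured bitmap context instead of relying on UIGraphicsBeginImageContext, so the pixel format cannot differ between devices:

CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
// request an explicit 8-bit-per-channel premultiplied RGBA context
CGContextRef ctx = CGBitmapContextCreate(NULL, 64, 64, 8, 64 * 4, rgb,
                                         kCGImageAlphaPremultipliedLast);
// ... draw the same radial gradient into ctx as above ...
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
[self setBrushTexture:[UIImage imageWithCGImage:cgImage]];
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(rgb);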
And then I actually set the brush texture like this:
-(void) setBrushTexture:(UIImage*)brushImage{
    // save our current texture.
    currentTexture = brushImage;
    // first, delete the old texture if needed
    if (brushTexture){
        glDeleteTextures(1, &brushTexture);
        brushTexture = 0;
    }
    // fetch the CGImage for us to draw into a texture
    CGImageRef brushCGImage = brushImage.CGImage;
    // Make sure the image exists
    if(brushCGImage) {
        // Get the width and height of the image
        GLint width = (GLint)CGImageGetWidth(brushCGImage);
        GLint height = (GLint)CGImageGetHeight(brushCGImage);
        // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
        // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
        // Allocate memory needed for the bitmap context
        GLubyte* brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Use the bitmap creation function provided by the Core Graphics framework.
        CGContextRef brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushCGImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the image to the context.
        CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushCGImage);
        // You don't need the context at this point, so release it to avoid memory leaks.
        CGContextRelease(brushContext);
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &brushTexture);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, brushTexture);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
        // Release the image data; it's no longer needed
        free(brushData);
    }
}
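The comment about power-of-two dimensions suggests a guard the code doesn't have yet. A minimal sketch of such a check (my own helper, not part of the original code; the 64x64 brush already satisfies it):

static BOOL isPowerOfTwo(GLint n) {
    // a positive power of two has exactly one bit set
    return n > 0 && (n & (n - 1)) == 0;
}

// in setBrushTexture:, before the glTexImage2D upload:
if (!isPowerOfTwo(width) || !isPowerOfTwo(height)) {
    NSLog(@"Brush texture is %dx%d, but OpenGL ES 1.1 requires power-of-two textures.",
          width, height);
    free(brushData);
    return;
}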
Update:
I've updated the CGFloats to GLfloats with no success. Maybe there is an issue with this rendering code?
if(frameBuffer){
    // draw the stroke element
    [self prepOpenGLStateForFBO:frameBuffer];
    [self prepOpenGLBlendModeForColor:element.color];
    CheckGLError();
}
// find our screen scale so that we can convert from
// points to pixels
GLfloat scale = self.contentScaleFactor;
// fetch the vertex data from the element
struct Vertex* vertexBuffer = [element generatedVertexArrayWithPreviousElement:previousElement forScale:scale];
glLineWidth(2);
// if the element has any data, then draw it
if(vertexBuffer){
    glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
    glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
    glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
    glDrawArrays(GL_TRIANGLES, 0, (GLint)[element numberOfSteps] * (GLint)[element numberOfVerticesPerStep]);
    CheckGLError();
}
if(frameBuffer){
    [self unprepOpenGLState];
}
The vertex struct is the following:
struct Vertex{
    GLfloat Position[2];  // x,y position
    GLfloat Color[4];     // rgba color
    GLfloat Texture[2];   // x,y texture coord
};
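For what it's worth, since the brush texture is created with kCGImageAlphaPremultipliedLast, I believe the blend function needs to match premultiplied alpha. A minimal sketch of that pairing (my assumption of what prepOpenGLBlendModeForColor: should roughly amount to, not a confirmed fix):

glEnable(GL_BLEND);
// with premultiplied alpha the color channels are already scaled by alpha,
// so the source factor is GL_ONE rather than GL_SRC_ALPHA
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);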
Update:
The issue does not actually appear to be 32-bit vs. 64-bit after all, but rather something different about the A7 GPU and its GL drivers. I found this out by running both a 32-bit build and a 64-bit build on the 64-bit iPad: the textures ended up looking exactly the same in both builds of the app.
I would like you to check two things.
First, check your alpha blending logic (or options) in OpenGL.
Second, check your interpolation logic, which should be proportional to the drag velocity.
It seems you either don't have the second one or it isn't effective, and a drawing app needs it; see the sketch below.
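Here is a minimal sketch of that second idea (stampSegment, the spacing value, and the stamp block are my own illustrative names, not from your code). The faster the drag, the farther apart consecutive touch samples are, so the brush should be stamped at fixed intervals along each segment rather than once per touch event:

// stamp the brush at a fixed spacing along the segment from one touch
// sample to the next; fast drags produce more intermediate stamps
static void stampSegment(CGPoint from, CGPoint to, CGFloat spacing,
                         void (^stamp)(CGPoint)) {
    CGFloat dx = to.x - from.x;
    CGFloat dy = to.y - from.y;
    CGFloat distance = sqrt(dx * dx + dy * dy);
    int count = MAX(1, (int)ceil(distance / spacing));
    for (int i = 0; i <= count; i++) {
        CGFloat t = (CGFloat)i / count;
        stamp(CGPointMake(from.x + dx * t, from.y + dy * t));
    }
}

// usage: stampSegment(previousPoint, currentPoint, 4.0, ^(CGPoint p) {
//     /* draw one brush quad centered at p */
// });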