I have a "software renderer" that I am porting from PC to the iPhone. what is the fastest way to manually update the screen with a buffer of pixels on the iphone? for instance in windows the fastest function I have found is SetDIBitsToDevice.
I don't know much about the iphone, or the libraries, and there seem to be so many layers and different types of UI elements, so I might need a lot of explanation...
for now I'm just going to constantly update a texture in opengl and render that to the screen, I very much doubt that this is going to be the best way to do it.
UPDATE:
I have tried the OpenGL screen-sized texture method:
I got 17fps...
I used a 512x512 texture (because it needs to be a power of two).
Just the call of
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,512,512,GL_RGBA,GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer());
seemed to be responsible for pretty much ALL the slowdown.
Commenting it out, while leaving in all my software-rendered GUI code and the rendering of the now non-updating texture, resulted in 60fps, 30% renderer usage, and no notable spikes from the CPU.
Note that GetBuffer() simply returns a pointer to the software back buffer of the GUI system; there is no re-jigging or resizing of the buffer in any way, and it is properly sized and formatted for the texture. So I am fairly certain the slowdown has nothing to do with the software renderer, which is the good news: it looks like if I can find a way to update the screen at 60fps, software rendering should work for the time being.
I tried doing the texture update call with 512x320 rather than 512x512, and this was oddly even slower... running at 10fps. It also says the render utilization is only about 5%, and all the time is being wasted in a call to Untwiddle32bpp inside OpenGL ES.
I can change my software renderer to natively render to any pixel format, if that would result in a more direct blit.
FYI, tested on a 2.2.1 iPod touch G2 (so like an iPhone 3G on steroids).
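For reference, the texture-upload path described above looks roughly like the following (a minimal sketch, assuming an existing OpenGL ES 1.1 context and a screen-sized textured quad; texName is a placeholder, and baseWindowGUI->GetBuffer() is the GUI back buffer mentioned above):
// One-time setup: a 512x512 RGBA texture (power of two, as noted above).
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Per frame: re-upload the software back buffer, then draw a textured quad
// covering the screen and present the renderbuffer.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                GL_RGBA, GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer());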
UPDATE 2:
I have just finished writing the CoreAnimation/CoreGraphics method. It looks good, but I am a little worried about how it updates the screen each frame, basically ditching the old CGImage and creating a brand new one... check it out in 'someRandomFunction' below. Is this the quickest way to update the image? Any help would be greatly appreciated.
//
// catestAppDelegate.m
// catest
//
// Created by User on 3/14/10.
// Copyright __MyCompanyName__ 2010. All rights reserved.
//
#import "catestAppDelegate.h"
#import "catestViewController.h"
#import "QuartzCore/QuartzCore.h"
const void* GetBytePointer(void* info)
{
    // this is currently only called once
    return info; // info is a pointer to the buffer
}

void ReleaseBytePointer(void* info, const void* pointer)
{
    // don't care, just using the one static buffer at the moment
}

size_t GetBytesAtPosition(void* info, void* buffer, off_t position, size_t count)
{
    // I don't think this ever gets called
    memcpy(buffer, ((char*)info) + position, count);
    return count;
}

CGDataProviderDirectCallbacks providerCallbacks =
    { 0, GetBytePointer, ReleaseBytePointer, GetBytesAtPosition, 0 };
static CGImageRef cgIm;
static CGDataProviderRef dataProvider;
unsigned char* imageData;
const size_t imageDataSize = 320 * 480 * 4;
NSTimer *animationTimer;
NSTimeInterval animationInterval= 1.0f/60.0f;
@implementation catestAppDelegate
@synthesize window;
@synthesize viewController;
- (void)applicationDidFinishLaunching:(UIApplication *)application {
    [window makeKeyAndVisible];

    const size_t byteRowSize = 320 * 4;
    imageData = malloc(imageDataSize);
    for (int i = 0; i < imageDataSize / 4; i++)
        ((unsigned int*)imageData)[i] = 0xFFFF00FF; // just set it to some random init color, currently yellow

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    dataProvider = CGDataProviderCreateDirect(imageData, imageDataSize,
                                              &providerCallbacks); // currently global
    cgIm = CGImageCreate(320, 480,
                         8, 32, byteRowSize, colorSpace,
                         kCGImageAlphaNone | kCGBitmapByteOrder32Little,
                         dataProvider, 0, false, kCGRenderingIntentDefault); // also global, probably doesn't need to be
    CGColorSpaceRelease(colorSpace); // the image retains the color space

    self.window.layer.contents = (id)cgIm; // set the UIWindow's CALayer's contents to the image, yay works!
    // cgIm and dataProvider are released in dealloc

    // set up a timer in the attempt to update the image
    animationTimer = [NSTimer scheduledTimerWithTimeInterval:animationInterval target:self selector:@selector(someRandomFunction) userInfo:nil repeats:YES];
}
float col = 0;

- (void)someRandomFunction
{
    // update the original buffer
    for (int i = 0; i < imageDataSize; i++)
        imageData[i] = (unsigned char)(int)col;
    col += 256.0f / 60.0f;

    // and currently the only way I know how to apply that buffer update to the screen is to
    // create a new image and bind it to the layer...???
    CGImageRelease(cgIm); // release the previous frame's image so it doesn't leak
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    cgIm = CGImageCreate(320, 480,
                         8, 32, 320 * 4, colorSpace,
                         kCGImageAlphaNone | kCGBitmapByteOrder32Little,
                         dataProvider, 0, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    self.window.layer.contents = (id)cgIm;
    // and that currently works, updating the screen, but I don't know how well it runs...
}
- (void)dealloc {
    [animationTimer invalidate];
    CGImageRelease(cgIm);
    CGDataProviderRelease(dataProvider);
    free(imageData);
    [viewController release];
    [window release];
    [super dealloc];
}
@end
The fastest App Store-approved way to do CPU-only 2D graphics is to create a CGImage backed by a buffer using CGDataProviderCreateDirect and assign that to a CALayer's contents property.
For best results use the kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little or kCGImageAlphaNone | kCGBitmapByteOrder32Little bitmap types, and double buffer so that the display is never in an inconsistent state.
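A minimal sketch of that double-buffered setup might look like this, reusing imageDataSize and providerCallbacks from the question's code (RenderFrameInto is a hypothetical stand-in for the software renderer):
// Two buffers/images so the layer never displays a half-drawn frame.
static unsigned char* buffers[2];
static CGImageRef images[2];
static int backIndex = 0;

- (void)setUpDoubleBuffer
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    for (int i = 0; i < 2; i++)
    {
        buffers[i] = malloc(imageDataSize);
        CGDataProviderRef provider = CGDataProviderCreateDirect(buffers[i], imageDataSize,
                                                                &providerCallbacks);
        images[i] = CGImageCreate(320, 480, 8, 32, 320 * 4, colorSpace,
                                  kCGImageAlphaNone | kCGBitmapByteOrder32Little,
                                  provider, NULL, false, kCGRenderingIntentDefault);
        CGDataProviderRelease(provider); // the image retains the provider
    }
    CGColorSpaceRelease(colorSpace);
}

- (void)presentFrame
{
    RenderFrameInto(buffers[backIndex]);                 // hypothetical software-render call
    self.window.layer.contents = (id)images[backIndex];  // show the freshly rendered buffer
    backIndex = 1 - backIndex;                           // render into the other buffer next frame
}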
edit: this should be faster than drawing to an OpenGL texture in theory, but as always, profile to be sure.
edit 2: CADisplayLink is a useful class no matter which compositing method you use.
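For example, the NSTimer in the question's code could be replaced with a display-synchronized callback (a sketch; someRandomFunction is the per-frame update from the question):
// Drive the per-frame update from the display's refresh instead of an NSTimer.
CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self
                                                         selector:@selector(someRandomFunction)];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];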
The fastest way is to use IOFrameBuffer/IOSurface, which are private frameworks.
So OpenGL seems to be the only possible way for App Store apps.
Just to post my comment to @rpetrich's answer in the form of an answer: in my tests I found OpenGL to be the fastest way. I've implemented a simple object (a UIView subclass) called EEPixelViewer that does this generically enough that I think it should work for most people.
It uses OpenGL to push pixels in a wide variety of formats (24bpp RGB, 32-bit RGBA, and several YpCbCr formats) to the screen as efficiently as possible. The solution achieves 60fps for most pixel formats on almost every single iOS device, including older ones. Usage is super simple and requires no OpenGL knowledge:
pixelViewer.pixelFormat = kCVPixelFormatType_32RGBA;
pixelViewer.sourceImageSize = CGSizeMake(1024, 768);
EEPixelViewerPlane plane;
plane.width = 1024;
plane.height = 768;
plane.data = pixelBuffer;
plane.rowBytes = plane.width * 4;
[pixelViewer displayPixelBufferPlanes: &plane count: 1 withCompletion:nil];
Repeat the displayPixelBufferPlanes call for each frame (which loads the pixel buffer to the GPU using glTexImage2D), and that's pretty much all there is to it. The code is smart in that it tries to use the GPU for any kind of simple processing required such as permuting the color channels, converting YpCbCr to RGB, etc.
There is also quite a bit of logic for honoring scaling using the UIView's contentMode property, so UIViewContentModeScaleToFit/Fill, etc. all work as expected.
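For example, a per-frame update (driven by a CADisplayLink or similar) might look like this sketch, where fillNextFrame is a hypothetical function that software-renders the next frame into pixelBuffer:
- (void)updateFrame:(CADisplayLink *)link
{
    fillNextFrame(pixelBuffer);          // hypothetical: produce the next software-rendered frame
    EEPixelViewerPlane plane;
    plane.width = 1024;
    plane.height = 768;
    plane.data = pixelBuffer;
    plane.rowBytes = plane.width * 4;
    // uploads the buffer to the GPU and redraws the view
    [pixelViewer displayPixelBufferPlanes:&plane count:1 withCompletion:nil];
}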
Perhaps you could abstract the methods used in the software renderer into a GPU shader... you might get better performance. You'd need to send the encoded "video" data as a texture.