
Mac OS Cocoa: Draw a simple pixel on a canvas

I wish I could find an answer for this. I have searched and searched and couldn't find the right answer. Here is my situation:

In a Mac OS Cocoa application, I want to draw a pixel (actually a few pixels) onto a dedicated area of my application window. I figured it would be nicer to place an NSImageView there (I did so in IB and connected the outlet to my app delegate) and draw on that instead of on my NSWindow.

How in the world can I do that? Mac OS seems to offer NSBezierPath as the most basic drawing tool; is that true? This is completely shocking to me. I come from a long history of Windows programming, where drawing a pixel onto a canvas is typically the simplest of operations.

I do not want to use OpenGL and I am not sure to what extent Quartz is involved in this.

All I want is some help on how I can pull off this pseudocode in real Objective-C/Cocoa:

imageObj.drawPixel(10,10,blackColor);

I would love to hear your answers on this and I am sure this will help a lot of people starting with Cocoa.

Thanks!

asked Dec 04 '10 by Roman


2 Answers

What you are asking for is either of these two methods:

NSBitmapImageRep setColor:atX:y: Changes the color of the pixel at the specified coordinates.

NSBitmapImageRep setPixel:atX:y: Sets the receiver's pixel at the specified coordinates to the specified raw pixel values.
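
For instance, a minimal sketch on Mac OS (the 100x100 bitmap size and the myImageView outlet name are assumptions for illustration) could create a bitmap rep, set individual pixels on it, and hand the result to the NSImageView:

    // Minimal sketch: create an RGBA bitmap, set single pixels, show it in the view.
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:100
                      pixelsHigh:100
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];

    // The equivalent of the pseudocode imageObj.drawPixel(10,10,blackColor);
    [rep setColor:[NSColor blackColor] atX:10 y:10];

    // Wrap the bitmap in an NSImage and display it in the image view outlet
    // (under manual memory management, remember to release rep and image)
    NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(100, 100)];
    [image addRepresentation:rep];
    [myImageView setImage:image];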

Note that these aren't available on iOS. On iOS, it appears that the way to do this is to create a raw buffer of pixel data for a given colorspace (likely RGB), fill that with color data (write a little setPixel method to do this) and then call CGImageCreate() like so:

    //Create a raw buffer to hold pixel data which we will fill algorithmically
    NSInteger width = theWidthYouWant;
    NSInteger height = theHeightYouWant;
    NSInteger dataLength = width * height * 4;
    UInt8 *data = (UInt8*)malloc(dataLength * sizeof(UInt8));

    //Fill pixel buffer with color data
    for (int j=0; j<height; j++) {
        for (int i=0; i<width; i++) {

            //Here I'm just filling every pixel with red
            float red   = 1.0f;
            float green = 0.0f;
            float blue  = 0.0f;
            float alpha = 1.0f;

            int index = 4 * (i + j * width);
            data[index]   = 255 * red;
            data[++index] = 255 * green;
            data[++index] = 255 * blue;
            data[++index] = 255 * alpha;

        }
    }

    // Create a CGImage with the pixel data
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                     kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                     provider, NULL, true, kCGRenderingIntentDefault);

    // Clean up
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(provider);
    // Don't forget to free(data) when you are done with the CGImage
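
If you then want to put the result on screen on iOS, the CGImageRef can be wrapped in a UIImage (the imageView outlet name below is an assumption):

    // Hand the CGImage to UIKit for display; imageView is an assumed UIImageView outlet
    UIImage *uiImage = [UIImage imageWithCGImage:image];
    imageView.image = uiImage;
    CGImageRelease(image);
    // Only free(data) once nothing uses the image any more; the data provider
    // points directly at the malloc'd buffer.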

Lastly, you might want to manipulate pixels in an image you have already loaded into a CGImage. There is sample code for doing that in Apple Technical Q&A QA1509, "Getting the pixel data from a CGImage object".
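
If QA1509 is not at hand, the general approach it describes looks roughly like this (a hedged sketch, not Apple's sample code; the function name is made up): draw the existing CGImage into a bitmap context whose backing buffer you allocated yourself, then read or modify the bytes directly.

    // Sketch: copy a CGImage's pixels into a caller-owned RGBA buffer
    static UInt8 *CopyImagePixels(CGImageRef source)
    {
        size_t width  = CGImageGetWidth(source);
        size_t height = CGImageGetHeight(source);
        size_t bytesPerRow = width * 4;
        UInt8 *pixels = (UInt8 *)calloc(height * bytesPerRow, sizeof(UInt8));

        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                                     colorspace,
                                                     kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorspace);

        // Render the image into the buffer; pixels[] then holds RGBA data you can inspect
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), source);
        CGContextRelease(context);

        return pixels; // caller must free() this
    }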

answered by Logachu


Cocoa's low-level drawing API is Core Graphics (Quartz). You obtain a drawing context and issue commands to draw onto that context. The API is designed to be device-independent (you use the same commands to draw onto the screen as you would to draw onto paper, when printing). Therefore, there are no commands for filling in individual pixels, because there's no such thing as a pixel on paper. Even on the screen, your view may have been transformed in some way so that a single point doesn't map to a single device pixel.

If you want to draw a single pixel, you need to specify a rectangle that is the size of a single pixel, then fill it in. For the pixel at (x,y), you would want a rectangle with origin of (x-0.5,y-0.5) and a size of (1,1).

You can do that with NSBezierPath, or you can get a Core Graphics context (CGContextRef) from [[NSGraphicsContext currentContext] graphicsPort] and use functions like CGContextFillRect().
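
As a minimal sketch (the view subclass name and the coordinates are made up for illustration), that could look like this inside a custom NSView's drawRect::

    // Sketch: fill a one-point "pixel" in a custom view's drawRect:
    @implementation MyPixelView // hypothetical NSView subclass

    - (void)drawRect:(NSRect)dirtyRect
    {
        CGFloat x = 10.0, y = 10.0;            // the point to mark
        [[NSColor blackColor] setFill];
        // A 1x1 rectangle centered on (x, y), as described above
        NSRectFill(NSMakeRect(x - 0.5, y - 0.5, 1.0, 1.0));
    }

    @end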

This obviously won't be very fast if you are drawing a lot of pixels; that's not what the API is designed for. If that's what you need to do, consider creating a buffer with malloc and writing your pixel data to that, then using Core Graphics to convert it into a CGImageRef, which can be drawn to the screen.

answered by benzado