I'm fairly new to Android programming, but I'm a quick learner. I found an interesting piece of code here: http://code.google.com/p/camdroiduni/source/browse/trunk/code/eclipse_workspace/camdroid/src/de/aes/camdroid/CameraView.java
It's about live streaming from your device's camera to your browser.
But I want to know how the code actually works.
These are the things I want to understand:
1) How do they stream to the web browser? I understand that they send an index.html file to the IP address of the device (on Wi-Fi) and that the file reloads the page every second. But how do they send the index.html file to the desired IP address with sockets?
2) http://code.google.com/p/camdroiduni/wiki/Status#save_pictures_frequently . Here they mention they are using video, but I'm still convinced they take pictures and send them, as I don't see MediaRecorder anywhere.
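Regarding the socket part of question 1): an HTTP response is just text headers followed by a body, written to an accepted TCP socket. The browser connects to the phone's Wi-Fi IP, and the app's embedded server writes the HTML back over that same socket. The following is a minimal plain-Java sketch of that mechanism, not CamDroid's actual WebServer class (the class and method names here are mine); I'm also assuming the page reloads itself with something like a meta refresh tag, as the question describes.

```java
import java.io.*;
import java.net.*;

// Minimal sketch of an embedded HTTP server: accept one TCP connection,
// read the request, and write an HTTP response carrying an HTML page.
public class TinyHttpServer {

    public static void serveOnce(ServerSocket server, String html) throws IOException {
        try (Socket client = server.accept()) {
            // Drain the request (request line + headers end at a blank line).
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                // ignore request headers for this sketch
            }

            byte[] body = html.getBytes("UTF-8");
            OutputStream out = client.getOutputStream();
            // Headers first, then a blank line, then the body bytes.
            out.write(("HTTP/1.0 200 OK\r\n"
                    + "Content-Type: text/html\r\n"
                    + "Content-Length: " + body.length + "\r\n"
                    + "\r\n").getBytes("UTF-8"));
            out.write(body);
            out.flush();
        }
    }
}
```

So the "sending" is nothing more exotic than writing bytes to the connected socket; the browser does the rest.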
Now my question is how they keep sending AND saving those images to the SD card (I think). I believe it's done with this code, but how does it work? With Camera.takePicture(), saving and restarting the preview takes too long, so that's not an option for live streaming.
public synchronized byte[] getPicture() {
    try {
        // Block until the camera preview is running.
        while (!isPreviewOn) wait();
        isDecoding = true;
        // Ask the camera to deliver exactly one preview frame to onPreviewFrame().
        mCamera.setOneShotPreviewCallback(this);
        // Block until onPreviewFrame() has encoded the frame and called notify().
        while (isDecoding) wait();
    } catch (Exception e) {
        return null;
    }
    return mCurrentFrame;
}
// Scale the preview layout to fit (aimWidth, aimHeight) while keeping
// the original aspect ratio.
private LayoutParams calcResolution(int origWidth, int origHeight,
                                    int aimWidth, int aimHeight) {
    double origRatio = (double) origWidth / (double) origHeight;
    double aimRatio = (double) aimWidth / (double) aimHeight;
    if (aimRatio > origRatio)
        return new LayoutParams(origWidth, (int) (origWidth / aimRatio));
    else
        return new LayoutParams((int) (origHeight * aimRatio), origHeight);
}
// Despite the name, this converts an NV21 (YUV420SP) preview frame into
// 32-bit ARGB pixels; the JPEG encoding happens later via Bitmap.compress().
private void raw2jpg(int[] rgb, byte[] raw, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        // Chroma rows are half the height: each V,U pair covers a 2x2 block.
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = 0;
            if (yp < raw.length) {
                y = (0xff & ((int) raw[yp])) - 16;
            }
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                // Read a fresh V,U pair on every even column.
                if (uvp < raw.length) {
                    v = (0xff & raw[uvp++]) - 128;
                    u = (0xff & raw[uvp++]) - 128;
                }
            }
            // Fixed-point YUV -> RGB conversion; results are clamped to
            // 18 bits (262143 = 2^18 - 1) before being shifted into place.
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00)
                    | ((b >> 10) & 0xff);
        }
    }
}
@Override
public synchronized void onPreviewFrame(byte[] data, Camera camera) {
    int width = mSettings.PictureW();
    int height = mSettings.PictureH();

    // API 8 and above: YuvImage can compress the NV21 frame directly.
    // YuvImage yuvi = new YuvImage(data, ImageFormat.NV21, width, height, null);
    // Rect rect = new Rect(0, 0, yuvi.getWidth(), yuvi.getHeight());
    // OutputStream out = new ByteArrayOutputStream();
    // yuvi.compressToJpeg(rect, 10, out);
    // byte[] ref = ((ByteArrayOutputStream) out).toByteArray();

    // API 7: convert to ARGB by hand, then let Bitmap do the JPEG encoding.
    int[] temp = new int[width * height];
    OutputStream out = new ByteArrayOutputStream();
    Bitmap bm = null;
    raw2jpg(temp, data, width, height);
    bm = Bitmap.createBitmap(temp, width, height, Bitmap.Config.RGB_565);
    bm.compress(CompressFormat.JPEG, mSettings.PictureQ(), out);
    mCurrentFrame = ((ByteArrayOutputStream) out).toByteArray();
    isDecoding = false;
    // Wake the thread blocked in getPicture().
    notify();
}
I really hope someone can explain these things as well as possible. That would be much appreciated.
OK, if anyone is interested, I have the answer.
The code repeatedly takes a snapshot of the camera preview, using setOneShotPreviewCallback() to invoke onPreviewFrame(). The frame is delivered in YUV format, so raw2jpg() converts it into 32-bit ARGB for the JPEG encoder. NV21 is a YUV planar format as described here.
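To make the indexing in raw2jpg() easier to follow, here is a small plain-Java sketch of the standard NV21 layout (the default Android preview format): a full-resolution Y plane followed by a half-resolution plane of interleaved V,U pairs. The helper class and method names here are mine, purely for illustration.

```java
// Index arithmetic for an NV21 (YUV420SP) buffer of size width x height.
public class Nv21 {

    // Total buffer size: width*height luma bytes + width*height/2 chroma bytes.
    public static int bufferSize(int width, int height) {
        return width * height + (width * height) / 2;
    }

    // Index of the Y (luma) byte for pixel (x, y).
    public static int yIndex(int width, int x, int y) {
        return y * width + x;
    }

    // Index of the V byte for pixel (x, y); the matching U byte follows at +1.
    // Each V,U pair is shared by a 2x2 block of pixels, which is why raw2jpg()
    // only advances uvp on even columns and computes the row with (j >> 1).
    public static int vIndex(int width, int height, int x, int y) {
        return width * height + (y / 2) * width + (x / 2) * 2;
    }
}
```

For a 4x2 frame, for example, the buffer is 12 bytes: 8 Y bytes followed by 4 chroma bytes, and all eight pixels share just two V,U pairs.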
getPicture() is called, presumably by the application, and produces the JPEG data for the image in the private byte array mCurrentFrame, returning that array. What happens to it afterwards is not in that code fragment. Note that getPicture() does a couple of wait()s. This is because the image-acquisition code runs in a separate thread from the application.
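The wait()/notify() handshake between getPicture() and onPreviewFrame() can be boiled down to the following plain-Java sketch. This is not the actual CamDroid class: it strips out the camera calls, and uses notifyAll() plus a "request" flag so that either thread can safely arrive first (the original relies on notify() and the camera only calling back after setOneShotPreviewCallback()).

```java
// Two-thread handshake: a consumer requests one frame and blocks;
// a producer (standing in for the camera callback thread) delivers it.
public class FrameExchange {
    private byte[] currentFrame;
    private boolean decoding;

    // Consumer side, mirroring getPicture(): request a frame, block until
    // it has been delivered, then return it.
    public synchronized byte[] getPicture() {
        try {
            decoding = true;          // flag a pending frame request
            notifyAll();              // wake a producer waiting for a request
            while (decoding) wait();  // block until deliverFrame() finishes
        } catch (InterruptedException e) {
            return null;
        }
        return currentFrame;
    }

    // Producer side, mirroring onPreviewFrame() storing the encoded frame.
    public synchronized void deliverFrame(byte[] frame) throws InterruptedException {
        while (!decoding) wait();     // wait until a frame was requested
        currentFrame = frame;
        decoding = false;
        notifyAll();                  // wake the thread blocked in getPicture()
    }
}
```

Both methods are synchronized on the same object, so the flag and the frame reference are always read and written under the lock, exactly as in the original fragment.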
In the main activity, the public static byte array CurrentJPEG is assigned cameraFrame.getPicture() inside public void run(). The web service then sends it over a socket to the desired IP address.
Correct me if I'm wrong.
Now I still wonder how the image is displayed in the browser as a picture, since you send it byte data, right? Please check this out: http://code.google.com/p/camdroiduni/source/browse/trunk/code/eclipse_workspace/camdroid/src/de/aes/camdroid/WebServer.java
Nothing in that code fragment sends any data to any URL. The getPicture() method returns a byte array, which is probably consumed by some other method/class that funnels it to the web server, which in turn writes it to the browser over its socket.
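As for how raw bytes become a visible picture: the browser renders whatever the server labels as an image. If the JPEG bytes from getPicture() are written as the body of an HTTP response whose Content-Type header says image/jpeg, the browser decodes them as a picture, and the reloading index.html simply re-requests that URL each second. A hedged sketch of that response (my own helper, not the actual WebServer.java):

```java
import java.io.*;

// Write one JPEG frame as a complete HTTP response. The Content-Type
// header is what tells the browser to render the body as an image.
public class JpegResponse {

    public static void writeJpeg(OutputStream out, byte[] jpeg) throws IOException {
        out.write(("HTTP/1.0 200 OK\r\n"
                + "Content-Type: image/jpeg\r\n"
                + "Content-Length: " + jpeg.length + "\r\n"
                + "\r\n").getBytes("UTF-8"));
        out.write(jpeg);  // the raw JPEG bytes from getPicture()
        out.flush();
    }
}
```

So no special image protocol is involved: it is an ordinary HTTP response over TCP, with the header doing the work of identifying the payload.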