Here is my problem:
I have implemented a server-side application using Red5 which sends an H.264-encoded live stream; on the client side the stream is received as a byte[].
To decode it on the Android client I followed the JavaCV-FFmpeg library. The decoding code is as follows:
public Frame decodeVideo(byte[] data, long timestamp) {
    frame.image = null;
    frame.samples = null;
    avcodec.av_init_packet(pkt);
    BytePointer video_data = new BytePointer(data);
    avcodec.AVCodec codec = avcodec.avcodec_find_decoder(codec_id);
    video_c = avcodec.avcodec_alloc_context3(codec);
    video_c.width(320);
    video_c.height(240);
    video_c.pix_fmt(0); // AV_PIX_FMT_YUV420P
    video_c.flags2(video_c.flags2() | avcodec.CODEC_FLAG2_CHUNKS);
    avcodec.avcodec_open2(video_c, codec, null);
    picture = avcodec.avcodec_alloc_frame();
    pkt.data(video_data);
    pkt.size(data.length);
    int len = avcodec.avcodec_decode_video2(video_c, picture, got_frame, pkt);
    if ((len >= 0) && (got_frame[0] != 0)) {
        // ... process the decoded frame into an IplImage of JavaCV
        // and render it with an ImageView on Android
    }
}
The data received from the server looks like this.
A few frames have the following pattern:
17 01 00 00 00 00 00 00 02 09 10 00 00 00 0F 06 00 01 C0 01 07 09 08 04 9A 00 00 03 00 80 00 00 16 EF 65 88 80 07 00 05 6C 98 90 00...
Many frames have the following pattern:
27 01 00 00 00 00 00 00 02 09 30 00 00 00 0C 06 01 07 09 08 05 9A 00 00 03 00 80 00 00 0D 77 41 9A 02 04 15 B5 06 20 E3 11 E2 3C 46 ....
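For context, those prefix bytes look like the FLV/RTMP video tag header that Red5 forwards: the high nibble of byte 0 is the frame type (1 = keyframe, 2 = inter frame), the low nibble is the codec id (7 = AVC/H.264), and byte 1 is the AVCPacketType (0 = sequence header, 1 = NALU). A minimal, self-contained sketch that classifies the two dumps above (class and method names are mine, not from JavaCV):

```java
public class FlvTagHeader {

    // Returns a short description of an FLV VIDEODATA tag header.
    static String describe(byte[] tag) {
        int frameType = (tag[0] >> 4) & 0x0F;   // 1 = keyframe, 2 = inter frame
        int codecId = tag[0] & 0x0F;            // 7 = AVC (H.264)
        int avcPacketType = tag[1] & 0xFF;      // 0 = sequence header, 1 = NALU
        return (frameType == 1 ? "keyframe" : "interframe")
                + (codecId == 7 ? "/AVC" : "/codec" + codecId)
                + (avcPacketType == 0 ? "/sequence-header" : "/NALU");
    }

    public static void main(String[] args) {
        // First bytes of the two captured frames above
        byte[] few = {0x17, 0x01, 0x00, 0x00, 0x00};
        byte[] many = {0x27, 0x01, 0x00, 0x00, 0x00};
        System.out.println(describe(few));   // keyframe/AVC/NALU
        System.out.println(describe(many));  // interframe/AVC/NALU
    }
}
```

So the "few frames" starting with 17 are keyframes and the "many frames" starting with 27 are inter frames, which matches the pattern in the dumps.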
With the H.264 codec, the decoder outputs length > 0 but got_frame is always 0.
With the MPEG1 codec, the decoder outputs length > 0 and got_frame > 0, but the output image is green or distorted.
However, following JavaCV's FFmpegFrameGrabber code, I can decode local H.264-encoded files with code similar to the above.
I wonder what details I am missing: some header-related data manipulation, or setting up the codec appropriately for the decoder.
Any suggestions or help appreciated.
Thanks in advance.
At last... I finally got it working after lots of R&D.
What I was missing was analyzing the packet structure. The stream is not bare H.264: each packet starts with an FLV-style video tag header. Byte 0 says whether the packet carries a keyframe (0x17) or an inter frame (0x27), and byte 1 is the AVC packet type: 0 means the packet is the sequence header (the decoder configuration with SPS/PPS), 1 means it is an actual coded frame.
So the coded frames have to be decoded using the configuration carried in the sequence-header packet.
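As a side note (my reading of the hex dumps, not something from the original code): after the 5-byte tag header the payload is in AVCC format, i.e. each NAL unit is prefixed with a 4-byte big-endian length instead of an Annex-B start code. A small self-contained sketch that walks those units:

```java
public class NaluWalker {

    // Walks a 4-byte length-prefixed (AVCC) payload and returns the NAL unit types.
    static int[] naluTypes(byte[] payload) {
        java.util.List<Integer> types = new java.util.ArrayList<>();
        int pos = 0;
        while (pos + 4 <= payload.length) {
            int len = ((payload[pos] & 0xFF) << 24) | ((payload[pos + 1] & 0xFF) << 16)
                    | ((payload[pos + 2] & 0xFF) << 8) | (payload[pos + 3] & 0xFF);
            pos += 4;
            if (len <= 0 || pos >= payload.length) break;
            types.add(payload[pos] & 0x1F); // low 5 bits of the first NALU byte
            pos += len;
        }
        int[] out = new int[types.size()];
        for (int i = 0; i < out.length; i++) out[i] = types.get(i);
        return out;
    }

    public static void main(String[] args) {
        // Payload of the first dump (after the 5 header bytes), truncated to the
        // first two complete NAL units: an access unit delimiter and an SEI.
        byte[] payload = {
            0x00, 0x00, 0x00, 0x02, 0x09, 0x10,                    // AUD, length 2
            0x00, 0x00, 0x00, 0x0F, 0x06, 0x00, 0x01, (byte) 0xC0, // SEI, length 15
            0x01, 0x07, 0x09, 0x08, 0x04, (byte) 0x9A, 0x00, 0x00,
            0x03, 0x00, (byte) 0x80
        };
        System.out.println(java.util.Arrays.toString(naluTypes(payload))); // [9, 6]
    }
}
```

In the full keyframe dump the next unit starts with 0x65 (type 5, an IDR slice), and in the inter-frame dump with 0x41 (type 1, a non-IDR slice), which is consistent with this layout. The FFmpeg H.264 decoder can consume length-prefixed NALUs directly, provided extradata is set from the sequence header as in the code below.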
So the final code looks like this:
public IplImage decodeFromVideo(byte[] data, long timeStamp) {
    avcodec.av_init_packet(receivedVideoPacket); // empty AVPacket
    /*
     * Byte 1 of the FLV tag is the AVC packet type:
     * 0 = sequence header (decoder configuration), 1 = coded frame (NALU).
     */
    byte frameFlag = data[1];
    // Skip the 5-byte tag header (frame type/codec id, packet type, composition time)
    byte[] subData = Arrays.copyOfRange(data, 5, data.length);
    BytePointer videoData = new BytePointer(subData);
    if (frameFlag == 0) {
        // Sequence header: (re)configure the decoder, passing the payload as extradata
        avcodec.AVCodec codec = avcodec
                .avcodec_find_decoder(avcodec.AV_CODEC_ID_H264);
        if (codec != null) {
            videoCodecContext = avcodec.avcodec_alloc_context3(codec);
            videoCodecContext.width(320);
            videoCodecContext.height(240);
            videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P);
            videoCodecContext.codec_type(avutil.AVMEDIA_TYPE_VIDEO);
            videoCodecContext.extradata(videoData);
            videoCodecContext.extradata_size(videoData.capacity());
            videoCodecContext.flags2(videoCodecContext.flags2()
                    | avcodec.CODEC_FLAG2_CHUNKS);
            avcodec.avcodec_open2(videoCodecContext, codec,
                    (PointerPointer) null);
            if ((videoCodecContext.time_base().num() > 1000)
                    && (videoCodecContext.time_base().den() == 1)) {
                videoCodecContext.time_base().den(1000);
            }
        } else {
            Log.e("test", "Codec could not be opened");
        }
    }
    if ((decodedPicture = avcodec.avcodec_alloc_frame()) != null) {
        if ((processedPicture = avcodec.avcodec_alloc_frame()) != null) {
            int width = getImageWidth() > 0 ? getImageWidth()
                    : videoCodecContext.width();
            int height = getImageHeight() > 0 ? getImageHeight()
                    : videoCodecContext.height();
            switch (imageMode) {
            case COLOR:
            case GRAY:
                int fmt = avutil.AV_PIX_FMT_BGR24; // == 3
                int size = avcodec.avpicture_get_size(fmt, width, height);
                processPictureBuffer = new BytePointer(avutil.av_malloc(size));
                avcodec.avpicture_fill(new AVPicture(processedPicture),
                        processPictureBuffer, fmt, width, height);
                returnImageFrame = opencv_core.IplImage.createHeader(320, 240, 8, 1);
                break;
            case RAW:
                processPictureBuffer = null;
                returnImageFrame = opencv_core.IplImage.createHeader(320, 240, 8, 1);
                break;
            default:
                Log.d("showit", "Unsupported image mode: " + imageMode);
            }
            receivedVideoPacket.data(videoData);
            receivedVideoPacket.size(videoData.capacity());
            receivedVideoPacket.pts(timeStamp);
            videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P);
            decodedFrameLength = avcodec.avcodec_decode_video2(videoCodecContext,
                    decodedPicture, isVideoDecoded, receivedVideoPacket);
            if ((decodedFrameLength >= 0) && (isVideoDecoded[0] != 0)) {
                // .... process the image the same way as JavaCV ....
            }
        }
    }
    return returnImageFrame;
}
Hope it will help others.
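One more note on the frameFlag == 0 branch above: the payload passed as extradata is an AVCDecoderConfigurationRecord, which bundles the SPS and PPS plus the NALU length size. If you ever need to inspect it yourself, here is a hedged, self-contained sketch (field layout per the MP4/AVC file format spec; the record in main is synthetic, not from the question's stream):

```java
public class AvcConfigRecord {

    // Returns {spsCount, ppsCount, nalLengthSize} parsed from an
    // AVCDecoderConfigurationRecord (the frameFlag == 0 payload).
    static int[] inspect(byte[] rec) {
        int nalLengthSize = (rec[4] & 0x03) + 1; // usually 4
        int numSps = rec[5] & 0x1F;
        int pos = 6;
        for (int i = 0; i < numSps; i++) {
            int len = ((rec[pos] & 0xFF) << 8) | (rec[pos + 1] & 0xFF);
            pos += 2 + len; // skip the 2-byte length plus the SPS itself
        }
        int numPps = rec[pos] & 0xFF;
        return new int[]{numSps, numPps, nalLengthSize};
    }

    public static void main(String[] args) {
        // Synthetic record: version 1, baseline profile (0x42), level 3.0 (0x1E),
        // 4-byte NALU lengths, one 4-byte SPS, one 2-byte PPS.
        byte[] rec = {
            0x01, 0x42, 0x00, 0x1E, (byte) 0xFF,
            (byte) 0xE1, 0x00, 0x04, 0x67, 0x42, 0x00, 0x1E, // SPS
            0x01, 0x00, 0x02, 0x68, (byte) 0xCE              // PPS
        };
        int[] info = inspect(rec);
        System.out.println("SPS=" + info[0] + " PPS=" + info[1]
                + " nalLengthSize=" + info[2]);
    }
}
```

The nalLengthSize field is also why the frame payloads use 4-byte length prefixes rather than start codes.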