I'm working on a project based on the latest FFmpeg git source tree, linking against the shared DLLs published by Zeranoe at https://ffmpeg.zeranoe.com/builds/
The playback code works and loops. It plays back raw h265 files, as well as mpeg, avi, and mpg files. However, as soon as an mp4 or mkv container is specified as the input file, regardless of what's inside, the codec dumps a lot of errors. It doesn't matter whether the stream is HEVC or h264.
[h264 @ 00000000xyz] No start code is found
[h264 @ 00000000xyz] Error splitting the input into NAL units.
To make things stranger still, ffplay.exe plays these files just fine.
I realize I could probably work around this by converting the files to a raw format first, but I would like to be able to read and parse mp4 files as they are. Since I am using the prebuilt libs from Zeranoe, my guess would be that something was not enabled during the build, but then I would expect ffplay to fail too. Do I need to set a flag in the format_context or codec_context, or provide some sort of filter identifier?
Movies that play fine came from http://bbb3d.renderfarming.net/download.html, http://www.w6rz.net/ and http://www.sample-videos.com/
These work:
big_buck_bunny_480p_surround-fix.avi
bigbuckbunny_480x272.h265
Being a total noob at ffmpeg, please help me understand what is wrong and how to fix it. If the prebuilt libs are the culprit, the second question is whether someone has a convenient cmake setup to build this for Windows x64 and x86, debug and release targets.
Here's the source for initializing ffmpeg for reading:
avdevice_register_all();
avfilter_register_all();
av_register_all();
avformat_network_init();
The format is parsed as follows:
m_FormatContext = avformat_alloc_context();
if (avformat_open_input(&m_FormatContext, file.GetPath().ToString().c_str(), NULL, NULL) != 0)
{
    //std::cout << "failed to open input" << std::endl;
    success = false;
}

// find stream info
if (success)
{
    if (avformat_find_stream_info(m_FormatContext, NULL) < 0)
    {
        //std::cout << "failed to get stream info" << std::endl;
        success = false;
    }
}
The stream is opened as follows:
m_VideoStream = avstream;
m_FormatContext = formatContext;
if (m_VideoStream)
{
    m_StreamIndex = m_VideoStream->stream_identifier;
    AVCodecParameters *codecpar = m_VideoStream->codecpar;
    if (codecpar)
    {
        AVCodecID codec_id = codecpar->codec_id;
        m_Decoder = avcodec_find_decoder(codec_id);
        if (m_Decoder)
        {
            m_CodecContext = avcodec_alloc_context3(m_Decoder);
            if (m_CodecContext)
            {
                m_CodecContext->width = codecpar->width;
                m_CodecContext->height = codecpar->height;
                m_VideoSize = i3(codecpar->width, codecpar->height, 1);
                success = 0 == avcodec_open2(m_CodecContext, m_Decoder, NULL);
                if (success)
                {
                    if (m_CodecContext)
                    {
                        int size = av_image_get_buffer_size(format, m_CodecContext->width, m_CodecContext->height, 1);
                        if (size > 0)
                        {
                            av_frame = av_frame_alloc();
                            gl_frame = av_frame_alloc();
                            uint8_t *internal_buffer = (uint8_t *)av_malloc(size * sizeof(uint8_t));
                            av_image_fill_arrays((uint8_t**)((AVPicture *)gl_frame->data), (int*)((AVPicture *)gl_frame->linesize), internal_buffer, format, m_CodecContext->width, m_CodecContext->height, 1);
                            m_Packet = (AVPacket *)av_malloc(sizeof(AVPacket));
                        }
                    }
                }
                if (!success)
                {
                    avcodec_close(m_CodecContext);
                    avcodec_free_context(&m_CodecContext);
                    m_CodecContext = NULL;
                    m_Decoder = NULL;
                    m_VideoStream = NULL;
                }
            }
            else
            {
                m_Decoder = NULL;
                m_VideoStream = NULL;
            }
        }
    }
}
And decoding on a single thread:
do
{
    if (av_read_frame(m_FormatContext, m_Packet) < 0)
    {
        av_packet_unref(m_Packet);
        m_AllPacketsSent = true;
    }
    else
    {
        if (m_Packet->stream_index == m_StreamIndex)
        {
            avcodec_send_packet(m_CodecContext, m_Packet);
        }
    }
    int frame_finished = avcodec_receive_frame(m_CodecContext, av_frame);
    if (frame_finished == 0)
    {
        if (!conv_ctx)
        {
            conv_ctx = sws_getContext(m_CodecContext->width,
                m_CodecContext->height, m_CodecContext->pix_fmt,
                m_CodecContext->width, m_CodecContext->height, format, SWS_BICUBIC, NULL, NULL, NULL);
        }
        sws_scale(conv_ctx, av_frame->data, av_frame->linesize, 0, m_CodecContext->height, gl_frame->data, gl_frame->linesize);
        switch (format)
        {
            case AV_PIX_FMT_BGR32_1:
            case AV_PIX_FMT_RGB32_1:
            case AV_PIX_FMT_0BGR32:
            case AV_PIX_FMT_0RGB32:
            case AV_PIX_FMT_BGR32:
            case AV_PIX_FMT_RGB32:
            {
                m_CodecContext->bits_per_raw_sample = 32; break;
            }
            default:
            {
                FWASSERT(format == AV_PIX_FMT_RGB32, "The format changed, update the bits per raw sample!"); break;
            }
        }
        size_t bufferSize = m_CodecContext->width * m_CodecContext->height * m_CodecContext->bits_per_raw_sample / 8;
        m_Buffer.Realloc(bufferSize, false, gl_frame->data[0]);
        m_VideoSize = i3(m_CodecContext->width, m_CodecContext->height, 1);
        result = true;
        // sends the image buffer straight to the locked texture here..
        // glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, codec_ctx->width, codec_ctx->height, GL_RGB, GL_UNSIGNED_BYTE, gl_frame->data[0]);
    }
    av_packet_unref(m_Packet);
} while (m_Packet->stream_index != m_StreamIndex);
m_FrameDecoded = result;
Any insight is appreciated!
Instead of manually providing the width and height here:
m_CodecContext->width = codecpar->width;
m_CodecContext->height = codecpar->height;
you should call avcodec_parameters_to_context().
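A minimal sketch of what that change looks like, reusing the member names from the question's code:

m_CodecContext = avcodec_alloc_context3(m_Decoder);
if (m_CodecContext)
{
    // Copies width, height, pix_fmt and, crucially, the extradata
    // (the SPS/PPS that mp4/mkv store out of band) into the decoder
    // context before it is opened.
    if (avcodec_parameters_to_context(m_CodecContext, codecpar) < 0)
    {
        success = false;
    }
    else
    {
        success = 0 == avcodec_open2(m_CodecContext, m_Decoder, NULL);
    }
}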
To add a bit more explanation for whoever bumps into this: mkv containers store the SPS/PPS data aside from the frames, so a decoder context constructed without that extradata will always fail with the NAL search error above.
Read H264 SPS & PPS NAL bytes using libavformat APIs
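For reference, once avformat_find_stream_info() has run, those bytes are already available on the stream, so you can inspect them without parsing the bitstream yourself; for h264 in mp4/mkv the extradata holds the avcC record containing the SPS and PPS NAL units. A sketch, assuming m_VideoStream is valid:

AVCodecParameters *codecpar = m_VideoStream->codecpar;
if (codecpar && codecpar->extradata && codecpar->extradata_size > 0)
{
    // Length-prefixed SPS/PPS (the avcC box for mp4);
    // avcodec_parameters_to_context() copies this into
    // AVCodecContext->extradata so the decoder can use it.
    const uint8_t *sps_pps = codecpar->extradata;
    int sps_pps_size = codecpar->extradata_size;
    // ... inspect or hand off as needed ...
}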
If you really have no luck getting AVCodecParameters due to some code/architecture issues, you have to fill AVCodecContext->extradata manually, specifying the SPS/PPS fields required by the h264 stream parser.
How to fill 'extradata' field of AVCodecContext with SPS and PPS data?
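A rough sketch of that manual route; the sps/pps buffers and their sizes below are hypothetical placeholders you would have to obtain yourself (e.g. from an SDP or the container headers), and the extradata must be in place before avcodec_open2():

// sps/sps_size and pps/pps_size are hypothetical buffers obtained elsewhere.
static const uint8_t start_code[4] = { 0, 0, 0, 1 };
int extradata_size = 4 + sps_size + 4 + pps_size;
// FFmpeg requires the usual input-buffer padding on extradata.
uint8_t *extradata = (uint8_t *)av_mallocz(extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
memcpy(extradata, start_code, 4);
memcpy(extradata + 4, sps, sps_size);
memcpy(extradata + 4 + sps_size, start_code, 4);
memcpy(extradata + 4 + sps_size + 4, pps, pps_size);
// Annex-B style extradata, handed to the decoder context.
m_CodecContext->extradata = extradata;
m_CodecContext->extradata_size = extradata_size;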