This is a more theoretical question/discussion, as I haven't been able to find a clear answer in other SO posts or sources around the web. There seem to be a lot of options:
Brad Larson's comment about AVFoundation
Video Decode Acceleration
VideoToolbox
If I want to do hardware decoding of H.264 (.mov) files on iOS, can I simply use AVFoundation and AVAssets, or should I use VideoToolbox (or some other framework)? And when using these, how can I profile/benchmark the hardware performance while running a project? Is it simply by looking at the CPU usage in the "Debug Navigator" in Xcode?
In short, I'm asking whether AVFoundation and AVAssets perform hardware decoding or not. Are they sufficient, and how do I benchmark the actual performance?
Thanks!
If you want to decode a local file that is already on your iOS device, I'd use AVFoundation.
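For example, here's a minimal sketch of pulling decoded frames out of a local file with AVAssetReader. The file path is a placeholder and error handling is trimmed:

```swift
import AVFoundation

// Read decoded frames from a local .mov file with AVAssetReader.
// The decode itself runs on the hardware decoder where available.
let url = URL(fileURLWithPath: "/path/to/video.mov") // placeholder path
let asset = AVAsset(url: url)

guard let track = asset.tracks(withMediaType: .video).first,
      let reader = try? AVAssetReader(asset: asset) else {
    fatalError("Could not open the video track")
}

// Request decoded BGRA pixel buffers rather than compressed samples.
let output = AVAssetReaderTrackOutput(
    track: track,
    outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                     kCVPixelFormatType_32BGRA]
)
reader.add(output)

guard reader.startReading() else {
    fatalError("Failed to start reading")
}

while let sampleBuffer = output.copyNextSampleBuffer() {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        // One decoded frame per pixelBuffer; process it here.
        _ = pixelBuffer
    }
}
```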
If you want to decode a network stream (RTP or RTMP), use Video Toolbox, since you have to unpack the video stream yourself.
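Here's a sketch of the Video Toolbox side, assuming you have already parsed the SPS/PPS NAL units from the stream into a CMVideoFormatDescription (that unpacking is exactly the part you must do yourself):

```swift
import CoreMedia
import VideoToolbox

// Create a decompression session for an H.264 stream. The formatDesc
// parameter is assumed to be built from the stream's SPS/PPS, e.g. via
// CMVideoFormatDescriptionCreateFromH264ParameterSets.
func makeDecoderSession(formatDesc: CMVideoFormatDescription) -> VTDecompressionSession? {
    var session: VTDecompressionSession?

    var callback = VTDecompressionOutputCallbackRecord(
        decompressionOutputCallback: { _, _, status, _, imageBuffer, _, _ in
            // Called once per decoded frame; imageBuffer is a CVPixelBuffer.
            guard status == noErr, let frame = imageBuffer else { return }
            _ = frame
        },
        decompressionOutputRefCon: nil
    )

    // A nil decoder specification is fine on iOS: VideoToolbox uses the
    // hardware decoder by default when one is available.
    let status = VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: formatDesc,
        decoderSpecification: nil,
        imageBufferAttributes: nil,
        outputCallback: &callback,
        decompressionSessionOut: &session
    )
    return status == noErr ? session : nil
}
```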
With AVFoundation or Video Toolbox you will get hardware decoding.
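As for benchmarking: a rough check (my own approach, not an official metric) is to time how many frames per second the reader delivers while watching the CPU gauge in Xcode's Debug Navigator or Instruments. Hardware decode shows high frame throughput with the CPU mostly idle:

```swift
import AVFoundation
import QuartzCore

// Time how fast AVAssetReader delivers decoded frames. Low CPU usage
// alongside high throughput suggests the hardware decoder is doing the work.
func measureDecodeFPS(output: AVAssetReaderTrackOutput) -> Double {
    var frames = 0
    let start = CACurrentMediaTime()
    while output.copyNextSampleBuffer() != nil {
        frames += 1
    }
    let elapsed = CACurrentMediaTime() - start
    return elapsed > 0 ? Double(frames) / elapsed : 0
}
```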