I'm trying to use AVCaptureSession to capture video from the camera, and then I'd like to use AVAssetWriter to write the results to a file (specifically, to use multiple AVAssetWriters to write the capture out as chunked video files, but we don't need to complicate this question with that). However, I'm having trouble figuring out where data actually needs to be passed to the AVAssetWriter. In the Apple Developer documentation I've only seen AVCaptureSession data being passed to an AVCaptureFileOutput. Maybe I'm just missing something, though. Can an AVAssetWriter just be used as an output of the capture session? A relevant example or bit of code (while not necessary) would be appreciated. Thank you much!
Take a look at http://www.gdcl.co.uk/2013/02/20/iOS-Video-Encoding.html. This shows how to connect the capture output with the asset writer, and then extracts the data from the asset writer for streaming.
G
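The core pattern from that article can be sketched roughly as follows (the article itself is Objective-C; this Swift sketch is mine, and the class name and output settings are illustrative, not from the article): an AVCaptureVideoDataOutput delivers CMSampleBuffers to a delegate, which appends them to an AVAssetWriterInput.

```swift
import AVFoundation

// Sketch: feed camera sample buffers into an AVAssetWriter.
// Session configuration and error handling are trimmed for brevity.
final class ChunkRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let writer: AVAssetWriter
    private let writerInput: AVAssetWriterInput
    private var sessionStarted = false

    init(outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280,   // illustrative dimensions
            AVVideoHeightKey: 720
        ])
        writerInput.expectsMediaDataInRealTime = true
        writer.add(writerInput)
        super.init()
    }

    // AVCaptureVideoDataOutput calls this for every captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if !sessionStarted {
            // Start the writer's timeline at the first frame's timestamp.
            writer.startWriting()
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        if writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }
    }

    func finish(completion: @escaping () -> Void) {
        writerInput.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}
```

You'd wire this up by adding an AVCaptureVideoDataOutput to the session and setting the recorder as its sample buffer delegate: `dataOutput.setSampleBufferDelegate(recorder, queue: someSerialQueue)`.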
What's your goal, exactly? Because what you're asking for (using an AVAssetWriter as an output for an AVCaptureSession) isn't possible.
Basically, an AVCaptureSession object has inputs (e.g. a camera, represented by some AVCaptureInput subclass) and outputs (in the form of AVCaptureOutput subclasses). And an AVAssetWriter is not an AVCaptureOutput subclass, so there is no way to use it directly from an AVCaptureSession.
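To make the input/output split concrete, here is a minimal sketch of a capture session (device selection and permission handling omitted; the names are standard AVFoundation API):

```swift
import AVFoundation

// Minimal sketch of the capture pipeline described above:
// inputs are AVCaptureInput subclasses, outputs are AVCaptureOutput subclasses.
let session = AVCaptureSession()

// Input: the camera, wrapped in an AVCaptureDeviceInput.
if let camera = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)
}

// Output: AVCaptureMovieFileOutput IS an AVCaptureOutput subclass, so it can
// be added directly. AVAssetWriter is not, so session.addOutput(writer)
// would not even compile.
let fileOutput = AVCaptureMovieFileOutput()
if session.canAddOutput(fileOutput) {
    session.addOutput(fileOutput)
}

session.startRunning()
```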
If you want to use an AVAssetWriter, you'll have to write the data out using an AVCaptureFileOutput instance, read it back with an AVAssetReader, modify your data somehow, and then output it via an AVAssetWriter.
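That read-modify-write pass might look roughly like this (a sketch under the assumptions above: single video track, H.264 output, fixed illustrative dimensions; real code needs error handling and audio):

```swift
import AVFoundation

// Hedged sketch: read a recorded file with AVAssetReader,
// then re-encode it with AVAssetWriter.
func transcode(from inputURL: URL, to outputURL: URL) throws {
    let asset = AVURLAsset(url: inputURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    // nil outputSettings delivers decoded samples in their stored format.
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,   // illustrative dimensions
        AVVideoHeightKey: 720
    ])
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "transcode")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            if let buffer = readerOutput.copyNextSampleBuffer() {
                // This is where you would modify the sample buffer.
                writerInput.append(buffer)
            } else {
                writerInput.markAsFinished()
                writer.finishWriting { /* file is complete */ }
                break
            }
        }
    }
}
```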
One final thing to keep in mind: AVAssetReader is documented not to guarantee real-time operation.