I have an AVPlayer that is playing an HLS video stream. My user interface provides a row of buttons, one for each "chapter" in the video (the buttons are labeled "1", "2", "3"). The app downloads some metadata from a server which contains the list of chapter cut-in points, denoted in seconds. For example, one video is 12 minutes long and its chapter cut-in points are 0, 58, 71, 230, 530, and so on.
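For concreteness, the downloaded metadata boils down to something like this (a Swift sketch; the names are made up, not from our actual code):

// Hypothetical model of the downloaded chapter metadata:
// one cut-in point, in seconds, per chapter button.
struct ChapterMetadata {
    let cutInPoints: [Double]   // e.g. [0, 58, 71, 230, 530]
}

// Button "1" maps to index 0, button "2" to index 1, and so on.
func seekTime(forButton index: Int, in metadata: ChapterMetadata) -> Double {
    return metadata.cutInPoints[index]
}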
When the user taps one of the "chapter buttons" the button handler code does this:
[self.avPlayer pause];
[self.avPlayer seekToTime:CMTimeMakeWithSeconds(seekTime, 600)
          toleranceBefore:kCMTimeZero
           toleranceAfter:kCMTimeZero
        completionHandler:^(BOOL finished) {
    [self.avPlayer play];
}];
Where "seekTime" is a local var which contains the cut-in point (as described above).
The problem is that the video does not always start at the correct point. Sometimes it does. But sometimes it starts anywhere from a tenth of a second to 2 seconds BEFORE the requested seekTime. It NEVER starts after the requested seekTime.
Here are some stats on the video encoding:
Encoder: HandBrakeCLI
Codec: H.264
Frame rate: 24 (actually 23.976, same as how it was shot)
Video bitrate: multiple bitrates (64/150/300/500/800/1200)
Audio bitrate: 128k
Keyframes: every 23.976 frames (1 per second)
I am using the Apple mediafilesegmenter tool, of course, and the variantplaylistcreator to generate the playlist.
The files are being served from an Amazon Cloud/S3 bucket.
One area I remain unclear about is CMTimeMakeWithSeconds - I have tried several variations based on different articles and docs I have read. For example, in the excerpt above I am using:
CMTimeMakeWithSeconds(seekTime, 600)
I have also tried:
CMTimeMakeWithSeconds(seekTime, 1)
I can't tell which is correct, though BOTH seem to produce the same inconsistent results!
I have also tried:
CMTimeMakeWithSeconds(seekTime, 23.976)
Some articles claim this works like a numerator/denominator, so n/1 should be correct where 'n' is the number of seconds (as in CMTimeMakeWithSeconds(n, 1)). But the code was originally created by a different programmer (who is gone now) and he used 600 for the preferredTimeScale (i.e. CMTimeMakeWithSeconds(n, 600)).
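To make the numerator/denominator interpretation concrete, here is a quick Swift sketch of what I understand CMTime to be doing (the values are just illustrative; 600 is apparently a common choice because it divides evenly by 24, 25, and 30 fps):

import CoreMedia

// CMTime stores an integer value over an integer timescale, so the
// preferredTimescale sets the finest increment the time can represent.
let coarse = CMTimeMakeWithSeconds(58.25, preferredTimescale: 1)
print(CMTimeGetSeconds(coarse))   // 58.0 - sub-second precision is lost

let fine = CMTimeMakeWithSeconds(58.25, preferredTimescale: 600)
print(CMTimeGetSeconds(fine))     // 58.25 - exact, since 58.25 * 600 is an integer

Since our cut-in points are whole numbers of seconds, though, none of these variants should differ by a tenth of a second, let alone two seconds.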
Can anyone offer any clues as to what I am doing wrong, or whether the kind of accuracy I am trying to achieve is even possible?
And in case someone is tempted to offer "alternative" solutions: we are already considering breaking the video up into separate streams, one per chapter, but we do not believe that will give us the same performance - changing chapters will take longer because a new AVPlayerItem will have to be created and loaded each time. So if you think this is the only solution that will work (and we do expect it will achieve the result we want - i.e. each chapter WILL start exactly where we want it to), feel free to say so. A rough sketch of what we have in mind follows.
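Here is that sketch (Swift; chapterURLs is a hypothetical array of per-chapter playlist URLs):

import AVFoundation

// One playlist per chapter; items are created up front so that switching
// chapters only pays the buffering cost, not the item-creation cost.
let chapterURLs: [URL] = []   // hypothetical: one .m3u8 URL per chapter
let chapterItems = chapterURLs.map { AVPlayerItem(url: $0) }
let player = AVPlayer()

func playChapter(at index: Int) {
    // replaceCurrentItem(with:) keeps the player alive, but the new item
    // still has to buffer before playback actually resumes.
    player.replaceCurrentItem(with: chapterItems[index])
    player.play()
}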
Thanks in advance!
I had the same problem with seekToTime and solved it with this code. The trick is to use the timescale of the current item's asset:

int32_t timeScale = self.player.currentItem.asset.duration.timescale;
CMTime time = CMTimeMakeWithSeconds(77.000000, timeScale);
[self.player seekToTime:time toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
Swift version:
let playerTimescale = self.player.currentItem?.asset.duration.timescale ?? 1
let time = CMTime(seconds: 77.000000, preferredTimescale: playerTimescale)
self.player.seek(to: time, toleranceBefore: kCMTimeZero, toleranceAfter: kCMTimeZero) { (finished) in
    /* Add your completion code here */
}
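For the chapter-button case in the question, the same idea can be wrapped in a small helper (a sketch, not from the original post; note that newer Swift SDKs spell kCMTimeZero as CMTime.zero):

import AVFoundation

func seekToChapter(_ player: AVPlayer, cutInSeconds: Double) {
    // Use the asset's own timescale so the target lands on a representable time;
    // 600 is a safe fallback because it divides evenly by common frame rates.
    let timescale = player.currentItem?.asset.duration.timescale ?? 600
    let time = CMTime(seconds: cutInSeconds, preferredTimescale: timescale)
    player.pause()
    // Zero tolerances request an exact seek rather than the nearest keyframe.
    player.seek(to: time, toleranceBefore: .zero, toleranceAfter: .zero) { _ in
        player.play()
    }
}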