In my Android app, I'm trying to create a video file with an audio track added at a given time position in the video.
I used a MediaMuxer and changed the value of presentationTimeUs
to shift the audio.
But apparently this is not the way to go, because the starting time of the video gets shifted as well.
Another problem is that MP3 audio does not work.
Here is my attempt so far:
final long audioPositionUs = 10000000; // desired audio offset: 10 seconds

File fileOut = new File(Environment.getExternalStoragePublicDirectory(
        Environment.DIRECTORY_MOVIES) + "/output.mp4");
fileOut.createNewFile();

MediaExtractor videoExtractor = new MediaExtractor();
MediaExtractor audioExtractor = new MediaExtractor();
AssetFileDescriptor videoDescriptor = getAssets().openFd("video.mp4");
// AssetFileDescriptor audioDescriptor = getAssets().openFd("audio.mp3"); // ?!
AssetFileDescriptor audioDescriptor = getAssets().openFd("audio.aac");
videoExtractor.setDataSource(videoDescriptor.getFileDescriptor(),
        videoDescriptor.getStartOffset(), videoDescriptor.getLength());
audioExtractor.setDataSource(audioDescriptor.getFileDescriptor(),
        audioDescriptor.getStartOffset(), audioDescriptor.getLength());

// Select the first video track of the input movie.
MediaFormat videoFormat = null;
for (int i = 0; i < videoExtractor.getTrackCount(); i++) {
    if (videoExtractor.getTrackFormat(i).getString(
            MediaFormat.KEY_MIME).startsWith("video/")) {
        videoExtractor.selectTrack(i);
        videoFormat = videoExtractor.getTrackFormat(i);
        break;
    }
}

// The audio file is assumed to contain a single track.
audioExtractor.selectTrack(0);
MediaFormat audioFormat = audioExtractor.getTrackFormat(0);

MediaMuxer muxer = new MediaMuxer(fileOut.getAbsolutePath(),
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int videoTrack = muxer.addTrack(videoFormat);
int audioTrack = muxer.addTrack(audioFormat);

boolean end = false;
int sampleSize = 256 * 1024;
ByteBuffer videoBuffer = ByteBuffer.allocate(sampleSize);
ByteBuffer audioBuffer = ByteBuffer.allocate(sampleSize);
MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();
videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
muxer.start();

// Copy the video samples with their original timestamps.
while (!end) {
    videoBufferInfo.size = videoExtractor.readSampleData(videoBuffer, 0);
    if (videoBufferInfo.size < 0) {
        end = true;
        videoBufferInfo.size = 0;
    } else {
        videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
        videoBufferInfo.flags = videoExtractor.getSampleFlags();
        muxer.writeSampleData(videoTrack, videoBuffer, videoBufferInfo);
        videoExtractor.advance();
    }
}

// Copy the audio samples, shifting each timestamp by audioPositionUs.
end = false;
while (!end) {
    audioBufferInfo.size = audioExtractor.readSampleData(audioBuffer, 0);
    if (audioBufferInfo.size < 0) {
        end = true;
        audioBufferInfo.size = 0;
    } else {
        audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime()
                + audioPositionUs;
        audioBufferInfo.flags = audioExtractor.getSampleFlags();
        muxer.writeSampleData(audioTrack, audioBuffer, audioBufferInfo);
        audioExtractor.advance();
    }
}

muxer.stop();
muxer.release();
Can you please give details (and code if possible) to help me solve this?
Send AudioRecord's samples to a MediaCodec + MediaMuxer wrapper. Using the system time at audioRecord.read(...) works sufficiently well as an audio timestamp, provided you poll often enough to avoid filling up AudioRecord's internal buffer (to avoid drift between the time you call read and the time AudioRecord recorded the samples). Too bad AudioRecord doesn't directly communicate timestamps...
// Setup AudioRecord
while (isRecording) {
    // Grab the wall-clock time just before reading and use it as the sample timestamp.
    audioPresentationTimeNs = System.nanoTime();
    audioRecord.read(dataBuffer, 0, samplesPerFrame);
    // hwEncoder is the MediaCodec + MediaMuxer wrapper mentioned above.
    hwEncoder.offerAudioEncoder(dataBuffer.clone(), audioPresentationTimeNs);
}
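The wrapper itself isn't shown in the answer; purely as an illustration (audioEncoder, pcmData and TIMEOUT_US are names assumed here, not part of the answer), offering a buffer to an AAC MediaCodec with that timestamp could look roughly like this, converting the polled nanoseconds to the microseconds MediaCodec expects:

// Hypothetical sketch of what offerAudioEncoder could do internally:
// hand the PCM bytes to the AAC encoder with the polled timestamp.
int inputIndex = audioEncoder.dequeueInputBuffer(TIMEOUT_US);
if (inputIndex >= 0) {
    ByteBuffer inputBuffer = audioEncoder.getInputBuffer(inputIndex);
    inputBuffer.clear();
    inputBuffer.put(pcmData);
    audioEncoder.queueInputBuffer(inputIndex, 0, pcmData.length,
            presentationTimeNs / 1000, 0);
}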
Note that AudioRecord only guarantees support for 16 bit PCM samples, while MediaCodec's input buffers are filled from byte[] data. Passing a byte[] to audioRecord.read(dataBuffer,...) will split the 16 bit samples into 8 bit bytes for you.
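For reference, a minimal AudioRecord setup matching that description might look like the sketch below; the sample rate, mono channel and buffer sizing are assumptions, not values from the answer.

// 16-bit PCM, mono; samplesPerFrame is the number of bytes read per loop
// iteration (e.g. 2 * 1024 bytes for one 1024-sample AAC frame).
int sampleRate = 44100;
int minBufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, 4 * minBufferSize);
byte[] dataBuffer = new byte[2 * 1024];
int samplesPerFrame = dataBuffer.length;
audioRecord.startRecording();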
I found that polling in this way still occasionally generated a timestampUs XXX < lastTimestampUs XXX for Audio track error, so I included some logic to keep track of the bufferInfo.presentationTimeUs reported by mediaCodec.dequeueOutputBuffer(bufferInfo, timeoutMs) and adjust if necessary before calling mediaMuxer.writeSampleData(trackIndex, encodedData, bufferInfo).
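That adjustment isn't spelled out in the answer, but a guard of the kind described, with hypothetical names (prevAudioTimestampUs, writeAudioSample) introduced here only for illustration, could be:

private long prevAudioTimestampUs = -1;

private void writeAudioSample(MediaMuxer mediaMuxer, int trackIndex,
        ByteBuffer encodedData, MediaCodec.BufferInfo bufferInfo) {
    if (bufferInfo.presentationTimeUs <= prevAudioTimestampUs) {
        // Keep timestamps strictly increasing so MediaMuxer doesn't reject the sample.
        bufferInfo.presentationTimeUs = prevAudioTimestampUs + 1;
    }
    prevAudioTimestampUs = bufferInfo.presentationTimeUs;
    mediaMuxer.writeSampleData(trackIndex, encodedData, bufferInfo);
}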