I'm trying to extract audio from video. This code works well:
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('1.mp4')
  .output('1.mp3')
  .noVideo()
  .format('mp3')
  .outputOptions('-ab', '192k')
  .run();
But if I read the file with a stream like this:
const fs = require('fs');

const video = fs.createReadStream('1.mp4');
const audio = fs.createWriteStream('1.mp3');

ffmpeg(video)
  .output(audio)
  .noVideo()
  .format('mp3')
  .outputOptions('-ab', '192k')
  .run();
The output file is only 1 KB and contains no audio at all.
How can I extract audio using streams?
To investigate further, I added callbacks for the various errors that can occur while transcoding, as well as for other events. The first hint was the following payload on the progress
event:
{ frames: NaN,
currentFps: NaN,
currentKbps: NaN,
targetSize: 0,
timemark: '00:00:00.00' }
There was also a stderr
event:
ffmpeg version n4.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.2.1 (GCC) 20180831
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x562c9e18de80] stream 0, offset 0x30: partial file
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x562c9e18de80] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 240x180, 374 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pipe:0':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf54.63.104
Duration: 00:00:14.72, start: 0.000000, bitrate: N/A
Stream #0:0(und): Video: h264 (avc1 / 0x31637661), none, 240x180, 374 kb/s, 25 fps, 25 tbr, 12800 tbn, 25600 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 75 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream mapping:
Stream #0:1 -> #0:0 (aac (native) -> mp3 (libmp3lame))
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x562c9e18de80] stream 1, offset 0xa1e: partial file
pipe:0: Invalid data found when processing input
Output #0, mp3, to 'pipe:1':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
TSSE : Lavf58.20.100
Stream #0:0(und): Audio: mp3 (libmp3lame), 44100 Hz, stereo, fltp, 192 kb/s (default)
Metadata:
handler_name : SoundHandler
encoder : Lavc58.35.100 libmp3lame
size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
{ frames: NaN,
currentFps: NaN,
currentKbps: NaN,
targetSize: 0,
timemark: '00:00:00.00' }
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)
The latter explains the empty file you're seeing in the output. As to why the error only happens while streaming the file, and not while opening it through another method, this answer explains the details: https://stackoverflow.com/a/40028894/1494833
Unfortunately, the library doesn't yet support the movflags
input option recommended there (https://github.com/fluent-ffmpeg/node-fluent-ffmpeg/issues/823).
In my experience, ffmpeg doesn't like using streams as inputs. The following code worked for me: first write the input stream to a file, let ffmpeg do its thing, then create streams from the result:
let video = `${files[0].filename}`;

readstream.pipe(fs.createWriteStream(`./${video}`))
  .on('error', (error) => {
    console.log('Some error occurred in download: ' + error);
    res.send(error);
  })
  .on('finish', () => {
    let audio = `${video.split('.')[0]}.mp3`;
    ffmpeg(video)
      .output(audio)
      .noVideo()
      .format('mp3')
      .on('end', (stdout, stderr) => {
        let newReadstream = fs.createReadStream(audio);
      })
      .run();
  });