 

Web Audio API: How to play a stream of MP3 chunks

So I'm trying to use the Web Audio API to decode and play MP3 file chunks streamed to the browser from Node.js via Socket.IO.

Is my only option, in this context, to create a new AudioBufferSourceNode for each audio data chunk received, or is it possible to create a single AudioBufferSourceNode for all chunks and simply append the new audio data to the end of the source node's buffer attribute?

Currently, this is how I'm receiving the MP3 chunks, decoding them, and scheduling them for playback. I have already verified that each chunk being received is a valid MP3 chunk and is being successfully decoded by the Web Audio API.

var audioContext = new AudioContext();
var startTime = 0;

socket.on('chunk_received', function(chunk) {
    // Convert the incoming chunk (a Node.js Buffer) to an ArrayBuffer
    // before handing it to the Web Audio API.
    audioContext.decodeAudioData(toArrayBuffer(chunk), function(buffer) {
        // Each decoded chunk gets its own one-shot source node.
        var source = audioContext.createBufferSource();
        source.buffer = buffer;
        source.connect(audioContext.destination);

        // Schedule this chunk to start where the previous one ends.
        source.start(startTime);
        startTime += buffer.duration;
    });
});

Any advice or insight into how best to 'update' Web Audio API playback with new audio data would be greatly appreciated.

asked Nov 21 '13 by Jonathan Byrne

3 Answers

Currently, decodeAudioData() requires complete files and cannot provide chunk-based decoding on incomplete files. The next version of the Web Audio API should provide this feature: https://github.com/WebAudio/web-audio-api/issues/337

Meanwhile, I've begun writing examples of decoding audio in chunks until the new API version is available.

https://github.com/AnthumChris/fetch-stream-audio
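
In the meantime, one workaround is to read the response as a stream and decode each piece on its own. Below is a minimal sketch of that idea (mine, not taken from the repository above); it assumes every chunk the reader hands back is independently decodable, i.e. aligned on MP3 frame boundaries, which is not guaranteed for arbitrary network chunks:

var audioCtx = new AudioContext();
var startTime = 0;

async function playStream(url) {
  const response = await fetch(url);
  const reader = response.body.getReader();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // decodeAudioData wants an ArrayBuffer it can own; copy the chunk first.
    const audioBuffer = await audioCtx.decodeAudioData(value.slice().buffer);

    const source = audioCtx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioCtx.destination);

    // Schedule chunks back to back, never in the past.
    startTime = Math.max(startTime, audioCtx.currentTime);
    source.start(startTime);
    startTime += audioBuffer.duration;
  }
}

Real network chunks rarely fall on frame boundaries, so a robust implementation has to buffer and re-frame the bytes before decoding.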

answered Nov 18 '22 by AnthumChris


No, you can't reuse an AudioBufferSourceNode, and you can't push onto an AudioBuffer. Their lengths are immutable.

This article (http://www.html5rocks.com/en/tutorials/audio/scheduling/) has some good information about scheduling with the Web Audio API. But you're on the right track.
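
To illustrate both points with a small sketch (not from the original answer): a source node can be started only once, and an AudioBuffer's length is fixed when it is created, so every new chunk needs its own buffer and its own source node.

var ctx = new AudioContext();

// Length is fixed at creation: 1 second of stereo silence.
var buffer = ctx.createBuffer(2, ctx.sampleRate, ctx.sampleRate);

var source = ctx.createBufferSource();
source.buffer = buffer;
source.connect(ctx.destination);

source.start(0);
// source.start(1); // throws InvalidStateError - a source node plays only once

// There is no append; to play more audio, create another buffer and node
// and schedule it right after the first one.
var next = ctx.createBufferSource();
next.buffer = ctx.createBuffer(2, ctx.sampleRate, ctx.sampleRate);
next.connect(ctx.destination);
next.start(ctx.currentTime + buffer.duration);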

answered Nov 18 '22 by Kevin Ennis


I see at least two possible approaches.

  1. Set up a ScriptProcessorNode that feeds a queue of received and decoded data into Web Audio's realtime output (see the sketch below).

  2. Exploit the loop property of an AudioBufferSourceNode, updating the AudioBuffer's contents as the playback time advances.

Both approaches are implemented in https://github.com/audio-lab/web-audio-stream. You can technically use that to feed received data to Web Audio.
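
For the first approach, here is a minimal sketch of the idea (mine, not taken from that repository). It assumes decoded PCM chunks are pushed onto sampleQueue as Float32Arrays when they arrive, and it outputs mono for brevity; ScriptProcessorNode is deprecated in favour of AudioWorklet but still works for this purpose.

var ctx = new AudioContext();
var sampleQueue = []; // Float32Arrays of decoded samples, pushed as chunks arrive

var processor = ctx.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = function (event) {
  var output = event.outputBuffer.getChannelData(0);
  var offset = 0;

  // Pull as much queued audio as fits into this 4096-sample render quantum.
  while (offset < output.length && sampleQueue.length > 0) {
    var chunk = sampleQueue[0];
    var toCopy = Math.min(chunk.length, output.length - offset);
    output.set(chunk.subarray(0, toCopy), offset);
    offset += toCopy;

    if (toCopy === chunk.length) {
      sampleQueue.shift();                      // chunk fully consumed
    } else {
      sampleQueue[0] = chunk.subarray(toCopy);  // keep the remainder for next time
    }
  }
  // Anything left unfilled stays silent until more data arrives.
};

processor.connect(ctx.destination);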

answered Nov 18 '22 by dy_