
How to stream audio chunks coming from a WebSocket using the Web Audio API?

I am streaming audio data in chunks from the server through a WebSocket:

var fs = require('fs');

ws.on('message', function incoming(message) {
    // stream the file in 128 KB chunks
    var readStream = fs.createReadStream("angular/data/google.mp3",
        {
            'flags': 'r',
            'highWaterMark': 128 * 1024
        }
    );

    readStream.on('data', function(data) {
        ws.send(data);
    });

    readStream.on('end', function() {
        // signal the client that the stream is complete
        ws.send('end');
    });

    readStream.on('error', function(err) {
        console.log(err);
    });
});

On the client side:

var chunks = [];
var context = new AudioContext();
var soundSource;

var ws = new WebSocket(url);
ws.binaryType = "arraybuffer";

ws.onmessage = function(message) {
    if (message.data instanceof ArrayBuffer) {
        // binary frame: collect the audio chunk
        chunks.push(message.data);
    } else {
        // the 'end' text frame: decode and play what was collected
        createSoundSource(chunks);
    }
};

function createSoundSource(audioData) {
    soundSource = context.createBufferSource();

    for (var i = 0; i < audioData.length; i++) {
        context.decodeAudioData(audioData[i], function(soundBuffer) {
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
        });
    }
}

But setting the buffer (soundSource.buffer = soundBuffer;) a second time causes an error:

Uncaught DOMException: Failed to set the 'buffer' property on 'AudioBufferSourceNode': Cannot set buffer after it has been already been set

Any advice or insight into how best to update Web Audio API playback with new audio data would be greatly appreciated.

— Aman Gupta, asked Dec 27 '16




2 Answers

You cannot reset the buffer of an AudioBufferSourceNode once it has been set: these nodes are fire-and-forget. Each time you want to play a different buffer, you have to create a new AudioBufferSourceNode. Those nodes are very lightweight, so don't worry about performance even when creating tons of them.

To account for this, modify your createSoundSource function to simply create a new AudioBufferSourceNode for each chunk inside the loop body, like this:

function createSoundSource(audioData) {
    for (var i = 0; i < audioData.length; i++) {
        context.decodeAudioData(audioData[i], function(soundBuffer) {
            // a fresh source node per decoded chunk
            var soundSource = context.createBufferSource();
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
        });
    }
}

I tried to keep the code style as close to the original as possible, but it's 2020, and a function taking advantage of modern features (Promise-based decodeAudioData, async/await) could actually look like this:

async function createSoundSource(audioData) {
  await Promise.all(
    audioData.map(async (chunk) => {
      const soundBuffer = await context.decodeAudioData(chunk);
      const soundSource = context.createBufferSource();
      soundSource.buffer = soundBuffer;
      soundSource.connect(context.destination);
      soundSource.start(0);
    })
  );
}

If you want to stop the old nodes as soon as new data arrives (it looks like that's what you were trying to do by resetting .buffer, but I'm not sure), you'll have to store references to them and call stop() and disconnect() on all of them when it's time.
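
For illustration, a minimal sketch of that bookkeeping, assuming the same context as in the snippets above; the activeSources array and the helper names are mine, not part of the code above:

const activeSources = [];

function stopActiveSources() {
  // stop and detach every node started so far
  for (const source of activeSources) {
    source.stop();
    source.disconnect();
  }
  activeSources.length = 0;
}

function playBuffer(soundBuffer) {
  const soundSource = context.createBufferSource();
  soundSource.buffer = soundBuffer;
  soundSource.connect(context.destination);
  soundSource.start(0);
  activeSources.push(soundSource);
}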

— 1valdis, answered Oct 03 '22


Not positive, but I think you have to handle your streaming WebSocket buffer a bit differently. Maybe the websocket-streaming-audio package's source code can give you better clues on how to handle your scenario.
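
For example, one common pattern for streamed chunks (my own sketch, not taken from that package) is to keep a running schedule time on the AudioContext so each decoded chunk starts exactly when the previous one ends, instead of every chunk starting at time 0:

let scheduledTime = 0;

function scheduleChunk(soundBuffer) {
  const soundSource = context.createBufferSource();
  soundSource.buffer = soundBuffer;
  soundSource.connect(context.destination);

  // never schedule in the past; catch up to "now" if we fell behind
  scheduledTime = Math.max(scheduledTime, context.currentTime);
  soundSource.start(scheduledTime);

  // the next chunk begins where this one ends
  scheduledTime += soundBuffer.duration;
}

Note that this only plays back seamlessly if each chunk decodes to a complete, gap-free segment of audio; slicing compressed formats like MP3 at arbitrary byte boundaries can still produce audible clicks between chunks.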

— Steve -Cutter- Blades, answered Oct 04 '22