I am streaming audio data in chunks through a WebSocket from the server:
ws.on('message', function incoming(message) {
    // Stream the file back to the client in 128 KB chunks.
    var readStream = fs.createReadStream('angular/data/google.mp3', {
        flags: 'r',
        highWaterMark: 128 * 1024
    });
    readStream.on('data', function(data) {
        ws.send(data); // binary chunk
    });
    readStream.on('end', function() {
        ws.send('end'); // text marker telling the client the stream is done
    });
    readStream.on('error', function(err) {
        console.log(err);
    });
});
On the client side:
var chunks = [];
var context = new AudioContext();
var soundSource;

var ws = new WebSocket(url);
ws.binaryType = 'arraybuffer';

ws.onmessage = function(message) {
    if (message.data instanceof ArrayBuffer) {
        // Binary frame: collect the audio chunk.
        chunks.push(message.data);
    } else {
        // Text frame ('end'): all chunks have arrived, start playback.
        createSoundSource(chunks);
    }
};
function createSoundSource(audioData) {
    soundSource = context.createBufferSource();
    for (var i = 0; i < audioData.length; i++) {
        context.decodeAudioData(audioData[i], function(soundBuffer) {
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
        });
    }
}
But setting the buffer (soundSource.buffer = soundBuffer;) a second time causes an error:
Uncaught DOMException: Failed to set the 'buffer' property on 'AudioBufferSourceNode': Cannot set buffer after it has been already been set
Any advice or insight into how best to update Web Audio API playback with new audio data would be greatly appreciated.
You cannot reset the buffer on an AudioBufferSourceNode once it's been set; these nodes are fire-and-forget. Each time you want to play a different buffer, you have to create a new AudioBufferSourceNode to continue playback. Those nodes are very lightweight, so don't worry about performance even when creating tons of them.
To account for this, you can modify your createSoundSource function to simply create a new AudioBufferSourceNode for each chunk inside the loop body, like this:
function createSoundSource(audioData) {
    for (var i = 0; i < audioData.length; i++) {
        context.decodeAudioData(audioData[i], function(soundBuffer) {
            var soundSource = context.createBufferSource();
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
        });
    }
}
I tried to keep the code style as close to the original as possible, but it's 2020, and a function taking advantage of modern features could actually look like this:
async function createSoundSource(audioData) {
    await Promise.all(
        audioData.map(async (chunk) => {
            const soundBuffer = await context.decodeAudioData(chunk);
            const soundSource = context.createBufferSource();
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
        })
    );
}
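Note that this version relies on the promise-returning form of decodeAudioData. Every modern browser supports it, though if I remember correctly, older Safari versions only offered the callback form.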
If you want to stop the old nodes as soon as new data arrives (it looks like you wanted that by resetting the .buffer, but I'm not sure), you'll have to store them and call disconnect on all of them when it's time.
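For example, here's a rough sketch of that idea; the activeSources array is my own addition, not something from your code:

var activeSources = [];

function createSoundSource(audioData) {
    // Silence anything still playing before scheduling the new chunks.
    activeSources.forEach(function(node) {
        node.stop(0);
        node.disconnect();
    });
    activeSources = [];

    for (var i = 0; i < audioData.length; i++) {
        context.decodeAudioData(audioData[i], function(soundBuffer) {
            var soundSource = context.createBufferSource();
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
            activeSources.push(soundSource); // remember it so we can stop it later
        });
    }
}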
Not positive, but I think you have to handle your streaming WebSocket buffer a bit differently. Maybe the websocket-streaming-audio package source code can give you better clues on how to handle your scenario.
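For what it's worth, one common approach to gapless chunked playback (just a sketch of the general technique, not necessarily how that package does it) is to keep a running playhead time and schedule each decoded chunk to start exactly where the previous one ends:

var playhead = 0; // absolute AudioContext time at which the next chunk should start

function scheduleChunk(arrayBuffer) {
    context.decodeAudioData(arrayBuffer, function(soundBuffer) {
        var soundSource = context.createBufferSource();
        soundSource.buffer = soundBuffer;
        soundSource.connect(context.destination);
        // Never schedule in the past; leave a little headroom before the first chunk.
        playhead = Math.max(playhead, context.currentTime + 0.05);
        soundSource.start(playhead);
        playhead += soundBuffer.duration;
    });
}

Keep in mind that decodeAudioData callbacks can complete out of order, so in a real implementation you'd want to decode the chunks sequentially (e.g. by chaining promises) before scheduling them.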