I have an incoming live stream from a desktop application, delivered over a WebSocket connection to my web page. I'm converting the PCM stream to a Float32Array. When I start the track, I hear glitches, so I created a buffer array to store part of the stream before playing it. The problem is that once the buffer is created, I can't add any more chunks to the array.
Here I receive the data from the socket connection and store it in the array. I wait for 1000 chunks before I start the stream.
sock.onmessage = function(e) {
    var samples = e.data;
    obj = JSON.parse(samples);
    stream = stream.concat(obj);
    if (j == 1000) {
        console.log(stream);
        playPcm(stream);
    }
    console.log(j);
    j++;
}
This function handles the audio chunks:
function playPcm(data) {
    var audio = new Float32Array(data);
    var source = context.createBufferSource();
    var audioBuffer = context.createBuffer(1, audio.length, 44100);
    audioBuffer.getChannelData(0).set(audio);
    source.buffer = audioBuffer;
    source.connect(context.destination);
    source.start(AudioStart);
    AudioStart += audioBuffer.duration;
}
I read about the ScriptProcessorNode, but couldn't figure out what to do with it. Now I'm pretty much stuck, as I'm not very familiar with the Web Audio API.
Don't try to keep editing the same buffer: you can't modify a buffer that is already playing (it sometimes works in Chrome today, but that's a bug). Buffer up the first set of chunks into one AudioBuffer, as you do now, to give yourself some latency room. Then, as additional chunks come in, schedule each one as a separate AudioBuffer/AudioBufferSourceNode combination: keep track of the time the first buffer started playing and the running total of scheduled duration, and start each successive node at the moment the previous one ends.
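A minimal sketch of that scheduling idea, adapted to the question's `playPcm` function. The names `scheduledTime`, `scheduleChunk`, and `playChunk` are my own, not part of any API; the scheduling arithmetic is pulled into a pure helper so it can be reasoned about separately from the Web Audio objects:

```javascript
// Time (in AudioContext seconds) at which the next chunk should start.
var scheduledTime = 0;

// Pure helper: given the context's current time, the currently scheduled
// start time, and the new chunk's duration, return the actual start time
// and the updated schedule. If playback has fallen behind (an underrun),
// restart from "now" instead of scheduling in the past.
function scheduleChunk(currentTime, scheduledTime, duration) {
    var startTime = Math.max(scheduledTime, currentTime);
    return [startTime, startTime + duration];
}

// Wrap each incoming chunk in its own AudioBuffer/AudioBufferSourceNode
// pair and start it exactly where the previous chunk ends.
function playChunk(context, samples) { // samples: Float32Array
    var audioBuffer = context.createBuffer(1, samples.length, 44100);
    audioBuffer.getChannelData(0).set(samples);

    var source = context.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(context.destination);

    var r = scheduleChunk(context.currentTime, scheduledTime, audioBuffer.duration);
    source.start(r[0]);
    scheduledTime = r[1];
}
```

With this shape, `sock.onmessage` would call `playChunk` for every chunk after the initial pre-buffering, instead of concatenating everything into one ever-growing array.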