
Sound scheduling issue when playing Opus from a WebSocket

I'm trying to use the library https://github.com/AnthumChris/opus-stream-decoder/

I have a stream of Opus-encoded sound (2 ch, 48 kHz) from a high-quality microphone (though for testing I'm playing music through it on a loop). I know the stream works because I can hear it if I use:

websocat --binary ws://third-i.local/api/sound - | mpv -

(This opens the WebSocket and pipes its output to mpv, a media player.)

But when I play it in the browser, all I hear is a tiny fragment of the sound every second or so. Each fragment itself sounds fine (I believe it is a very small piece of the music).

Here is the TypeScript code I wrote to listen in the browser:

let audioWorker: any;
let exampleSocket;
let opusDecoder: any;
let audioCtx: any;
let startTime = 0;
let counter = 0;

function startAudio() {
  /*
  const host = document.location.hostname;
  const scheme = document.location.protocol.startsWith("https") ? "wss" : "ws";
  const uri = `${scheme}://${host}/api/sound`;
  */
  const uri = "ws://third-i.local/api/sound";
  audioCtx = new AudioContext();
  startTime = 100 / 1000;
  exampleSocket = new WebSocket(uri);
  exampleSocket.binaryType = "arraybuffer";
  opusDecoder = new OpusStreamDecoder({onDecode});
  exampleSocket.onmessage = (event) => opusDecoder.ready.then(
    () => opusDecoder.decode(new Uint8Array(event.data))
  );
  exampleSocket.onclose = () => console.log("socket is closed!!");
}

function onDecode({left, right, samplesDecoded, sampleRate}: any) {
  const source = audioCtx.createBufferSource();
  const buffer = audioCtx.createBuffer(2, samplesDecoded, sampleRate);
  buffer.copyToChannel(left, 0);
  buffer.copyToChannel(right, 1);
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(startTime);
  startTime += buffer.duration;
}

https://github.com/BigBoySystems/third-i-frontend/blob/play-audio/src/App.tsx#L54-L88

asked Nov 03 '20 by Yozhgoor

1 Answer

The scheduling problem comes from creating the AudioContext at the same moment you create the WebSocket, which adds the connection time to the AudioContext's schedule.

In other words, an AudioContext's clock starts running the instant the context is created. Since you create it when the WebSocket has only started connecting, your schedule is off by however long it takes the WebSocket to connect to the upstream and receive the first bytes. By the time the first buffer is decoded, startTime is already in the past, and the Web Audio API plays buffers scheduled in the past immediately.
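A minimal sketch of the timing problem (the 2000 ms delay here is a made-up stand-in for the WebSocket connection time, not something measured from the original code):

const audioCtx = new AudioContext();
// audioCtx.currentTime starts at 0 and advances in real time from here on.
const startTime = 100 / 1000; // 0.1 s, as in the code above

setTimeout(() => {
  // ~2 s later the first decoded chunk arrives; currentTime is now ~2.0.
  const source = audioCtx.createBufferSource();
  source.buffer = audioCtx.createBuffer(2, 4800, 48000); // 100 ms placeholder buffer
  source.connect(audioCtx.destination);
  source.start(startTime); // 0.1 is already in the past, so playback starts immediately
}, 2000);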

Here is your code, fixed:

let audioStreamSocket;
let opusDecoder: any;
let audioCtx: AudioContext;
let startTime: number;

function startAudio() {
  const host = document.location.hostname;
  const scheme = document.location.protocol.startsWith("https") ? "wss" : "ws";
  const uri = `${scheme}://${host}/api/sound`;
  audioStreamSocket = new WebSocket(uri);
  audioStreamSocket.binaryType = "arraybuffer";
  opusDecoder = new OpusStreamDecoder({ onDecode });
  audioStreamSocket.onmessage = (event) =>
    opusDecoder.ready.then(() => opusDecoder.decode(new Uint8Array(event.data)));
}

function onDecode({ left, right, samplesDecoded, sampleRate }: any) {
  if (audioCtx === undefined) {
    // Create the AudioContext only after the first data has been received
    // and successfully decoded, so its clock starts at the right moment.
    console.log("Audio stream connected");
    audioCtx = new AudioContext();
    startTime = 0.1; // 100 ms of headroom before the first buffer plays
  }
  const source = audioCtx.createBufferSource();
  const buffer = audioCtx.createBuffer(2, samplesDecoded, sampleRate);
  buffer.copyToChannel(left, 0);
  buffer.copyToChannel(right, 1);
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(startTime);      // schedule this chunk at the tail of the queue...
  startTime += buffer.duration; // ...and move the tail past it for the next chunk
}
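One refinement worth considering (my addition, not part of the fix above): if the network stalls, startTime can fall behind audioCtx.currentTime, and buffers scheduled in the past all start immediately and overlap. A small guard inside onDecode, just before source.start(startTime), could re-anchor the schedule:

if (startTime < audioCtx.currentTime) {
  // We fell behind (e.g. after a network stall); re-anchor slightly ahead
  // of the context clock instead of scheduling in the past.
  startTime = audioCtx.currentTime + 0.1;
}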
answered Nov 07 '22 by Cecile