Web Audio API Analyser Node Not Working With Microphone Input

The bug that prevented getting microphone input in Chrome Canary (http://code.google.com/p/chromium/issues/detail?id=112367) is now fixed, and that part does seem to be working: I can assign the mic input to an audio element and hear the results through the speakers.

But I'd like to connect an analyser node in order to do FFT. The analyser node works fine if I set the audio source to a local file. The problem is that when it is connected to the mic audio stream, the analyser node just returns the base value as if it had no audio stream at all (it's -100 over and over again, if you're curious).

Does anyone know what's up? Is it not implemented yet? Is this a Chrome bug? I'm running 26.0.1377.0 on Windows 7 with the getUserMedia flag enabled, and I'm serving the page from localhost via Python's SimpleHTTPServer so it can request permissions.
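(One thing worth double-checking when reproducing this: getUserMedia is still vendor-prefixed in Chrome, so I normalize it first with the usual shim before the code below runs:)

// Normalize the vendor-prefixed getUserMedia (Chrome exposes webkitGetUserMedia)
navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia;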

Code:

var aCtx = new webkitAudioContext();
var analyser = aCtx.createAnalyser();
var audio = document.getElementById('audio'); // the <audio id="audio"> element on the page
var onFailure = function(e) { console.log('getUserMedia failed:', e); };
if (navigator.getUserMedia) {
  navigator.getUserMedia({audio: true}, function(stream) {
    // audio.src = "stupid.wav"
    audio.src = window.URL.createObjectURL(stream);
  }, onFailure);
}
$('#audio').on("loadeddata", function(){
    // Route the audio element through the analyser to the speakers
    source = aCtx.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(aCtx.destination);
    process();
});

Again, if I set audio.src to the commented-out version, it works, but with the microphone it does not. process() contains:

var FFTData = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(FFTData);
console.log(FFTData[0]); // prints -100 repeatedly with the mic stream
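(For anyone wanting to verify it isn't just bin 0, averaging the whole array works as a quick diagnostic; this is just a sketch:)

// Diagnostic: average all bins to check whether the whole spectrum sits at the floor
var sum = 0;
for (var i = 0; i < FFTData.length; i++) sum += FFTData[i];
console.log('mean dB:', sum / FFTData.length); // stays at -100 when no signal reaches the analyser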

I've also tried using createMediaStreamSource and bypassing the audio element entirely (example 4 at https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/webrtc-integration.html). Also unsuccessful. :(

    if (navigator.getUserMedia) {
        navigator.getUserMedia({audio: true}, function(stream) {
            var microphone = context.createMediaStreamSource(stream);
            microphone.connect(analyser);
            analyser.connect(aCtx.destination);
            process();
        }, onFailure);
    }

I imagine it might be possible to write the MediaStream to a buffer and then use dsp.js or something to do the FFT, but I wanted to check here first before I go down that road.
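For what it's worth, here is roughly what I had in mind, assuming the microphone node from the snippet above and dsp.js's documented FFT API (older WebKit builds name the processor node createJavaScriptNode rather than createScriptProcessor) — just a sketch, not tested:

// Fallback sketch: tap raw samples with a ScriptProcessorNode and FFT them with dsp.js
var processor = aCtx.createScriptProcessor(2048, 1, 1); // bufferSize, input channels, output channels
var fft = new FFT(2048, aCtx.sampleRate);               // dsp.js FFT
processor.onaudioprocess = function(e) {
    var samples = e.inputBuffer.getChannelData(0); // Float32Array of raw PCM
    fft.forward(samples);
    console.log(fft.spectrum[0]); // magnitude of the lowest bin
};
microphone.connect(processor);
processor.connect(aCtx.destination); // keep the node alive in the graph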

Asked Jan 09 '13 by Newmu
1 Answer

It was a variable scoping issue. In the second example, I was defining the microphone locally and then trying to access its stream from the analyser in another function. I just made all of the Web Audio API nodes globals for peace of mind. Also, it takes a few seconds for the analyser node to start reporting values other than -100. Working code for those interested:

// Globals
var aCtx;
var analyser;
var microphone;
if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        aCtx = new webkitAudioContext();
        analyser = aCtx.createAnalyser();
        microphone = aCtx.createMediaStreamSource(stream);
        microphone.connect(analyser);
        // analyser.connect(aCtx.destination); // uncomment to hear the mic (feedback warning!)
        process();
    });
}
function process(){
    setInterval(function(){
        var FFTData = new Float32Array(analyser.frequencyBinCount);
        analyser.getFloatFrequencyData(FFTData);
        console.log(FFTData[0]); // starts reporting real dB values after a few seconds
    }, 10);
}

If you would like to hear the live audio, you can connect the analyser to the destination (speakers), as commented out above. Watch out for some lovely feedback, though!
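One way I can think of to keep the graph routed to the destination without the feedback is a muted gain node; a minimal sketch (older builds use createGainNode rather than createGain):

// Sketch: route through a zero-gain node so the graph reaches the destination silently
var mute = aCtx.createGain ? aCtx.createGain() : aCtx.createGainNode();
mute.gain.value = 0; // silence the output, no feedback
analyser.connect(mute);
mute.connect(aCtx.destination);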

Answered Nov 14 '22 by Newmu