I am using the HTML5 Web Audio API to analyse a song and create markers when the average sound frequency drops below a certain value. Using the existing AudioNode infrastructure I managed to do this, but the sound is analysed only while the song is actually playing.
What I want, however, is to analyse the song in advance, so I can extract the silence markers and turn them into CUE buttons the user can use to jump through the song.
Obviously, relying on playing the whole song first just to analyse it would be very slow, especially if the song is something like a 50-minute podcast. I tried speeding up playbackRate to 10x, but that doesn't help.
I suppose the solution lies in skipping the Web Audio API and analysing the raw ArrayBuffer, but I don't really know where to start.
Suggestions? Ideas?
I have been able to find a slide in a presentation that describes exactly this: here
Normal use of the API is to process audio in real-time. Instead, we can pre-process the audio through the entire system and get the result:
The only problem is that my understanding of the audio API is too basic to see what the 'trick' in the code sample is:
// The "trick": passing (channels, length in sample-frames, sampleRate) to the
// constructor creates an *offline* context that renders as fast as possible
// instead of playing through the speakers.
var sampleRate = 44100.0;
var length = 20; // seconds
var ctx = new webkitAudioContext(2, sampleRate * length, sampleRate);

// Fired when offline rendering finishes; the whole processed signal is
// available at once in e.renderedBuffer.
ctx.oncomplete = function(e) {
    var resultAudioBuffer = e.renderedBuffer;
    ...
};

function convolveAudio(audioBuffer, audioBuffer2) {
    var source = ctx.createBufferSource();
    var convolver = ctx.createConvolver();

    source.buffer = audioBuffer;
    convolver.buffer = audioBuffer2;

    // source -> convolver -> destination
    source.connect(convolver);
    convolver.connect(ctx.destination);

    source.noteOn(0);     // old-spec name; start(0) in the current API
    ctx.startRendering(); // kicks off faster-than-real-time rendering
}
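For reference, the same trick written against the current, unprefixed API would look roughly like the sketch below. I'm assuming a recent browser here: OfflineAudioContext replaces the three-argument webkitAudioContext constructor, start() replaces noteOn(), and startRendering() returns a promise. Treat this as an illustration of the idea, not a drop-in implementation:

// Sketch of the offline-rendering trick with the modern API (assumptions noted above).
var sampleRate = 44100;
var lengthSeconds = 20;

var offlineCtx = new OfflineAudioContext(2, sampleRate * lengthSeconds, sampleRate);

function renderConvolution(audioBuffer, impulseBuffer) {
    var source = offlineCtx.createBufferSource();
    var convolver = offlineCtx.createConvolver();

    source.buffer = audioBuffer;
    convolver.buffer = impulseBuffer;

    source.connect(convolver);
    convolver.connect(offlineCtx.destination);

    source.start(0); // start() replaces the old noteOn()

    // Rendering runs faster than real time and resolves with the full result.
    return offlineCtx.startRendering().then(function (renderedBuffer) {
        // renderedBuffer is an AudioBuffer holding the entire processed signal.
        return renderedBuffer;
    });
}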
But I thought it would be better to at least share this than to leave it alone entirely, even if it isn't exactly the answer I was hoping to give.
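That said, getting back to the original goal of finding quiet sections without playing the file: one approach that should work is to skip rendering altogether, decode the file with decodeAudioData, and scan the resulting PCM samples window by window for a low RMS level. This is only a sketch; the window size, threshold, and the fetch/decode plumbing at the end are placeholder assumptions, not anything taken from the slides:

// Sketch: find silence markers by scanning decoded PCM samples directly.
// windowSize and threshold are made-up values; tune them for real material.
function findSilenceMarkers(arrayBuffer, audioCtx) {
    return audioCtx.decodeAudioData(arrayBuffer).then(function (buffer) {
        var data = buffer.getChannelData(0);                    // one channel is enough for detection
        var windowSize = Math.floor(buffer.sampleRate * 0.05);  // 50 ms windows
        var threshold = 0.01;                                   // RMS below this counts as "silent"
        var markers = [];
        var inSilence = false;

        for (var i = 0; i < data.length; i += windowSize) {
            var end = Math.min(i + windowSize, data.length);
            var sum = 0;
            for (var j = i; j < end; j++) {
                sum += data[j] * data[j];
            }
            var rms = Math.sqrt(sum / (end - i));

            if (rms < threshold && !inSilence) {
                inSilence = true;
                markers.push(i / buffer.sampleRate); // time in seconds where a silent stretch starts
            } else if (rms >= threshold) {
                inSilence = false;
            }
        }
        return markers; // offsets in seconds, usable as CUE points
    });
}

// Hypothetical usage: fetch the file, then hand the ArrayBuffer over.
// fetch('podcast.mp3')
//     .then(function (r) { return r.arrayBuffer(); })
//     .then(function (ab) { return findSilenceMarkers(ab, new AudioContext()); })
//     .then(function (markers) { console.log(markers); });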