First I'll describe my problem: I'm creating an automated playlist from random songs. Some of the songs have 10-15 seconds of silence at the end, and what I'm trying to achieve is to detect from the analyser when a song has been silent for 5 seconds and act on that.
So far I've got this:
var context, analyser, source, audio;
context = new (window.AudioContext || window.webkitAudioContext)();
analyser = context.createAnalyser();
audio = new Audio();
source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
var playNext = function() {
  var pickedSong;
  // chooses a song from an API with several
  // thousand songs and stores it in pickedSong
  audio.src = pickedSong;
  audio.play();
};
audio.addEventListener('ended', playNext);
playNext();
I know the answer is somewhere in the analyser, but I haven't found any coherence in the data returned from it.
I can do something like this:
var frequencies = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(frequencies);
and the frequencies variable would contain 2048 entries, each holding a number that looks random to me (-48.11, -55, -67, etc.). Do these numbers mean anything related to the perceived sound being played? How can I detect if the level is low enough that people would think nothing is playing?
For the detection I mainly want something like this:
var isInSilence = function(){
  // pseudocode: "audible" is the part I don't know how to compute
  return !audible;
}
var tries = 0;
var checker = function() {
  tries = isInSilence() ? tries + 1 : 0;
  if(tries >= 5) {
    tries = 0; // reset so we don't re-skip while the next track starts
    playNext();
  }
  setTimeout(checker, 1000);
}
checker();
The only missing part is detecting if the song is currently silent or not, any help would be appreciated.
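For reference, the values returned by getFloatFrequencyData are decibel levels, one per frequency bin: roughly, 0 dB is full scale, more negative is quieter, and true digital silence comes back as -Infinity. So one analyser-based sketch for the missing piece could look like this, where the -70 dB cutoff is an assumption you would tune by ear:
var frequencies = new Float32Array(analyser.frequencyBinCount);
var isInSilence = function() {
  analyser.getFloatFrequencyData(frequencies);
  // find the loudest bin; if even that is under the cutoff, call it silence
  var loudest = -Infinity;
  for (var i = 0; i < frequencies.length; i++) {
    if (frequencies[i] > loudest) loudest = frequencies[i];
  }
  return loudest < -70;
};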
Edit:
Based on William's answer, I managed to solve it this way:
var context, compressor, gain, source, audio;
context = new (window.AudioContext || window.webkitAudioContext)();
compressor = context.createDynamicsCompressor();
gain = context.createGain();
audio = new Audio();
source = context.createMediaElementSource(audio);
// Connecting the source directly to the destination (audible path)
source.connect(context.destination);
// Connecting the source to the compressor -> muted gain (detection path)
source.connect(compressor);
compressor.connect(gain);
gain.connect(context.destination);
gain.gain.value = 0; // muting the gain
compressor.threshold.value = -100;
var playNext = function() {
  var pickedSong;
  // chooses a song from an API with several
  // thousand songs and stores it in pickedSong
  audio.src = pickedSong;
  audio.play();
};
audio.addEventListener('ended', playNext);
playNext();
var isInSilence = function(){
  // reduction is the dB of attenuation being applied; near 0 means silence.
  // Current spec: a plain float; older browsers: an AudioParam (.value).
  var r = compressor.reduction;
  return (typeof r === 'number' ? r : r.value) >= -50;
}
var tries = 0;
var checker = function() {
  tries = isInSilence() ? tries + 1 : 0;
  if(tries >= 5) {
    tries = 0; // reset so we don't re-skip while the next track starts
    playNext();
  }
  setTimeout(checker, 1000);
}
checker();
An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context.
The createAnalyser() method of the BaseAudioContext interface creates an AnalyserNode, which can be used to expose audio time and frequency data and create data visualisations. Note: the AnalyserNode() constructor is the recommended way to create an AnalyserNode; see Creating an AudioNode.
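For illustration, here is a minimal sketch of the constructor form (the option values shown are the spec defaults, spelled out only for clarity; the factory form used in the question works the same way):
var context = new AudioContext(); // constructor form assumes a modern browser
var analyser = new AnalyserNode(context, {
  fftSize: 2048,             // samples per FFT window
  smoothingTimeConstant: 0.8 // how much successive frames are averaged
});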
This is a possible solution using a different approach - the compressor node. It's a brief description, but it should be enough to let you fill in the details for your use case (a sketch putting the steps together follows the list):
1. Create a compressor node and connect your input source to it.
2. Connect the compressor to a gain node and mute the gain node (set its gain to zero). Connect the gain node to audioContext.destination.
3. Take your input source and connect it directly to audioContext.destination.
4. Set the compressor property values so that the signal triggers the reduction value.
5. Wrap a read of compressor.reduction.value in a setInterval or requestAnimationFrame to check for changes.
6. Code the logic needed to do whatever you want when this value changes (or doesn't change).
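Putting those steps together, a minimal sketch might look like the following. The source URL, the -50 dB cutoff, and the 1-second polling interval are all placeholders to tune for your material:
var context = new (window.AudioContext || window.webkitAudioContext)();
var audio = new Audio('song.mp3'); // hypothetical source
var source = context.createMediaElementSource(audio);
var compressor = context.createDynamicsCompressor();
var muteGain = context.createGain();

// Audible path: source straight to the speakers
source.connect(context.destination);

// Detection path: source -> compressor -> muted gain -> destination
source.connect(compressor);
compressor.connect(muteGain);
muteGain.connect(context.destination);
muteGain.gain.value = 0;           // inaudible, but still processed
compressor.threshold.value = -100; // compress almost any signal

audio.play();
setInterval(function() {
  // Current spec exposes reduction as a float; older browsers used an AudioParam
  var r = compressor.reduction;
  var reduction = typeof r === 'number' ? r : r.value;
  // little or no gain reduction means the source is effectively silent
  console.log(reduction >= -50 ? 'silent' : 'audible');
}, 1000);
The point of the muted branch is that the compressor still "hears" the signal, so its reduction value tracks the incoming level without changing what the listener hears.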