I am trying to save the output from the Web Audio API for future use. So far, I think getting the PCM data and saving it as a file will meet my expectations. I am wondering if the Web Audio API (or Mozilla's Audio Data API) already supports saving the output stream, and if not, how I can get the PCM data from the output stream.
The latest Web Audio API draft introduced the OfflineAudioContext exactly for this purpose. You use it exactly the same way as a regular AudioContext, but with an additional startRendering() method to trigger offline rendering, as well as an oncomplete callback so that you can act upon finishing rendering.
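A minimal sketch of the offline approach might look like the following. This is browser-only code; the oscillator graph, durations, and function names here are placeholders of my own, not part of the original answer:

```javascript
// Sketch: render audio offline instead of playing it out loud, then read
// the raw PCM samples. OfflineAudioContext only exists in browsers; the
// plain oscillator below stands in for whatever graph you are rendering.
function frames(sampleRate, seconds) {
  return Math.round(sampleRate * seconds); // buffer length in sample frames
}

function renderOffline() {
  const sampleRate = 44100;
  const ctx = new OfflineAudioContext(1, frames(sampleRate, 2), sampleRate);

  const osc = ctx.createOscillator(); // placeholder source node
  osc.connect(ctx.destination);
  osc.start(0);

  // Older drafts used ctx.oncomplete + ctx.startRendering(); current
  // implementations also return a Promise from startRendering().
  return ctx.startRendering().then(function (rendered) {
    return rendered.getChannelData(0); // Float32Array of PCM data
  });
}
```

The rendered AudioBuffer's channel data is the PCM you were after; from there it can be serialized and saved however you like.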
There isn't a good sense of the requirements here beyond wanting to capture web audio in some programmatic way. The presumption is that you want to do this from JavaScript executing on the page currently being browsed, but even that isn't entirely clear.
As Incognito points out, you can do this in Chrome by using a callback hanging off decodeAudioData(). But this may be overly complicated for your uses if you're simply trying to capture, for example, the output of a single web stream and decode it into PCM for use in your sound tools of choice.
Another strategy you might consider, for cases when the media URL is obscured or otherwise difficult to decode using your current tools, is capture from your underlying sound card. This gives you the decoding for free, at the potential expense of a lower sampling rate if (and only if) your sound card isn't able to sample the stream effectively.
Since you want PCM in the first place, you're already committed to a digital encoding of the signal anyway. Obviously, only do this if you have the legal right to use the files being sampled.
Regardless of the route you choose, best of luck to you. Be it programmatic stream dissection or spot sampling, you should now have more than enough information to proceed.
Edit: Based on additional information from the OP, this seems like the needed solution (merged from here and here, using Node.js' implementation of fs):
var fs = require('fs'); // Node.js filesystem module

function saveAudio(data, saveLocation) {
  var context = new (window.AudioContext || window.webkitAudioContext)();

  // An AudioBuffer can't be handed to fs.writeFile directly; serialize
  // its first channel (a Float32Array) into a Node Buffer first.
  function writeToDisk(buffer) {
    fs.writeFile(saveLocation, Buffer.from(buffer.getChannelData(0).buffer), function (err) {
      if (err) throw err;
      console.log('It\'s saved!');
    });
  }

  if (context.decodeAudioData) {
    context.decodeAudioData(data, writeToDisk, function (e) {
      console.log(e); // decoding failed
    });
  } else {
    // Legacy synchronous API for older WebKit builds.
    writeToDisk(context.createBuffer(data, false /*mixToMono*/));
  }
}
(Warning: untested code. If this doesn't work, edits are welcome.)
This effectively spools out decodeAudioData from the Web Audio API, decodes PCM from the supplied data, then attempts to save it to the target saveLocation. Simple enough, really.
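One wrinkle worth noting: the decoded samples come back as 32-bit floats in the range -1..1, while most raw/.wav tooling expects 16-bit signed integers. A small conversion helper might look like this (the function name and layout are my own illustration, not part of any API):

```javascript
// Sketch: convert Float32 PCM samples (range -1..1) into 16-bit signed
// integers. Pure function, so it runs in Node as well as the browser.
function floatTo16BitPCM(float32Samples) {
  const out = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    // Clamp, then scale: the negative range is 32768 wide, the positive 32767.
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// In Node, the result can then be handed to fs.writeFile as a Buffer:
//   fs.writeFile(saveLocation, Buffer.from(pcm.buffer), cb);
const pcm = floatTo16BitPCM(new Float32Array([0, 0.5, -1, 1]));
console.log(Array.from(pcm)); // [0, 16383, -32768, 32767]
```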
Chrome should support it (or at the least, mostly support this new feature): decodeAudioData(). When decodeAudioData() is finished, it calls a callback function which provides the decoded PCM audio data as an AudioBuffer. It's nearly identical to the XHR2 way of doing things, so you'll likely want to make an abstraction layer for it.
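The XHR2 side of that pairing might look like the following. This is a browser-only sketch; the function name and callback shape are placeholders of my own:

```javascript
// Sketch: fetch encoded audio as an ArrayBuffer via XHR2, then hand it
// to decodeAudioData. Browser-only; url and onDecoded are placeholders.
function loadAndDecode(url, context, onDecoded) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.responseType = 'arraybuffer'; // the XHR2 part: binary response
  xhr.onload = function () {
    context.decodeAudioData(
      xhr.response,
      onDecoded,                       // receives the decoded AudioBuffer
      function (e) { console.log(e); } // decoding failed
    );
  };
  xhr.send();
}
```

An abstraction layer would wrap this so callers only ever see a URL going in and an AudioBuffer coming out.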
Note: I haven't tested that it works, but I can only find one bug in Chromium regarding this, indicating it works but fails for some files.