I am trying to get audio capture from the microphone working in Safari on iOS 11, now that support was recently added. However, the onaudioprocess callback is never called. Here's an example page:
    <html>
    <body>
      <button onclick="doIt()">DoIt</button>
      <ul id="logMessages"></ul>
      <script>
        function debug(msg) {
          if (typeof msg !== 'undefined') {
            var logList = document.getElementById('logMessages');
            var newLogItem = document.createElement('li');
            if (typeof msg === 'function') {
              msg = Function.prototype.toString.call(msg);
            } else if (typeof msg !== 'string') {
              msg = JSON.stringify(msg);
            }
            var newLogText = document.createTextNode(msg);
            newLogItem.appendChild(newLogText);
            logList.appendChild(newLogItem);
          }
        }

        function doIt() {
          var handleSuccess = function (stream) {
            var context = new AudioContext();
            var input = context.createMediaStreamSource(stream);
            var processor = context.createScriptProcessor(1024, 1, 1);
            input.connect(processor);
            processor.connect(context.destination);
            processor.onaudioprocess = function (e) {
              // Do something with the data, e.g. convert this to WAV
              debug(e.inputBuffer);
            };
          };
          navigator.mediaDevices.getUserMedia({audio: true, video: false})
            .then(handleSuccess);
        }
      </script>
    </body>
    </html>
On most platforms, you will see items being added to the messages list as the onaudioprocess callback is called. However, on iOS, this callback is never called. Is there something else that I should do to try to get it called on iOS 11 with Safari?
There are two problems. The main one is that Safari on iOS 11 seems to automatically suspend new AudioContext instances that aren't created in response to a tap. You can resume() them, but only in response to a tap.
(Update: Chrome mobile also does this, and Chrome desktop will have the same limitation starting in version 70 / December 2018.)
So, you have to either create it before you get the MediaStream, or else get the user to tap again later.
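If you do end up with a context that was created too early and is now suspended, a later tap can revive it. A minimal sketch of that, assuming context is the suspended AudioContext and resumeButton is a hypothetical button element:

    // Sketch: resume a suspended AudioContext from a user tap.
    // `resumeButton` is a hypothetical <button> element.
    resumeButton.addEventListener('click', function () {
      if (context.state === 'suspended') {
        context.resume().then(function () {
          console.log('AudioContext state:', context.state); // "running"
        });
      }
    });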
The other issue with your code is that AudioContext is prefixed as webkitAudioContext in Safari.
Here's a working version:
    <html>
    <body>
      <button onclick="beginAudioCapture()">Begin Audio Capture</button>
      <script>
        function beginAudioCapture() {
          var AudioContext = window.AudioContext || window.webkitAudioContext;
          var context = new AudioContext();
          var processor = context.createScriptProcessor(1024, 1, 1);
          processor.connect(context.destination);

          var handleSuccess = function (stream) {
            var input = context.createMediaStreamSource(stream);
            input.connect(processor);

            var receivedAudio = false;
            processor.onaudioprocess = function (e) {
              // This will be called multiple times per second.
              // The audio data will be in e.inputBuffer.
              if (!receivedAudio) {
                receivedAudio = true;
                console.log('got audio', e);
              }
            };
          };

          navigator.mediaDevices.getUserMedia({audio: true, video: false})
            .then(handleSuccess);
        }
      </script>
    </body>
    </html>
(You can set the onaudioprocess callback sooner, but then you get empty buffers until the user approves microphone access.)
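If you do set it early, one way to detect when real audio starts arriving is to check whether the buffer is still all zeros. A small sketch of that check inside the callback:

    processor.onaudioprocess = function (e) {
      var samples = e.inputBuffer.getChannelData(0);
      // Until the user grants microphone access, every sample is 0.
      var hasAudio = false;
      for (var i = 0; i < samples.length; i++) {
        if (samples[i] !== 0) {
          hasAudio = true;
          break;
        }
      }
      if (hasAudio) {
        // Real audio is arriving; it's safe to start recording here.
      }
    };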
Oh, and one other iOS bug to watch out for: Safari on the iPod touch (as of iOS 12.1.1) reports that it does not have a microphone (it does). So, getUserMedia will incorrectly reject with Error: Invalid constraint if you ask for audio there.
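If you need to handle that case gracefully, catch the rejection from getUserMedia instead of letting it go unhandled. A minimal sketch:

    navigator.mediaDevices.getUserMedia({audio: true, video: false})
      .then(handleSuccess)
      .catch(function (err) {
        // On the affected iPod touch builds this fires even though a mic exists.
        console.error('getUserMedia failed:', err.name, err.message);
      });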
FYI: I maintain the microphone-stream package on npm that does this for you and provides the audio in a Node.js-style ReadableStream. It includes this fix, if you or anyone else would prefer to use that over the raw code.
Tried it on iOS 11.0.1, and unfortunately this problem still isn't fixed.
As a workaround, I wonder if it makes sense to replace the ScriptProcessor with a function that takes the stream data from a buffer and then processes it every x milliseconds. But that's a big change to the functionality.
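For what it's worth, that polling idea could look something like the sketch below, using an AnalyserNode read on a timer. This is only an illustration, not a drop-in replacement: the analyser exposes only the most recent samples, so a timer that fires too slowly drops audio and one that fires too quickly sees duplicates.

    // Sketch: poll an AnalyserNode on a timer instead of using a ScriptProcessor.
    // `context` and `input` are assumed to exist as in the working example above.
    var analyser = context.createAnalyser();
    analyser.fftSize = 2048; // number of time-domain samples per read
    input.connect(analyser);

    var samples = new Uint8Array(analyser.fftSize);
    var intervalMs = analyser.fftSize / context.sampleRate * 1000; // ~one buffer's duration
    setInterval(function () {
      // Copies the most recent time-domain samples (bytes 0-255, centered at 128).
      analyser.getByteTimeDomainData(samples);
      // ...process `samples` here, e.g. convert to floats and append to a recording.
    }, intervalMs);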