I've been working on a product that uses WebRTC to exchange audio between a browser and a native client, the native side being implemented in C++. I've built the latest stable release of WebRTC (branch: branch-heads/65).
So far I'm able to get the two peers to connect, and audio is received and rendered correctly in the browser. However, the native client never seems to receive any data through its audio track sink, despite the Chrome debug tools suggesting that data is being sent from the browser to the native client.
The following code is definitely called, and the stream is added as expected.
void Conductor::OnAddStream(rtc::scoped_refptr<webrtc::MediaStreamInterface> stream)
{
    webrtc::AudioTrackVector atracks = stream->GetAudioTracks();
    for (auto track : atracks)
    {
        remote_audio.reset(new Native::AudioRenderer(this, track));
        track->set_enabled(true);
    }
}
// Audio renderer derived from webrtc::AudioTrackSinkInterface.
// In the audio renderer constructor, AddSink is called on the track.
AudioRenderer::AudioRenderer(AudioCallback* callback, webrtc::AudioTrackInterface* track)
    : track_(track), callback_(callback)
{
    // Can confirm this point is reached.
    track_->AddSink(this);
}

AudioRenderer::~AudioRenderer()
{
    track_->RemoveSink(this);
}

void AudioRenderer::OnData(const void* audio_data, int bits_per_sample, int sample_rate,
                           size_t number_of_channels, size_t number_of_frames)
{
    // This is never hit, despite the connection starting and streams being added.
    if (callback_ != nullptr)
    {
        callback_->OnAudioData(audio_data, bits_per_sample, sample_rate,
                               number_of_channels, number_of_frames);
    }
}
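For context, the declaration behind these definitions looks roughly like the sketch below. This is a reconstruction inferred from the snippets, not code from the project: the AudioCallback interface is assumed from the OnAudioData call, the Native namespace is omitted, and holding the track as a ref-counted pointer is my choice for the sketch.

// Assumed declaration for the renderer shown above.
#include "api/mediastreaminterface.h" // header path as of branch-heads/65

class AudioCallback
{
public:
    virtual ~AudioCallback() = default;
    virtual void OnAudioData(const void* audio_data, int bits_per_sample,
                             int sample_rate, size_t number_of_channels,
                             size_t number_of_frames) = 0;
};

class AudioRenderer : public webrtc::AudioTrackSinkInterface
{
public:
    AudioRenderer(AudioCallback* callback, webrtc::AudioTrackInterface* track);
    ~AudioRenderer() override;

    // webrtc::AudioTrackSinkInterface
    void OnData(const void* audio_data, int bits_per_sample, int sample_rate,
                size_t number_of_channels, size_t number_of_frames) override;

private:
    // Held as a scoped_refptr so the track outlives the sink registration.
    rtc::scoped_refptr<webrtc::AudioTrackInterface> track_;
    AudioCallback* callback_ = nullptr;
};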
I can also confirm that both the browser's offer and the native client's answer include the option to receive audio:
Browser client offer:
// Create offer
var offerOptions = {
    offerToReceiveAudio: 1,
    offerToReceiveVideo: 0
};
pc.createOffer(offerOptions)
    .then(offerCreated);
Native client answer:
webrtc::PeerConnectionInterface::RTCOfferAnswerOptions o;
{
    o.voice_activity_detection = false;
    o.offer_to_receive_audio = webrtc::PeerConnectionInterface::RTCOfferAnswerOptions::kOfferToReceiveMediaTrue;
    o.offer_to_receive_video = webrtc::PeerConnectionInterface::RTCOfferAnswerOptions::kOfferToReceiveMediaTrue;
}
peer_connection_->CreateAnswer(this, o);
I'm unable to find anything recent regarding this issue, and using the received audio within the client application seems like a common use case for the framework. Any ideas where I might be making a mistake in listening for inbound audio, or strategies I could use to investigate why this is not working?
Many thanks
I've managed to find an alternative approach to getting audio data from WebRTC which allows one to work around this issue.
1. Create a custom webrtc::AudioDeviceModule implementation. Look at the webrtc source code to see how one might do this.
2. Capture the webrtc::AudioTransport instance passed to the RegisterAudioCallback method, which is invoked when the call is established. Snippet:
int32_t AudioDevice::RegisterAudioCallback(webrtc::AudioTransport* transport)
{
    transport_ = transport;
    return 0;
}
3. Pull decoded audio out of WebRTC by calling the stored transport's NeedMorePlayData method. (Note: this seems to work with ntp_time_ms being passed in as 0; it appears not to be required.) Snippet:
int32_t AudioDevice::NeedMorePlayData(const size_t nSamples,
                                      const size_t nBytesPerSample,
                                      const size_t nChannels,
                                      const uint32_t samplesPerSec,
                                      void* audioSamples,
                                      size_t& nSamplesOut,
                                      int64_t* elapsed_time_ms,
                                      int64_t* ntp_time_ms) const
{
    return transport_->NeedMorePlayData(nSamples,
                                        nBytesPerSample,
                                        nChannels,
                                        samplesPerSec,
                                        audioSamples,
                                        nSamplesOut,
                                        elapsed_time_ms,
                                        ntp_time_ms);
}
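To drive this, the custom module needs something that requests audio roughly every 10 ms once playout starts. Below is a minimal sketch of such a loop, not code from the webrtc source: the sample rate, channel count, buffer handling, and the playing_ flag are all assumptions, and a real module must still implement the rest of the webrtc::AudioDeviceModule interface (Init, StartPlayout, StopPlayout, and so on).

#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>

// Hypothetical playout loop for the custom AudioDeviceModule.
// Assumes 48 kHz, stereo, 16-bit samples; adjust to the negotiated format.
void AudioDevice::PlayoutThread()
{
    constexpr size_t kSampleRate = 48000;
    constexpr size_t kChannels = 2;
    // nBytesPerSample is assumed to mean one sample frame across all channels,
    // matching how the stock audio device buffer computes it.
    constexpr size_t kBytesPerFrame = sizeof(int16_t) * kChannels;
    constexpr size_t kSamplesPer10Ms = kSampleRate / 100; // one 10 ms chunk

    int16_t buffer[kSamplesPer10Ms * kChannels];

    // playing_ is an assumed std::atomic<bool> member toggled by
    // StartPlayout()/StopPlayout().
    while (playing_)
    {
        size_t samples_out = 0;
        int64_t elapsed_time_ms = -1;
        int64_t ntp_time_ms = 0; // 0 appears to be acceptable, per the note above

        // Pull the next 10 ms of decoded, mixed audio from WebRTC.
        NeedMorePlayData(kSamplesPer10Ms, kBytesPerFrame, kChannels, kSampleRate,
                         buffer, samples_out, &elapsed_time_ms, &ntp_time_ms);

        // Hand the samples to the application here instead of a sound card.
        // ...

        // Sleep so the pull rate roughly matches real time; a production
        // implementation would use a proper timer to avoid drift.
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

Once built, the custom module is supplied in place of the default audio device module when creating the peer connection factory (CreatePeerConnectionFactory has an overload taking an AudioDeviceModule pointer, though the exact signature varies between branches), after which WebRTC pulls decoded audio through this path rather than a physical playout device.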