
Real-Time Audio Analysis in Linux

I'm wondering which audio library is recommended for this?

I'm attempting to make a small program that will aid in tuning instruments (piano, guitar, etc.). I've read about the ALSA and Marsyas audio libraries.

The idea, from what I've read, is to sample data from the microphone, analyze it in chunks of 5-10 ms, then perform an FFT to figure out which frequency contains the largest peak.

asked Mar 31 '09 by MAckerman


2 Answers

This guide should help. Don't use ALSA directly for your application; use a higher-level API. If you decide to use JACK, http://jackaudio.org/applications lists three instrument tuners you can use as example code.

answered Sep 30 '22 by joeforker


Marsyas would be a great choice for this; it's built for exactly this kind of task.

For tuning an instrument, what you need is an algorithm that estimates the fundamental frequency (F0) of a sound. There are a number of algorithms that do this; one of the newest and best is the YIN algorithm, developed by Alain de Cheveigné. I recently added the YIN algorithm to Marsyas, and using it is dead simple.

Here's the basic code that you would use in Marsyas:

  // At the top of the file you'll need the Marsyas header and namespaces:
  #include <iostream>
  #include "MarSystemManager.h"
  using namespace std;
  using namespace Marsyas;

  MarSystemManager mng;

  // A Series to contain everything
  MarSystem* net = mng.create("Series", "series");

  // Process the data from the SoundFileSource with AubioYin
  net->addMarSystem(mng.create("SoundFileSource", "src"));
  net->addMarSystem(mng.create("ShiftInput", "si"));
  net->addMarSystem(mng.create("AubioYin", "yin"));

  // Point the source at the input file
  net->updctrl("SoundFileSource/src/mrs_string/filename", inAudioFileName);

  // Tick the network until the source runs out of data,
  // printing the F0 estimate for each chunk
  while (net->getctrl("SoundFileSource/src/mrs_bool/notEmpty")->to<mrs_bool>()) {
    net->tick();
    realvec r = net->getctrl("mrs_realvec/processedData")->to<mrs_realvec>();
    cout << r(0,0) << endl;
  }

This code first creates a Series object that we add components to. In a Series, each component receives the output of the previous MarSystem in serial. We then add a SoundFileSource, which you can feed a .wav or .mp3 file into. Next we add the ShiftInput object, which outputs overlapping chunks of audio; these are fed into the AubioYin object, which estimates the fundamental frequency of each chunk.

We then tell the SoundFileSource that we want to read the file inAudioFileName.

The while statement then loops until the SoundFileSource runs out of data. Inside the while loop, we take the data that the network has processed and output the (0,0) element, which is the fundamental frequency estimate.

This is even easier when you use the Python bindings for Marsyas.

answered Sep 30 '22 by sness