How can I read real-time audio into a numpy array and plot it with matplotlib? Right now I am recording audio to a .wav file and then using scikits.audiolab.wavread to read it into an array. Is there a way to do this directly, in real time?
audio2numpy loads an audio file and directly outputs the audio data as a numpy array along with its sampling rate. It supports .wav and .aiff via Python's standard library, and .mp3 via ffmpeg.
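As that description notes, .wav files can be handled with Python's standard library alone. A minimal sketch, assuming a hypothetical file path `test.wav` (the script writes a short test file first so it has something to read back):

```python
import wave
import numpy as np

# Write a short mono 16-bit test file (hypothetical path "test.wav")
rate = 44100
samples = (np.sin(2 * np.pi * 440 * np.arange(rate) / rate) * 32767).astype(np.int16)
with wave.open("test.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(rate)
    w.writeframes(samples.tobytes())

# Read it back into a numpy array
with wave.open("test.wav", "rb") as w:
    sr = w.getframerate()
    data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(sr, data.shape)  # 44100 (44100,)
```

Since 16-bit PCM is lossless, the array read back is identical to the one written.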
Librosa is another Python library for working with audio data; see its documentation for details. When a file is loaded, the audio is sampled at a particular sample rate (sr) and returned as a NumPy array. A digitized audio signal is therefore just a NumPy array with an associated sample rate; the underlying analog waveform is a function (e.g. a sine or cosine), and a signal composed from such a function in a NumPy array can be saved back to an audio file.
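A sketch of that composition step, using only the standard-library wave module; the output path `chord.wav` and the two frequencies are invented for illustration:

```python
import wave
import numpy as np

sr = 44100                  # sample rate in Hz
t = np.arange(sr * 2) / sr  # 2 seconds of time points
# Evaluate the analog formula sin(2*pi*f*t) for two frequencies (A4 + E5)
signal = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 659.25 * t)
signal /= np.abs(signal).max()           # normalize to [-1, 1]
pcm = (signal * 32767).astype(np.int16)  # quantize to 16-bit PCM

with wave.open("chord.wav", "wb") as w:  # hypothetical output path
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(sr)
    w.writeframes(pcm.tobytes())
```

Normalizing before quantizing avoids integer overflow when the summed sines exceed an amplitude of 1.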
You can use PyAudio to record audio and use np.frombuffer to convert it into a numpy array.
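The np.frombuffer step can be illustrated without a sound card: the stream delivers raw bytes in the same signed 16-bit layout as this synthetic chunk (sample values invented for illustration):

```python
import numpy as np

# Simulate a chunk of raw bytes like PyAudio's stream.read would return:
# consecutive signed 16-bit samples in native byte order
raw = np.array([0, 1000, -1000, 32767], dtype=np.int16).tobytes()

# frombuffer reinterprets the bytes without copying
samples = np.frombuffer(raw, dtype=np.int16)
print(samples.tolist())  # [0, 1000, -1000, 32767]
```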
```python
import pyaudio
import numpy as np
from matplotlib import pyplot as plt

CHUNKSIZE = 1024  # fixed chunk size

# initialize portaudio
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=44100,
                input=True, frames_per_buffer=CHUNKSIZE)

# do this as long as you want fresh samples
data = stream.read(CHUNKSIZE)
numpydata = np.frombuffer(data, dtype=np.int16)

# plot data
plt.plot(numpydata)
plt.show()

# close stream
stream.stop_stream()
stream.close()
p.terminate()
```
If you want to record stereo instead of mono, you have to set channels to 2. Then you get an array with interleaved channels. You can reshape it like this:
```python
frame = np.frombuffer(data, dtype=np.int16)           # interleaved channels
frame = np.stack((frame[::2], frame[1::2]), axis=0)   # channels on separate axes
```
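The de-interleaving can be checked on synthetic data (sample values invented for illustration):

```python
import numpy as np

# Interleaved stereo samples: L0, R0, L1, R1, L2, R2
interleaved = np.array([10, 20, 11, 21, 12, 22], dtype=np.int16)

# Even indices are the left channel, odd indices the right
frame = np.stack((interleaved[::2], interleaved[1::2]), axis=0)
print(frame.tolist())  # [[10, 11, 12], [20, 21, 22]]
```

Row 0 holds the left channel and row 1 the right, so `frame.shape` is `(2, n_samples)`.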