I'm using ALSA for an audio application on Linux. I found great docs explaining how to use it: 1 and this one. Still, I have some trouble understanding this part of the setup:
/* Set number of periods. Periods used to be called fragments. */
if (snd_pcm_hw_params_set_periods(pcm_handle, hwparams, periods, 0) < 0) {
fprintf(stderr, "Error setting periods.\n");
return(-1);
}
What does setting a number of periods mean when I'm using PLAYBACK mode? And:
/* Set buffer size (in frames). The resulting latency is given by */
/* latency = periodsize * periods / (rate * bytes_per_frame) */
if (snd_pcm_hw_params_set_buffer_size(pcm_handle, hwparams, (periodsize * periods)>>2) < 0) {
fprintf(stderr, "Error setting buffersize.\n");
return(-1);
}
And the same question here about the latency: how should I understand it?
A period is the number of frames between each hardware interrupt. poll() will return once per period. The buffer is a ring buffer. The buffer size always has to be greater than one period size; commonly it is 2 * period size, but some hardware can do 8 periods per buffer.
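For playback, you usually don't have to reason in "number of periods" directly: the frames-based API lets you request a period size and a buffer size and have ALSA round both to what the hardware supports. Here is a minimal sketch of that approach (assuming a 44.1 kHz, S16_LE, stereo stream; the sizes are illustrative, not required values):

#include <alsa/asoundlib.h>

int configure_pcm(snd_pcm_t *pcm_handle)
{
    snd_pcm_hw_params_t *hwparams;
    unsigned int rate = 44100;
    snd_pcm_uframes_t period_size = 1024;             /* frames between interrupts */
    snd_pcm_uframes_t buffer_size = 2 * period_size;  /* ring buffer = 2 periods */
    int dir = 0;

    snd_pcm_hw_params_alloca(&hwparams);
    if (snd_pcm_hw_params_any(pcm_handle, hwparams) < 0)
        return -1;

    snd_pcm_hw_params_set_access(pcm_handle, hwparams, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm_handle, hwparams, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm_handle, hwparams, 2);
    snd_pcm_hw_params_set_rate_near(pcm_handle, hwparams, &rate, &dir);

    /* The driver may adjust both values to the nearest sizes it supports. */
    snd_pcm_hw_params_set_period_size_near(pcm_handle, hwparams, &period_size, &dir);
    snd_pcm_hw_params_set_buffer_size_near(pcm_handle, hwparams, &buffer_size);

    if (snd_pcm_hw_params(pcm_handle, hwparams) < 0) {
        fprintf(stderr, "Error setting HW params.\n");
        return -1;
    }

    fprintf(stderr, "period = %lu frames, buffer = %lu frames\n",
            (unsigned long)period_size, (unsigned long)buffer_size);
    return 0;
}

After snd_pcm_hw_params() succeeds, period_size and buffer_size hold the values the hardware actually chose, which is what you should plug into any latency arithmetic.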
I assume you've read and understood this section of linux-journal. You may also find that this blog clarifies things with respect to period size selection (called fragment size in the blog) in the context of ALSA. To quote:
You shouldn't misuse the fragments logic of sound devices. It's like this:
The latency is defined by the buffer size.
The wakeup interval is defined by the fragment size.
The buffer fill level will oscillate between 'full buffer' and 'full buffer minus 1x fragment size minus OS scheduling latency'. Setting smaller fragment sizes will increase the CPU load and decrease battery time since you force the CPU to wake up more often. OTOH it increases drop out safety, since you fill up playback buffer earlier. Choosing the fragment size is hence something which you should do balancing out your needs between power consumption and drop-out safety. With modern processors and a good OS scheduler like the Linux one setting the fragment size to anything other than half the buffer size does not make much sense.
... (Oh, ALSA uses the term 'period' for what I call 'fragment' above. It's synonymous)
So essentially, you would typically set periods to 2 (as was done in the howto you referenced). Then periodsize * periods is your total buffer size in bytes. Finally, the latency is the delay induced by buffering that many samples, and can be computed by dividing the buffer size by the rate at which samples are played back (i.e., according to the formula latency = periodsize * periods / (rate * bytes_per_frame) in the code comments).
For example, with the parameters from the howto (periods = 2, periodsize = 8192 bytes, rate = 44100 Hz, 4 bytes per frame), the total buffer size is periods * periodsize = 2 * 8192 = 16384 bytes, and the latency is 16384 / (44100 * 4) ≈ 0.093 seconds.
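If it helps, the same arithmetic can be written out in code. This is plain C, not part of the ALSA API; the variable names just mirror the howto:

/* Latency estimate for the howto's parameters.
   periodsize is in bytes, as in the howto; 4 bytes per frame = 16-bit stereo. */
unsigned int rate = 44100;          /* frames per second */
unsigned int periods = 2;           /* periods per buffer */
unsigned int periodsize = 8192;     /* bytes per period */
unsigned int bytes_per_frame = 4;   /* S16_LE, 2 channels */

double latency_s = (double)(periodsize * periods) / (rate * bytes_per_frame);
/* 16384 / 176400 ≈ 0.093 s, i.e. roughly 93 ms of buffered audio */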
Note also that your hardware may have size limitations on the supported period size (see this troubleshooting guide).
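If you want to know what your hardware actually allows before committing to a period size, you can query the limits from the hw_params space. A sketch, assuming hwparams has already been filled in by snd_pcm_hw_params_any() for your device:

snd_pcm_uframes_t period_min, period_max, buffer_min, buffer_max;
int dir = 0;

snd_pcm_hw_params_get_period_size_min(hwparams, &period_min, &dir);
snd_pcm_hw_params_get_period_size_max(hwparams, &period_max, &dir);
snd_pcm_hw_params_get_buffer_size_min(hwparams, &buffer_min);
snd_pcm_hw_params_get_buffer_size_max(hwparams, &buffer_max);

fprintf(stderr, "period: %lu..%lu frames, buffer: %lu..%lu frames\n",
        (unsigned long)period_min, (unsigned long)period_max,
        (unsigned long)buffer_min, (unsigned long)buffer_max);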
When the application tries to write samples into the buffer and the buffer is already full, the process goes to sleep. It gets woken up by the hardware through an interrupt; this interrupt is raised at the end of each period.
There should be at least two periods per buffer; otherwise, the buffer is already empty when a wakeup happens, which results in an underrun.
Increasing the number of periods (i.e., reducing the period size) increases the safety margin against underruns caused by scheduling or processing delays.
The latency is just proportional to the buffer size: when you completely fill the buffer, the last sample written is played by the hardware only after all the other samples have been played.
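To see that behaviour in a playback loop, a typical write iteration looks roughly like this (a sketch, assuming a blocking PCM handle and a buf array holding period_size frames of interleaved S16_LE stereo):

snd_pcm_sframes_t written = snd_pcm_writei(pcm_handle, buf, period_size);
if (written == -EPIPE) {
    /* Underrun: the hardware drained the ring buffer before we refilled it. */
    fprintf(stderr, "underrun, recovering\n");
    snd_pcm_prepare(pcm_handle);        /* re-arm the PCM, then write again */
} else if (written < 0) {
    fprintf(stderr, "write error: %s\n", snd_strerror((int)written));
}
/* Otherwise snd_pcm_writei() blocked until at least period_size frames of
   free space were available in the ring buffer, then queued the samples. */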