termios VMIN VTIME and blocking/non-blocking read operations

I am trying to write a simple C serial communication program for Linux. I am confused about the blocking/non-blocking reads and VMIN/VTIME relationships.

My question is: should I set VMIN/VTIME according to whether the open call is blocking or non-blocking?

For example, if I have the following open call:

open("/dev/ttyS0", O_RDWR | O_NONBLOCK | O_NOCTTY)

Should I set the VMIN/VTIME to:

.c_cc[VTIME]    = 0;    
.c_cc[VMIN]     = 0;

and if I have blocking mode like:

open("/dev/ttyS0", O_RDWR | O_NOCTTY)

should I set the VMIN/VTIME to:

.c_cc[VTIME]    = 0;    
.c_cc[VMIN]     = 1;

?

Does it make any difference what VMIN/VTIME are set to even though the port open flags are set appropriately?

If anybody could help me understand the relationship between VMIN/VTIME and blocking/non-blocking ports I would really appreciate it.

Thanks

Asked by Arn, Nov 22 '13
2 Answers

Andrey is right. In non-blocking mode, VMIN/VTIME have no effect (FNDELAY and O_NDELAY appear to be older names for O_NONBLOCK, the portable POSIX flag).

When using select() with a file in non-blocking mode, you get an event for every byte that arrives. At high serial data rates, this hammers the CPU. It's better to use blocking mode with VMIN, so that select() waits for a block of data before firing an event, and VTIME to limit the delay for blocks smaller than VMIN.

Sam said "If you want to make sure you get data every half second you could set vtime" (VTIME = 5).

Intuitively, you may expect that to be true, but it's not. The BSD termios man page explains it better than the Linux one (though both work the same way). The VTIME timer is an interbyte timer: it restarts with each new byte arriving at the serial port. In the worst case, select() can wait many seconds, far longer than VTIME, before firing an event.

Suppose you have VMIN = 250, VTIME = 1 (a 0.1 s timer), and a serial port at 115200 bps. Also suppose an attached device sends single bytes slowly, at a consistent rate of 9 cps. The time between bytes is 0.11 s, long enough for the 0.10 s interbyte timer to expire and for select() to report a readable event for each byte. All is well.

Now suppose your device increases its output rate to 11 cps. The time between bytes drops to 0.09 s, not long enough for the interbyte timer to expire, so the timer restarts with each new byte. To get a readable event, VMIN = 250 must be satisfied, and at 11 cps that takes 22.7 seconds. It may seem that your device has stalled, but the VTIME design is the real cause of the delay.

I tested this with two Perl scripts, sender and receiver, a two port serial card, and a null modem cable. I proved that it works as the man page says. VTIME is an interbyte timer that's reset with the arrival of each new byte.

A better design would have the timer anchored, not rolling. It would continue ticking until it expires, or VMIN is satisfied, whichever comes first. The existing design could be fixed, but there is 30 years of legacy to overcome.

In practice, you may rarely encounter such a scenario. But it lurks, so beware.

Answered by Trifle Menot, Nov 05 '22

Make sure to unset the FNDELAY flag on the descriptor using fcntl(), otherwise VMIN/VTIME are ignored. See the Serial Programming Guide for POSIX Operating Systems.

Answered by Andrey Hanin, Nov 05 '22