I'm reading quite large lines of text (up to 128K) using fgets. I'm seeing excessive context switching on the server; using strace I see the following:

read(3, "9005 10218 00840023102015 201008"..., 4096) = 4096

i.e. fgets reads chunks of 4096 bytes at a time. Is there any way to control how big a chunk fgets uses when calling read()?
setvbuf would be the obvious place to start.
The function fgets() is part of the stdio package, and as such it must buffer (or not) the input stream in a way that is consistent with also using fgetc(), fscanf(), fread(), and so forth. That means that the buffer itself (if the stream is buffered) is the property of the FILE object.

Whether the stream is buffered at all, and if so how large the buffer is, can be suggested to the library by calling setvbuf().
The library implementation has a fair amount of latitude to ignore hints and do what it thinks best, but buffers that are "reasonable" powers of two in size will usually be accepted. You've noticed that the default was 4096, which is clearly smaller than optimal.
The stream is fully buffered by default if it is opened on an actual file. Streams opened on a pipe, FIFO, TTY, or anything else may have different default buffering (a TTY is typically line buffered, for instance).