I'm stuck on this. Currently I'm using:
FILE *a = fopen("sample.txt", "r");
int n;
while ((n = fgetc(a)) != EOF) {
    putchar(n);
}
However, this method seems a bit inefficient. Is there a better way? I tried using fgets:
char *s;
fgets(s, 600, a);
puts(s);
One thing I find wrong with this second method is that you would need a really large number for the second argument of fgets.
Thanks for all the suggestions. I found a way (someone on IRC told me this) using open(), read(), and write().
#include <fcntl.h>
#include <unistd.h>

char *filename = "sample.txt";
char buf[8192];
ssize_t r = -1;
int in = open(filename, O_RDONLY), out = STDOUT_FILENO;  /* stdout is fd 1; fd 0 is stdin */
if (in == -1)
    return -1;
while (1) {
    r = read(in, buf, sizeof(buf));
    if (r == -1 || r == 0) { break; }
    /* Note: a fully robust copy would retry on partial writes. */
    r = write(out, buf, r);
    if (r == -1 || r == 0) { break; }
}
close(in);
The second snippet is broken: s is an uninitialized pointer, so fgets writes through a garbage address. You need to allocate a buffer, e.g.:
char s[4096];
fgets(s, sizeof(s), a);
Of course, this doesn't solve your problem.
Read fixed-size chunks from the input and write out whatever gets read in:
int n;
char s[65536];
while ((n = fread(s, 1, sizeof(s), a))) {
    fwrite(s, 1, n, stdout);
}
You might also want to check ferror(a) in case the loop stopped for some reason other than reaching EOF.
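For reference, a minimal sketch of the same loop with that check added, assuming a is the FILE * opened in the question:

#include <stdio.h>

size_t n;
char s[65536];
while ((n = fread(s, 1, sizeof(s), a)) > 0) {
    fwrite(s, 1, n, stdout);
}
if (ferror(a)) {
    /* The loop ended because of a read error, not because of EOF. */
    fprintf(stderr, "read error before EOF\n");
}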
The most efficient method will depend greatly on the operating system. For example, on Linux you can use sendfile:
#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>
struct stat buf;
int fd = open(filename, O_RDONLY);
fstat(fd, &buf);
sendfile(STDOUT_FILENO, fd, NULL, buf.st_size);  /* destination is stdout (fd 1), not 0 */
This does the copy directly in the kernel, minimizing unnecessary memory-to-memory copies. Other platforms may have similar approaches, such as write()ing to stdout from an mmaped buffer.
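For what it's worth, here is a minimal sketch of that mmap-and-write idea on a POSIX system (the helper name cat_mmap and the abbreviated error handling are my own, not part of the answer above):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int cat_mmap(const char *filename)   /* hypothetical helper */
{
    int fd = open(filename, O_RDONLY);
    if (fd == -1)
        return -1;

    struct stat st;
    /* mmap() rejects a length of 0, so empty files need a special case. */
    if (fstat(fd, &st) == -1 || st.st_size == 0) {
        close(fd);
        return -1;
    }

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return -1;
    }

    /* One write() of the whole mapping; a robust version would loop on
     * partial writes. */
    write(STDOUT_FILENO, p, st.st_size);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}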
I believe the FILE returned by fopen is typically (always?) buffered, so your first example is not as inefficient as you may think.
The second might perform a little better... if you correct the errors: remember to allocate the buffer, and remember that puts adds a newline!
Another option is to use binary reads (fread).
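A minimal sketch of the corrected fgets approach, assuming a is the FILE * from the question, would be to use a stack buffer and fputs (which, unlike puts, does not append a newline):

#include <stdio.h>

char line[4096];
while (fgets(line, sizeof(line), a) != NULL) {
    fputs(line, stdout);
}

Lines longer than the buffer are simply delivered in pieces across iterations, so there is no need for a really large second argument.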