I'm trying to find a definitive answer and can't, so I'm hoping someone might know.
I'm developing a C++ app using GCC 4.x on Linux (32-bit OS). This app needs to be able to read files > 2GB in size.
I would really like to use iostream stuff vs. FILE pointers, but I can't find out whether the large-file #defines (_LARGEFILE_SOURCE, _LARGEFILE64_SOURCE, _FILE_OFFSET_BITS=64) have any effect on the iostream headers.
I'm compiling on a 32-bit system. Any pointers would be helpful.
This has already been decided for you when libstdc++ was compiled, and normally depends on whether or not _GLIBCXX_USE_LFS was defined in c++config.h.
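Since that macro ends up in the installed c++config.h, another quick check is to ask the preprocessor directly. A minimal sketch (assuming a normally installed GCC, where every libstdc++ header pulls in bits/c++config.h):

#include <iostream>   // any libstdc++ header includes bits/c++config.h

int main()
{
    // _GLIBCXX_USE_LFS is defined (or not) in the installed c++config.h,
    // so this reflects how the libstdc++ you compile against was built.
#ifdef _GLIBCXX_USE_LFS
    std::cout << "libstdc++ was built with large file support\n";
#else
    std::cout << "libstdc++ was built without large file support\n";
#endif
    return 0;
}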
If in doubt, pass your executable (or libstdc++.so, if linking against it dynamically) through readelf -r (or through strings) and see whether your binary/libstdc++ was linked against fopen/fseeko/etc. or fopen64/fseeko64/etc.
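If rebuilding or inspecting binaries is inconvenient, you can also probe at run time whether seeks past 2 GiB actually work. This is only a sketch; the probe file name is made up, and the result also depends on the filesystem supporting files that large:

#include <cstdio>     // std::remove
#include <fstream>
#include <iostream>

int main()
{
    std::ofstream out("lfs_probe.bin", std::ios::binary);

    // Try to position the stream well past the 2 GiB boundary and
    // write a single byte (creating a sparse file on most filesystems).
    const std::streamoff three_gib = 3LL * 1024 * 1024 * 1024;
    out.seekp(three_gib, std::ios::beg);
    out.put('x');
    out.flush();

    std::cout << (out ? "seek past 2 GiB succeeded\n"
                      : "seek past 2 GiB failed - no large file support\n");

    out.close();
    std::remove("lfs_probe.bin");
    return 0;
}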
UPDATE: You don't have to worry about the 2GB limit as long as you don't need/attempt to fseek or ftell (you just read from or write to the stream).
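As an illustration of that purely sequential usage, here is a sketch using a plain std::ofstream with no seeking or telling at all; the file name is made up, and whether it really gets past the 2 GiB mark still depends on how libstdc++ and glibc were built, as discussed above:

#include <fstream>

int main()
{
    std::ofstream out("bigfile", std::ios::binary);   // hypothetical file name
    const char chunk[64 * 1024] = {};                 // 64 KiB of zero bytes

    // Write strictly sequentially: no seekp()/tellp() anywhere, so this
    // code never computes a file offset itself. Stops at ~3 GiB or on error.
    for (unsigned long long written = 0; out && written < (3ULL << 30);
         written += sizeof(chunk))
        out.write(chunk, sizeof(chunk));

    return out ? 0 : 1;
}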
If you are using GCC, you can take advantage of a GCC extension called __gnu_cxx::stdio_filebuf, a stream buffer that ties an IOStream to a standard C FILE stream (or file descriptor).
You need to define the following two macros (e.g. with -D on the compiler command line):
_LARGEFILE_SOURCE
_FILE_OFFSET_BITS=64
For example:
#include <cstdio>
#include <fstream>
#include <ext/stdio_filebuf.h>

// Build with: g++ -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 ...
int main()
{
    std::ofstream outstream;
    FILE* outfile = fopen("bigfile", "w");
    if (!outfile)
        return 1;

    // Wrap the C stream in a stdio_filebuf and install it as the
    // ofstream's stream buffer via the std::ios base class (ofstream's
    // own rdbuf() would return its internal filebuf instead).
    __gnu_cxx::stdio_filebuf<char> fdbuf(outfile,
                                         std::ios::out | std::ios::binary);
    outstream.std::ios::rdbuf(&fdbuf);

    for (double i = 0; i <= 786432000000.0; i++) {
        outstream << "some data";
    }

    outstream.flush();   // make sure fdbuf's buffer reaches the FILE*
    fclose(outfile);
    return 0;
}
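Since the question is about reading rather than writing, here is the same idea in the other direction. Again just a sketch under the same assumptions (both macros defined on the compile line, and a "bigfile" like the one produced above):

#include <cstdio>
#include <istream>
#include <iostream>
#include <ext/stdio_filebuf.h>

int main()
{
    // With _LARGEFILE_SOURCE and _FILE_OFFSET_BITS=64 defined, fopen()
    // resolves to its 64-bit variant, so files > 2 GiB can be opened.
    FILE* infile = fopen("bigfile", "r");
    if (!infile)
        return 1;

    __gnu_cxx::stdio_filebuf<char> fdbuf(infile, std::ios::in | std::ios::binary);
    std::istream instream(&fdbuf);

    // Read sequentially and keep the running total in a 64-bit counter.
    char buf[64 * 1024];
    unsigned long long total = 0;
    while (instream.read(buf, sizeof(buf)) || instream.gcount() > 0)
        total += static_cast<unsigned long long>(instream.gcount());

    std::cout << "read " << total << " bytes\n";
    fclose(infile);
    return 0;
}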