
How to get IOStream to perform better?

Here is what I have gathered so far:

Buffering:

If the default buffer is very small, increasing the buffer size can definitely improve performance:

  • it reduces the number of HDD hits
  • it reduces the number of system calls

The buffer can be set by accessing the underlying streambuf implementation:

char Buffer[N];

std::ifstream file("file.txt");

file.rdbuf()->pubsetbuf(Buffer, N);
// the pointer returned by rdbuf() is guaranteed
// to be non-null after a successful constructor call

Warning courtesy of @iavr: according to cppreference it is best to call pubsetbuf before opening the file. Various standard library implementations otherwise have different behaviors.
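
Following that advice, a minimal sketch (reusing the Buffer and N placeholders from the snippet above) installs the buffer before opening:

char Buffer[N];

std::ifstream file;
file.rdbuf()->pubsetbuf(Buffer, N); // install the buffer first...
file.open("file.txt");              // ...then open the file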

Locale Handling:

Locales can perform character conversion, filtering, and cleverer tricks where numbers or dates are involved. They go through a complex system of dynamic dispatch and virtual calls, so removing them can help trim down the penalty.

The default C locale is meant to perform no conversion and to be uniform across machines. It's a good default to use.
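
A minimal sketch of explicitly opting a stream into that classic locale (imbue before doing any I/O on the stream):

#include <fstream>
#include <locale>

std::ifstream file("file.txt");
file.imbue(std::locale::classic()); // the classic "C" locale: no conversions, uniform everywhere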

Synchronization:

I could not see any performance improvement using this facility.

This is a global setting (a static member of std::ios_base), accessed through the sync_with_stdio static function.
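
For reference, the call itself looks like this; it returns the previous setting, which the full benchmark below saves and restores:

#include <ios>

// detach the C++ standard streams from their C stdio counterparts;
// the return value is the previous setting
bool oldSyncSetting = std::ios_base::sync_with_stdio(false);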

Measurements:

To play with this, I toyed with a simple program, compiled using gcc 3.4.2 on SUSE 10p3 with -O2.

C : 7.76532e+06
C++: 1.0874e+07

Which represents a slowdown of about 40%... for the default code. Indeed, tampering with the buffer (in either C or C++) or with the synchronization parameters (C++) did not yield any improvement.

Results by others:

@Irfy on g++ 4.7.2-2ubuntu1, -O3, virtualized Ubuntu 11.10, 3.5.0-25-generic, x86_64, enough ram/cpu, 196MB of several "find / >> largefile.txt" runs

C : 634572
C++: 473222

C++ 25% faster

@Matteo Italia on g++ 4.4.5, -O3, Ubuntu Linux 10.10 x86_64 with a random 180 MB file

C : 910390
C++: 776016

C++ 17% faster

@Bogatyr on g++ i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), mac mini, 4GB ram, idle except for this test with a 168MB datafile

C : 4.34151e+06
C++: 9.14476e+06

C++ 111% slower

@Asu on clang++ 3.8.0-2ubuntu4, Kubuntu 16.04 Linux 4.8-rc3, 8GB ram, i5 Haswell, Crucial SSD, 88MB datafile (tar.xz archive)

C : 270895
C++: 162799

C++ 66% faster

So the answer is: it's a quality of implementation issue, and really depends on the platform :/

The code in full here for those interested in benchmarking:

#include <fstream>
#include <iostream>
#include <iomanip>
#include <string>

#include <cmath>
#include <cstdio>
#include <cstdlib>  // atoi
#include <cstring>  // strcmp
#include <clocale>  // setlocale

#include <sys/time.h>

template <typename Func>
double benchmark(Func f, size_t iterations)
{
  f();

  timeval a, b;
  gettimeofday(&a, 0);
  for (; iterations --> 0;)
  {
    f();
  }
  gettimeofday(&b, 0);
  return (b.tv_sec * (unsigned int)1e6 + b.tv_usec) -
         (a.tv_sec * (unsigned int)1e6 + a.tv_usec);
}


struct CRead
{
  CRead(char const* filename): _filename(filename) {}

  void operator()() {
    FILE* file = fopen(_filename, "r");

    int count = 0;
    while ( fscanf(file,"%1023s", _buffer) == 1 ) { ++count; }  // field width bounded to fit _buffer

    fclose(file);
  }

  char const* _filename;
  char _buffer[1024];
};

struct CppRead
{
  CppRead(char const* filename): _filename(filename), _buffer() {}

  enum { BufferSize = 16184 };

  void operator()() {
    std::ifstream file(_filename, std::ifstream::in);

    // comment to remove extended buffer
    file.rdbuf()->pubsetbuf(_buffer, BufferSize);

    int count = 0;
    std::string s;
    while ( file >> s ) { ++count; }
  }

  char const* _filename;
  char _buffer[BufferSize];
};


int main(int argc, char* argv[])
{
  size_t iterations = 1;
  if (argc > 1) { iterations = atoi(argv[1]); }

  char const* oldLocale = setlocale(LC_ALL,"C");
  if (strcmp(oldLocale, "C") != 0) {
    std::cout << "Replaced old locale '" << oldLocale << "' by 'C'\n";
  }

  char const* filename = "largefile.txt";

  CRead cread(filename);
  CppRead cppread(filename);

  // comment to use the default setting
  bool oldSyncSetting = std::ios_base::sync_with_stdio(false);

  double ctime = benchmark(cread, iterations);
  double cpptime = benchmark(cppread, iterations);

  // comment if oldSyncSetting's declaration is commented
  std::ios_base::sync_with_stdio(oldSyncSetting);

  std::cout << "C  : " << ctime << "\n"
               "C++: " << cpptime << "\n";

  return 0;
}

Two more improvements:

Issue std::cin.tie(nullptr); before heavy input/output.

Quoting http://en.cppreference.com/w/cpp/io/cin:

Once std::cin is constructed, std::cin.tie() returns &std::cout, and likewise, std::wcin.tie() returns &std::wcout. This means that any formatted input operation on std::cin forces a call to std::cout.flush() if any characters are pending for output.

You can avoid flushing the buffer by untying std::cin from std::cout. This is relevant with multiple mixed calls to std::cin and std::cout. Note that calling std::cin.tie(nullptr); makes the program unsuitable for interactive use, since output may be delayed.
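
If the program needs interactive prompting again afterwards, the previously tied stream can be saved and restored; a small sketch:

std::ostream* previous = std::cin.tie(nullptr); // untie; returns &std::cout by default

// ... heavy mixed std::cin / std::cout work ...

std::cin.tie(previous); // restore the flush-before-input behaviour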

Relevant benchmark:

File test1.cpp:

#include <iostream>
using namespace std;

int main()
{
  ios_base::sync_with_stdio(false);

  int i;
  while(cin >> i)
    cout << i << '\n';
}

File test2.cpp:

#include <iostream>
using namespace std;

int main()
{
  ios_base::sync_with_stdio(false);
  cin.tie(nullptr);

  int i;
  while(cin >> i)
    cout << i << '\n';

  cout.flush();
}

Both compiled by g++ -O2 -std=c++11. Compiler version: g++ (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4 (yeah, I know, pretty old).

Benchmark results:

work@mg-K54C ~ $ time ./test1 < test.in > test1.in

real    0m3.140s
user    0m0.581s
sys 0m2.560s
work@mg-K54C ~ $ time ./test2 < test.in > test2.in

real    0m0.234s
user    0m0.234s
sys 0m0.000s

(test.in consists of 1179648 lines, each containing only a single 5. It's 2.4 MB, so sorry for not posting it here.)

I remember solving an algorithmic task where the online judge kept rejecting my program without cin.tie(nullptr), but accepted it with cin.tie(nullptr) or with printf/scanf instead of cin/cout.

Use '\n' instead of std::endl.

Quoting http://en.cppreference.com/w/cpp/io/manip/endl :

Inserts a newline character into the output sequence os and flushes it as if by calling os.put(os.widen('\n')) followed by os.flush().

You can avoid flushing the buffer by printing '\n' instead of std::endl.

Relevant benchmark:

File test1.cpp:

#include <iostream>
using namespace std;

int main()
{
  ios_base::sync_with_stdio(false);

  for(int i = 0; i < 1179648; ++i)
    cout << i << endl;
}

File test2.cpp:

#include <iostream>
using namespace std;

int main()
{
  ios_base::sync_with_stdio(false);

  for(int i = 0; i < 1179648; ++i)
    cout << i << '\n';
}

Both compiled as above.

Benchmark results:

work@mg-K54C ~ $ time ./test1 > test1.in

real    0m2.946s
user    0m0.404s
sys 0m2.543s
work@mg-K54C ~ $ time ./test2 > test2.in

real    0m0.156s
user    0m0.135s
sys 0m0.020s

Interesting that you say C programmers prefer printf when writing C++, as I see a lot of code that is otherwise C except for using cout and iostream to write the output.

Users can often get better performance by using filebuf directly (Scott Meyers mentions this in Effective STL), but there is relatively little documentation on using filebuf directly, and most developers prefer std::getline, which is simpler most of the time.
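
As a rough sketch of what reading through a filebuf directly can look like (this counts raw bytes rather than whitespace-delimited words, so it is not a drop-in replacement for the benchmark loops above):

#include <fstream>
#include <iostream>

int main()
{
  std::filebuf fb;
  if (!fb.open("largefile.txt", std::ios::in | std::ios::binary))
    return 1;

  char chunk[1 << 16];
  std::streamsize total = 0;

  // pull characters straight from the buffer, bypassing operator>> entirely
  for (std::streamsize n; (n = fb.sgetn(chunk, sizeof chunk)) > 0; )
    total += n;

  std::cout << total << " bytes read\n";
}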

With regard to locales, if you create facets you will often get better performance by creating a locale once with all your facets, keeping it stored, and imbuing it into each stream you use.
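
A sketch of that pattern, with a hypothetical thousands-separator facet standing in for whatever facets you actually need:

#include <iostream>
#include <locale>
#include <sstream>
#include <string>

// hypothetical facet: group digits with commas
struct CommaSep : std::numpunct<char>
{
  char do_thousands_sep() const override { return ','; }
  std::string do_grouping() const override { return "\3"; }
};

int main()
{
  // build the locale once, with all the facets you need...
  std::locale withCommas(std::locale::classic(), new CommaSep);

  // ...then imbue it into each stream that uses it
  std::ostringstream out;
  out.imbue(withCommas);
  out << 1234567;

  std::cout << out.str() << '\n'; // prints 1,234,567
}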

I did see another topic on this here recently, so this is close to being a duplicate.