I'm porting C++ code from Linux to Windows. During this process, I found that the following line runs about 10 times slower under Windows (on exactly the same hardware):
list<char*>* item = new list<char*>[160000];
On Windows it takes ~10ms, while on Linux it takes ~1ms. Note that this is the average time: running this line 100 times takes ~1 second on Windows.
This happens both on win32 and x64, both versions are compiled in Release, and the speed is measured via QueryPerformanceCounter (Windows) and gettimeofday (Linux).
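For reference, a minimal sketch of how the Windows-side measurement might look (the actual harness isn't shown here, so the structure below is an assumption; the Linux version would call gettimeofday at the same two points):

#include <windows.h>   // QueryPerformanceCounter / QueryPerformanceFrequency
#include <list>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    std::list<char*>* item = new std::list<char*>[160000];   // the line in question
    QueryPerformanceCounter(&end);

    double ms = 1000.0 * double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    std::printf("allocation took %.3f ms\n", ms);

    delete[] item;
    return 0;
}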
The Linux compiler is gcc. The Windows compiler is VS2010.
Any idea why this could happen?
It is more likely an issue of library implementation. I would expect a single allocation in most cases, with the default constructor of list not allocating anything. So what you're trying to measure is the cost of the default constructor of list (which is executed 160,000 times).
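To illustrate the point, here is a hedged sketch that separates the single raw allocation from the 160,000 default constructions; it is only an illustration of where the cost sits, not code from the question:

#include <list>
#include <new>       // ::operator new[], placement new
#include <cstddef>

int main() {
    typedef std::list<char*> List;
    const std::size_t n = 160000;

    // One big allocation -- usually cheap and roughly comparable on both platforms.
    void* raw = ::operator new[](n * sizeof(List));
    List* items = static_cast<List*>(raw);

    // 160,000 default constructions -- where the per-element cost lives.
    for (std::size_t i = 0; i < n; ++i)
        new (items + i) List();

    // Tear down: destroy each element, then release the block.
    for (std::size_t i = n; i > 0; --i)
        items[i - 1].~List();
    ::operator delete[](raw);
    return 0;
}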
I say "trying to measure", because any measurements that small are measuring clock jitter and resolution more than they're measuring code execution times. You should put this in a loop, to execute it frequently enough to get a runtime of a couple of seconds. And when you do this, you need to take precautions to ensure that the compiler doesn't optimize anything out.
And under Linux, you want to measure using clock(), at least; the wall clock time you get from gettimeofday is very dependent on whatever else happens to be running at the same time. (Don't use clock() under Windows, however. The Windows implementation is broken.)
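A minimal sketch of measuring CPU time with clock() on Linux, as suggested above (on Windows, clock() does not report process CPU time the way POSIX does, which is presumably what "broken" refers to):

#include <ctime>
#include <list>
#include <cstdio>

int main() {
    std::clock_t start = std::clock();

    for (int i = 0; i < 100; ++i)
        delete[] new std::list<char*>[160000];

    std::clock_t end = std::clock();
    std::printf("CPU time: %.3f s\n", double(end - start) / CLOCKS_PER_SEC);
    return 0;
}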
I think this instruction takes very little time on both operating systems, regardless of anything else. It takes so little time that you may actually be measuring the resolution of your timers.