How do I get Windows to go as fast as Linux for compiling C++?

Unless a hardcore Windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).

  1. File system - You should try the same operations (including the dir command) on the same filesystem. I came across a comparison that benchmarks a few filesystems on various parameters.

  2. Caching. I once tried to run a compilation on Linux on a RAM disk and found that it was slower than running it on disk: the kernel's page cache was already serving the hot files from memory, so the RAM disk bought nothing. This caching behavior is a solid selling point for Linux and might be the reason why the performance is so different.

  3. Bad dependency specifications on Windows. Maybe the Chromium dependency specifications for Windows are not as correct as those for Linux. This might result in unnecessary recompilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows; one quick check is sketched below.
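
One cheap way to test the dependency-specification theory, assuming an MSBuild-based build (the solution name here is a placeholder): build twice without changing anything in between. With correct dependencies the second run should do essentially no work; if it recompiles files, the specifications are broken.

    rem Build twice with no edits in between; the second pass should be a near no-op.
    msbuild chromium.sln /m
    msbuild chromium.sln /m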


A few ideas:

  1. Disable 8.3 names. This can be a big factor on drives with a large number of files and a relatively small number of folders: fsutil behavior set disable8dot3 1 (this and the other fsutil tweaks in this list are gathered into a single script after the list)
  2. Use more folders. In my experience, NTFS starts to slow down with more than about 1000 files per folder.
  3. Enable parallel builds with MSBuild; just add the "/m" switch, and it will automatically start one copy of MSBuild per CPU core.
  4. Put your files on an SSD -- helps hugely for random I/O.
  5. If your average file size is much greater than 4KB, consider rebuilding the filesystem with a larger cluster size that corresponds roughly to your average file size.
  6. Make sure the files have been defragmented. Fragmented files cause lots of disk seeks, which can cost you a factor of 40+ in throughput. Use the "contig" utility from Sysinternals, or the built-in Windows defragmenter.
  7. If your average file size is small, and the partition you're on is relatively full, it's possible that you are running with a fragmented MFT, which is bad for performance. Also, files smaller than 1K are stored directly in the MFT. The "contig" utility mentioned above can help, or you may need to increase the MFT size. The following command will double it, to 25% of the volume: "fsutil behavior set mftzone 2". Change the last number to 3 or 4 to increase the size by additional 12.5% increments. After running the command, reboot, then create the filesystem.
  8. Disable last access time: fsutil behavior set disablelastaccess 1
  9. Disable the indexing service
  10. Disable your anti-virus and anti-spyware software, or at least set the relevant folders to be ignored.
  11. Put your files on a different physical drive from the OS and the paging file. Using a separate physical drive allows Windows to use parallel I/Os to both drives.
  12. Have a look at your compiler flags. The Windows C++ compiler has a ton of options; make sure you're only using the ones you really need.
  13. Try increasing the amount of memory the OS uses for paged-pool buffers (make sure you have enough RAM first): fsutil behavior set memoryusage 2
  14. Check the Windows error log to make sure you aren't experiencing occasional disk errors.
  15. Have a look at Physical Disk related performance counters to see how busy your disks are. High queue lengths or long times per transfer are bad signs.
  16. The first 30% of a disk's capacity is much faster than the rest of the disk in terms of raw transfer speed. Narrower partitions also help minimize seek times.
  17. Are you using RAID? If so, you may need to optimize your choice of RAID type (RAID-5 is bad for write-heavy operations like compiling)
  18. Disable any services that you don't need
  19. Defragment folders: copy just the files to another drive; delete the original files; copy just the (now empty) folders across; delete the original folders; defragment the original drive; copy the folder structure back first; then copy the files back. When Windows builds large folders one file at a time, the folders end up being fragmented and slow. ("contig" should help here, too)
  20. If you are I/O bound and have CPU cycles to spare, try turning disk compression ON. It can provide some significant speedups for highly compressible files (like source code), with some cost in CPU.
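
For convenience, here is a minimal sketch that gathers the fsutil settings above (tips 1, 7, 8, and 13) together with the parallel-build and compression tips into one elevated batch session. The solution name and the C:\src path are placeholders, and most of the fsutil settings only take effect after a reboot.

    rem Run from an elevated (administrator) command prompt.
    rem Tip 1: stop generating 8.3 short names.
    fsutil behavior set disable8dot3 1
    rem Tip 8: stop last-access-time updates.
    fsutil behavior set disablelastaccess 1
    rem Tip 7: reserve 25% of the volume for the MFT (3 or 4 for larger increments).
    fsutil behavior set mftzone 2
    rem Tip 13: let the OS use more memory for paged-pool buffers.
    fsutil behavior set memoryusage 2
    rem Tip 20: compress a (placeholder) source tree in place.
    compact /c /s:C:\src
    rem Tip 3: parallel build, one MSBuild node per CPU core (placeholder solution name).
    msbuild MySolution.sln /m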

NTFS saves the file access time every time a file is touched. You can try disabling it: "fsutil behavior set disablelastaccess 1" (then restart)


The issue with Visual C++ is, as far as I can tell, that it is not a priority for the compiler team to optimize this scenario. Their solution is that you use their precompiled header feature. This is what Windows-specific projects have done. It is not portable, but it works.
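
In case a concrete picture helps, here is a rough sketch of that precompiled header workflow with cl.exe; the file names are made up for illustration, and with /Yu every source file has to start with the same "pch.h" include.

    // pch.h - gather the heavy, rarely-changing headers in one place
    #pragma once
    #include <windows.h>
    #include <string>
    #include <vector>

    // pch.cpp - exists only so the compiler can produce the .pch file
    #include "pch.h"

    // main.cpp - ordinary source; the PCH include must come first
    #include "pch.h"
    int main() { return 0; }

Compile the header once with /Yc, then let every other translation unit reuse it with /Yu:

    cl /c /Yc"pch.h" pch.cpp
    cl /c /Yu"pch.h" main.cpp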

Furthermore, on Windows you typically have virus scanners, as well as system restore and search tools, that can ruin your build times completely if they monitor your build folder for you. The Windows 7 Resource Monitor can help you spot it. I have a reply here with some further tips for optimizing VC++ build times if you're really interested.
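
If a scanner turns out to be the culprit and you are on a Windows version that ships the Defender PowerShell module, something along these lines can exclude the build folder (a sketch; the path is a placeholder, and other scanners have their own exclusion settings):

    rem Requires an elevated prompt; excludes a (placeholder) build folder
    rem from Windows Defender's real-time scanning.
    powershell -Command "Add-MpPreference -ExclusionPath 'C:\src\build'"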