 

Cross Compile or Compile Native for CPU Arch

When writing software that is CPU-architecture dependent, such as C code running on x86 or ARM CPUs, there are generally two ways to compile it: either cross-compile for the target architecture (for example, building for ARM while developing on an x86 system), or copy your code to a system with the target CPU and compile it natively.
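For a concrete sketch (assuming GCC and the arm-linux-gnueabihf cross toolchain, purely as an example), the two approaches look like:

    # native: run on the ARM board itself
    gcc -O2 -o hello hello.c

    # cross: run on the x86 development machine, emitting an ARM binary
    arm-linux-gnueabihf-gcc -O2 -o hello hello.c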

I'm wondering if there is a benefit to the native approach versus the cross-compile approach. I noticed that the Fedora ARM team uses a build-server cluster of slow/low-power ARM devices to natively compile their Fedora ARM spin... surely a project backed by Red Hat has access to powerful x86 build servers that could get the job done in half the time... so why that choice? Am I missing something by cross-compiling my software?

asked Dec 27 '22 by SnakeDoc


2 Answers

The main benefit is that ./configure scripts do not need to be tweaked when running natively. Even if you are using a shadow rootfs, you still have configure scripts that run uname to detect the CPU type, and so on (for instance, see this question). pkgconfig and other tools try to ease cross-building, but packages normally get native building on x86 correct first, and then maybe native building on ARM. Cross-building can be painful, as each package may need individual tweaks.
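Roughly (the target triplet and paths below are illustrative, not something the packages themselves dictate):

    # native: configure probes the build machine and just works
    ./configure
    make

    # cross: each package must be told the target explicitly, and any
    # script that still shells out to uname will report the x86 host
    ./configure --host=arm-linux-gnueabihf \
                CC=arm-linux-gnueabihf-gcc \
                PKG_CONFIG_LIBDIR=/usr/arm-linux-gnueabihf/lib/pkgconfig
    make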

Finally, if you are doing profile-guided optimization and running test suites (as per Joachim), it is pretty much impossible to do this in a cross-build environment.
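For example, GCC's profile-guided optimization requires executing the instrumented binary on the target; the flags below are standard GCC, but the program and input names are made up:

    # build an instrumented binary and run it on representative input
    gcc -O2 -fprofile-generate -o app app.c
    ./app < sample-input.txt

    # rebuild using the collected profile data; on a cross build the
    # instrumented ARM binary cannot execute on the x86 build host
    gcc -O2 -fprofile-use -o app app.c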

Compile speed on the ARM, however slow, is significantly faster than the human package builder's cycle of reading configure output, editing configure, re-running configure, compiling, and linking.

This also fits well with a continuous-integration strategy. Various packages, especially libraries, can be built, deployed, and tested quickly. Testing a library may involve hundreds of dependent packages. ARM Linux distributions typically need to prototype changes when upgrading or patching a base library whose hundreds of dependents at least need retesting. A slow cycle done entirely by a computer is always better than a fast compile followed by manual human intervention.

answered Dec 28 '22 by artless noise


No, technically you're not missing anything by cross-compiling, within the context of .c -> .o -> a.out (or whatever); a cross compiler will give you the same binary as a native compiler (versions etc. notwithstanding).
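A quick way to convince yourself, assuming the same GCC version and flags on both sides (toolchain names are illustrative):

    # on the x86 host
    arm-linux-gnueabihf-gcc -O2 -c foo.c -o foo_cross.o

    # on the ARM board
    gcc -O2 -c foo.c -o foo_native.o

    # compare the emitted machine code
    objdump -d foo_cross.o > cross.asm
    objdump -d foo_native.o > native.asm
    diff cross.asm native.asm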

The "advantages" of building natively come from post-compile testing and managing complex systems.

1) If I can run unit tests quickly after compiling, I can get to any bugs/issues quickly; the cycle is presumably shorter than the cross-compiling cycle (see the sketch after this list);

2) If I am compiling some target software that uses third-party libraries, then building, deploying, and then using those libraries to build my target would probably be easier on a native platform; I don't want to deal with cross-compiled builds of those because half of them have build processes written by crazy monkeys that make cross-compiling them a pain.
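Here is the sketch mentioned in 1) above; the cross case assumes qemu-user is available, and the paths and target names are illustrative:

    # native: build and test in one short cycle
    make && make check

    # cross: the test binaries target ARM, so they must run under an
    # emulator or be copied to real hardware first
    make CC=arm-linux-gnueabihf-gcc
    qemu-arm -L /usr/arm-linux-gnueabihf ./tests/unit_tests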

Typically, for most things, one would try to get to a base build and then compile the rest natively. The exception is if I have a setup where my cross compiler is super wicked fast and the time I save there is worth the setup required to make everything else (such as unit testing and dependency management) easier.

At least those are my thoughts.

answered Dec 28 '22 by Ahmed Masud