How Can I Make My Highly Modular Project Easy for End Users to Compile?

I am working on a relatively large set of serial code C-code libraries, which will then be parallelized in CUDA by my collaborators.

For this project my code essentially boils down to

#include "Initialize.cpp"
#include "PerformMoves.cpp"
#include "CollectResults.cpp"

int main()
{
   //DECLARE General Vars

   Initialize();

   for (unsigned int step=0; step < maxStep; step++)
   {
      PerformMoves();
   }

   CollectResults();
}

Now the steps I perform inside Initialize and PerformMoves will be very different depending on what kind of simulation I'm building. Speed is of the utmost importance, as my code is a Monte Carlo simulation that will perform millions of moves, each of which involves potentially thousands of calculations. Thus I want to avoid any unnecessary conditionals.

Thus I essentially want different "plug and play" C modules, e.g.

InitializeSimType1.cpp InitializeSimType2.cpp InitializeSimType3.cpp

PerformMovesType1.cpp PerformMovesType2.cpp PerformMovesType3.cpp

....

Each optimized for a certain type of simulation procedure, without the "fluff" of large conditionals to handle a variety of cases.

Currently I have two different types of simulations, and just have these in two different folders and compile as follows:

g++ Main.cc Initialize.cpp PerformMoves.cpp CollectResults.cpp -o MC_Prog

I would like to move to some sort of conditional compilation scheme, where I have some sort of config file in which I specify options, and the build grabs the correct .cpp files and compiles them.

I would assume that a makefile plus a config file is my best bet, but beyond basic targets and very linear compilation I'm a novice in the world of complex makefiles.

What are some good ways to create a list-driven compilation system that would let users with very basic C knowledge easily build a target with the modules they want? My end users won't have much makefile knowledge (or programming knowledge in general), so a complex system on their end is out of the question. Basically, I want them to have a transparent, configuration-file-driven system that lets them pick one Initialize module, one PerformMoves module, and one CollectResults module. Once they've entered their picks in the configuration file, I want them to be able to compile with a single command.

In other words I want to be able to create a README that directs users:

  1. Input these entries in this config file (gives config file name, options to put for each config file entry)...
  2. Compile using this command (gives single command)
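
For concreteness, here is one minimal sketch of such a setup (variable names are hypothetical; module file names follow the ones in the question). Users edit a single config.mk, and the Makefile picks up the chosen modules from it — assuming the module .cpp files are compiled alongside Main.cc, as in the current g++ command, rather than #included into it:

```make
# config.mk -- the only file end users edit.
# Valid values follow the module file names, e.g.:
#   INIT_MODULE:    InitializeSimType1, InitializeSimType2, ...
#   MOVES_MODULE:   PerformMovesType1, PerformMovesType2, ...
INIT_MODULE    = InitializeSimType1
MOVES_MODULE   = PerformMovesType2
RESULTS_MODULE = CollectResults
```

```make
# Makefile -- maintained by the project; users never touch it.
include config.mk

SRCS = Main.cc $(INIT_MODULE).cpp $(MOVES_MODULE).cpp $(RESULTS_MODULE).cpp

MC_Prog: $(SRCS) config.mk
	g++ -O2 $(SRCS) -o $@

.PHONY: clean
clean:
	rm -f MC_Prog
```

With this layout the README really does reduce to the two steps above: edit config.mk, then run make. Listing config.mk as a prerequisite also makes the program rebuild automatically whenever the user changes their module picks.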

I know this is a rather abstract and complex question, but I know there are some Makefile/C experts out there who could offer me some good guidance.

Asked Nov 14 '22 by Jason R. Mick

1 Answer

I can think of a couple of similar build systems:

  1. When you build GCC, it builds libraries of helper routines (e.g. libgcc.a). It wants to ensure that only the necessary functions link into your program. Since the unit of linker granularity is a single .o, it builds one .c file dozens of times with different -D_enable_xxx options to turn on each function individually. Then it uses ar to bundle the resulting .o files into a .a, which you can link against to get only what you need.
  2. The X window server has several functions that perform largely identical operations, except that the math in the very innermost loop is different. For example, it might XOR, AND, or OR bits together. It doesn't want a conditional in its inner loop either, so it builds the same file over and over with different -D_math_xor options to produce dozens of .o files, each containing one function that performs the specific math operation wrapped in the looping function. It uses all of the .o files (because the operation is actually selected by the X client, not at compile time), so the .a technique is not important there.

It sounds like you could use a mix of those approaches to produce a library of .o files, where each one is pre-compiled with your specific options, and then rebuild just main.c over and over, calling different functions and linking against your library.

Based on your comments below, here are a few other ideas:

  1. When you build binutils, GCC, glibc, etc., you unpack the source distribution but you don't build inside the source tree. Instead you run: mkdir obj-mytarget; cd obj-mytarget; ../path/to/gcc/configure --options...; make. The configure script creates a config.h and a Makefile specific to the options (and, of course, the system capabilities). One big advantage of this setup is that all of the build products (from .o files to executables) for that set of options are self-contained and do not interfere with builds using other options.
  2. The classic BSD kernel configuration is done by editing a file (idiomatically given an all-caps name, like GENERIC, which produces the default kernel, or LINT, which turns on everything for static-analysis purposes) and then invoking config CONFNAME. config parses the simple config file and creates a directory tree named after the file. The directory tree contains a Makefile and various .h files that control the build. Same basic advantages as the method above.
  3. avr-libc (a libc for Atmel AVR microcontrollers) automates the above approaches by automatically creating target directories for dozens of microcontrollers (with variations in memory size, peripheral availability, etc.) and building all of the variations in a single pass. The result is many libraries, providing a cross-build environment capable of targeting any AVR. You could use a similar idea to automatically build every permutation of your program.
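
A minimal version of the config(8)-style idea from points 1 and 2 — a script step that turns a simple user-edited option file into a self-contained build directory — might look like this (all file and variable names are hypothetical):

```shell
# SIM1 is a user-edited option file in simple key=value form.
cat > SIM1 <<'EOF'
INIT_MODULE=InitializeSimType1
MOVES_MODULE=PerformMovesType1
RESULTS_MODULE=CollectResults
EOF

# "config"-style step: generate a per-configuration build directory.
# The option file doubles as a make include, so no parsing is needed.
CONF=SIM1
mkdir -p "build-$CONF"
cp "$CONF" "build-$CONF/config.mk"
printf 'include config.mk\nSRCS = Main.cc $(INIT_MODULE).cpp $(MOVES_MODULE).cpp $(RESULTS_MODULE).cpp\nMC_Prog: $(SRCS)\n\tg++ $(SRCS) -o $@\n' \
    > "build-$CONF/Makefile"

ls "build-$CONF"    # shows the generated config.mk and Makefile
```

Each configuration then builds in its own directory (cd build-SIM1 && make), so, as with the GCC and BSD schemes above, builds with different options never interfere with one another.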
Answered Dec 28 '22 by Ben Jackson