I have one huge C file. Within the file, there is a giant struct (over 1 million lines). Is there a way to compile this file in parallel using additional cores?
Edit: Sorry, after reviewing my code and my question, the giant thing is not the struct definition itself; rather, it's an array of that struct...
If it is the struct definition that is over 1 million lines, then you're probably out of luck.
But if what spans that many lines is the declaration of a variable of that struct type (whether an array or just one very big struct), then I would suggest placing the variable definition in a separate .c file by itself and using the extern keyword in any other .c file that needs to access it. That way it only needs to be recompiled when it changes.
For example if you had the following:
//Filename: onefile.c
struct _bigStruct {
    int type;
    char *name;
} bigStruct[] = {
    { 1, "One" },
    { 2, "Two" },
    { 3, "Three" },
    { 4, "Four" },
    ...
};

int someFunction(int j, int x)
{
    if (j == bigStruct[x].type)
        //do something
}
Then I would change it to the following:
//Filename: bigstruct.h
struct _bigStruct {
    int type;
    char *name;
};
and
//Filename: bigstruct.c
#include "bigstruct.h"   /* needed so the struct type is complete here */

struct _bigStruct bigStruct[] = {
    { 1, "One" },
    { 2, "Two" },
    { 3, "Three" },
    { 4, "Four" },
    ...
};
and
//Filename: main.c
#include "bigstruct.h"

extern struct _bigStruct bigStruct[];

int someFunction(int j, int x)
{
    if (j == bigStruct[x].type)
        //do something
}