This question is not intended to be a repeat of "Why should I not include cpp files and instead use a header?" and is more of a question of practices.
To best frame this question, let me explain. When writing a class, it can quickly grow to a few hundred lines, if not more. For readability purposes I would like to break a class into individual files on a per-method basis. To be clear, I am not suggesting making the entire project into a set of includes, for reasons mentioned in the post listed above, but rather breaking a class into components contained in their own files.
The following snippets of code illustrate what I mean.
// main.cpp
#include <iostream>
#include "helloClass.h"
using namespace std;

int main()
{
    hello a;
    cout << a.out();
    cin.get(); // just to pause execution
}
// helloClass.h
#ifndef HELLOCLASS_H
#define HELLOCLASS_H
#include <string>

class hello
{
    std::string message;
public:
    std::string out();
    hello();
};
#endif
// helloClassMain.cpp
#include "helloClass.h"
using namespace std;

hello::hello()
{
    message = "Hello World!";
}

#include "helloClassOut.cpp"
// helloClassOut.cpp
string hello::out()
{
    return message;
}
This will compile fine and execute as expected. I find an added benefit when there happens to be an error, because the compiler will tell you not only the line but also the file that the error is in. For example, I compiled it with an undeclared variable:
$ c++ main.cpp helloClassMain.cpp -o hello
In file included from helloClassMain.cpp:8:0:
helloClassOut.cpp: In member function ‘std::string hello::out()’:
helloClassOut.cpp:3:1: error: ‘fail’ was not declared in this scope
fail="test";
I find this helpful, to say the least, and it allows me to think of the file helloClassMain.cpp as the entry point for the hello class and all of its methods and attributes.
I understand that, because of how the compiler works, this is in the end the same thing as having the class written out in a single file. It is just a matter of being able to read, troubleshoot, etc. more easily.
Finally, the questions.
- Is this bad practice?
Usually, yes. The only time I can think where this might be desirable is if you have some methods that are implemented differently on different systems (e.g. Windows and Linux). If there's a system-specific method, you might do something like this. But otherwise it's frowned upon.
- Would it make hell for those who are collaborating with me on a project?
Yes, because:
- Readers can no longer find a method by looking at which .cpp files get compiled; they have to trace a chain of #include directives instead (just to locate the definitions they need).
- These .cpp files are #included instead of compiled. That's misleading.
- #include-ing .cpp files into another .cpp file is a great way to get namespace clashes. Maybe one .cpp file has a global, static helper function that conflicts with another .cpp's global static helper. Maybe your using namespace std; in one file messes up another file.
- If you compiled the .cpp files separately instead of #include-ing them, you could take advantage of parallel compilation. Using #include is going to be much slower, because if you change one method just a little, then all of them have to be reprocessed and compiled. If instead you compiled every .cpp separately (which is the normal thing to do), then changing one .cpp file means only that file needs to be reprocessed and recompiled.
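The static-helper clash can be demonstrated concretely. This is a minimal sketch with hypothetical file names a.cpp and b.cpp: each file's static helper is fine when the files are compiled separately, but #include-ing one .cpp into the other puts both definitions into one translation unit:

```shell
# Two .cpp files, each with its own internal (static) helper.
cat > a.cpp <<'EOF'
static int helper() { return 1; }   // internal to a.cpp
int a_value() { return helper(); }
EOF

cat > b.cpp <<'EOF'
#include "a.cpp"                    // drags a.cpp's helper() in here
static int helper() { return 2; }   // redefinition: clash in one file
int b_value() { return helper(); }
EOF

# Compiled on its own, a.cpp is fine; b.cpp fails to compile because
# helper() is now defined twice in the same translation unit.
c++ -c a.cpp && echo "a.cpp compiles on its own"
c++ -c b.cpp 2>/dev/null || echo "b.cpp fails to compile"
```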
- Is there a core concept that I am missing here? I am sure I am not the first to consider this solution.
Without a more concrete example, you might be missing out on the single responsibility principle. Your class might be too big and doing too much.
Also, classes represent objects with state; spreading that state across multiple files makes it harder to conceptually understand that state as a whole (if it's all in the same file, it helps me to see and understand the whole object's state). My bet is that you'll have more bugs because the object's state isn't as consistent as you think it is.
- Have I asked a question more about preference and not so much best practices?
It's a bit of both. Now we're getting meta, though.
- Finally is there a name for what I am describing other than bad?
Perhaps there's a technical name, but I'd lean towards poisoned chalice.
You don't actually need to #include all .cpp files in a single one.
You can for example implement each method in a different .cpp file, then compile them independently. You will get several .o files, each exporting the method it implements. The linker will do the job of tying them together all the same.
So:
- Splitting implementations is not bad practice if done well. Don't #include them, however.
- Not if you document it well. For example, group methods together in your .h and put a comment indicating in which file they're implemented.
- The linking process, which doesn't care where a symbol comes from as long as it's exported by one object file.
- I don't think so, even if it could be aimed closer to the problem (how do I split an implementation?).
- Not that I'm aware of.
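As a sketch of that documentation convention, the header might look like this (the implementation file names in the comments are hypothetical):

```cpp
// helloClass.h — each declaration notes where it is implemented.
#ifndef HELLOCLASS_H
#define HELLOCLASS_H
#include <string>

class hello {
    std::string message;
public:
    hello();             // implemented in helloCtor.cpp
    std::string out();   // implemented in helloOut.cpp
};
#endif
```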