The question is NOT about the Linux kernel. It is NOT a C vs. C++ debate either.
I did some research, and it seems to me that C++ lacks tool support when it comes to exception handling and memory allocation for embedded systems:
Why is the Linux kernel not implemented in C++? Besides the accepted answer, see also Ben Collins' answer.
Linus Torvalds on C++:
"[...] anybody who designs his kernel modules for C++ is [...]
(b) a C++ bigot that can't see what he is writing is really just C anyway"" - the whole C++ exception handling thing is fundamentally broken. It's especially broken for kernels.
- any compiler or language that likes to hide things like memory allocations behind your back just isn't a good choice for a kernel."
JOINT STRIKE FIGHTER AIR VEHICLE C++ CODING STANDARDS:
"AV Rule 208 C++ exceptions shall not be used"
Are exception handling and memory allocation the only points where C++ apparently lacks tool support (in this context)?
To fix the exception handling issue, does one have to provide a bound on the time until the exception is caught after it is thrown?
Could you please explain why memory allocation is an issue? How can one overcome this issue; what has to be done?
As I see it, in both cases one has to provide a compile-time upper bound on something nontrivial that happens and depends on run-time behavior.
Answer:
No, dynamic casts were also an issue, but that has been solved.
Basically yes. The time needed to handle exceptions has to be bounded by analyzing all the throw paths.
See the solution on the slides "How to live without new" in Embedded Systems Programming. In short: pre-allocate (global objects, stacks, pools), as sketched below.
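A minimal sketch of that idea (Packet, pool_acquire, and pool_release are invented names for illustration): all storage is reserved statically, and "allocation" just hands out slots from a fixed-size pool, so nothing like new or malloc runs after start-up and the failure mode is bounded.

#include <cstddef>
#include <new>   // placement new

struct Packet { unsigned char payload[256]; };

// Storage reserved up front; no heap allocation ever happens after start-up.
alignas(Packet) static unsigned char pool_storage[16 * sizeof(Packet)];
static bool pool_used[16] = {};

// Hand out one pre-allocated slot, or nullptr if the pool is exhausted.
Packet* pool_acquire()
{
    for (std::size_t i = 0; i < 16; ++i) {
        if (!pool_used[i]) {
            pool_used[i] = true;
            return new (pool_storage + i * sizeof(Packet)) Packet{};
        }
    }
    return nullptr;   // bounded failure mode: the caller handles exhaustion
}

// Return a slot to the pool; no free() involved.
void pool_release(Packet* p)
{
    p->~Packet();
    std::size_t i = static_cast<std::size_t>(
        reinterpret_cast<unsigned char*>(p) - pool_storage) / sizeof(Packet);
    pool_used[i] = false;
}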
Well, there are a couple of things. First, you have to remember that the STL is built entirely on OS routines, the C standard library, and dynamic allocation. When you're writing a kernel, there is no dynamic memory allocation for you (you're providing it), there is no C standard library (you have to provide one built on top of your kernel), and you are providing the system calls. Then there is the fact that C interoperates very well and easily with assembly, whereas C++ is very difficult to interface with assembly because the ABI isn't necessarily constant, nor are names. Because of name mangling, you get a whole new level of complication.
Then, you have to remember that when you are building an OS, you need to know and control every aspect of the memory used by the kernel. In C++, there are quite a few hidden structures that you have no control over (vtables, RTTI, exceptions) that would severely interfere with your work.
In other words, what Linus is saying is that with C, you can easily understand the assembly being generated, and it is simple enough to run directly on the machine. Although C++ can too, you will always have to set up quite a bit of context and still write some C to interface between the assembly and the C++. Another reason is that in systems programming, you need to know exactly how functions are being called. C has the very well documented C calling conventions, but in C++ you have name mangling, varying calling conventions, and so on to deal with.
In short, it's because C++ does things without you asking.
Per @Josh's comment below, another thing C++ does behind your back is constructors and destructors. They add overhead to entering and exiting stack frames and, most of all, make assembly interop even harder, because when you destroy a C++ stack frame you have to call the destructor of every object in it. This gets ugly quickly.
Why do certain kernels refuse C++ code in their code base? Politics and preference, but I digress.
Some parts of modern OS kernels are written in certain subsets of C++. In these subsets mainly exceptions and RTTI are disabled (sometimes multiple inheritance and templates are disallowed, too).
The same is true in C: certain features should not be used in a kernel environment (e.g. VLAs).
Outside of exceptions and RTTI, certain C++ features are heavily critiqued when we are talking about kernel code (or embedded code): vtables and constructors/destructors. They bring in a bit of code under the hood, and that seems to be deemed 'bad'. If you don't want a constructor, then don't implement one. If you worry about using a class with a constructor, then worry just as much about a function you have to call to initialize a struct. The upside in C++ is that you cannot really forget to run a dtor, short of forgetting to deallocate the memory.
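To make that comparison concrete, here is a rough sketch (buffer_c, buffer_init, buffer_cleanup, and Buffer are invented names): the constructor/destructor pair does exactly the work of an explicit init/cleanup pair in C, the only difference being that the compiler guarantees the cleanup runs at scope exit.

// C style: the caller must remember to call both functions.
struct buffer_c { unsigned char* data; unsigned long size; };

void buffer_init(buffer_c* b, unsigned char* backing, unsigned long n)
{
    b->data = backing;
    b->size = n;
}

void buffer_cleanup(buffer_c* b)
{
    b->data = nullptr;
    b->size = 0;
}

// C++ style: the same work, but the destructor runs automatically when
// the object goes out of scope, so the cleanup cannot be forgotten.
class Buffer {
public:
    Buffer(unsigned char* backing, unsigned long n) : data_(backing), size_(n) {}
    ~Buffer() { data_ = nullptr; size_ = 0; }
private:
    unsigned char* data_;
    unsigned long  size_;
};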
But what about vtables?
When you implement an object which contains extension points (e.g. a Linux filesystem driver), you implement something like a class with virtual methods. So why is it so bad to have a vtable? You do have to control the placement of the vtable when you have requirements about which pages it resides in. As far as I recall, this is irrelevant for Linux, but under Windows code pages can be paged out, and if you call a paged-out function from too high an IRQL, you crash. But you really have to watch which functions you call when you are at a high IRQL, whatever kind of function it is. And you don't need to worry if you don't use a virtual call in this context. In embedded software this could be worse, because (very seldom) you need to directly control which code page your code goes into, but even there you can influence what your linker does.
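For comparison, a rough sketch of both flavours of the extension-point pattern (fs_ops and FsDriver are invented names; the Linux file_operations struct is the real-world analogue of the first): the C version is a hand-written table of function pointers, and the C++ vtable is the same table built and placed by the compiler.

// C style: an explicit table of function pointers, filled in by each driver.
struct fs_ops {
    int (*open)(const char* path);
    int (*read)(int fd, void* buf, unsigned long n);
};

// C++ style: the compiler generates and places the equivalent table (the vtable).
class FsDriver {
public:
    virtual int open(const char* path) = 0;
    virtual int read(int fd, void* buf, unsigned long n) = 0;
    virtual ~FsDriver() = default;
};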
So why are so many people so adamant about 'use C in the kernel'?
Because they either got burned by a toolchain problem, or got burned by overenthusiastic developers using the latest stuff in kernel mode.
Maybe the kernel-mode developers are rather conservative, and C++ is a too newfangled thing ...
Why are exceptions not used in kernel-mode code?
Because they require generating some code per function, they introduce complexity into every code path, and an unhandled exception is fatal for a kernel-mode component: it kills the system.
In C++, when an exception is thrown, the stack must be unwound and the corresponding destructors must be called. This involves at least a bit of overhead. It is mostly negligible, but it does incur a cost, which may not be something you want. (Note that I do not know how much a table-based unwind actually costs; I think I read that there is no cost when no exception is in flight, but I would have to look it up.)
A code path that cannot throw exceptions can be reasoned about much more easily than one that may. So:
// g, f3, and h are assumed to be defined elsewhere.
int g();
void f3();
int h();

int f( int a )
{
    if( a == 0 )
        return -1;
    if( g() < 0 )
        return -2;
    f3();
    return h();
}
We can reason about every exit path in this function because we can easily see all the returns; but when exceptions are enabled, the functions called may throw, and we cannot guarantee what path the function actually takes. This is exactly the point: the code may do something we cannot see at a glance. (This is bad C++ code when exceptions are enabled.)
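For contrast, a sketch of the same shape of function with exceptions enabled (g_throwing is an invented name): the call introduces an exit path that no return statement reveals.

int g_throwing();   // may throw; nothing at the call site says so

int f_ex(int a)
{
    if (a == 0)
        return -1;
    // Hidden exit path: if g_throwing() throws, the exception unwinds
    // straight through f_ex without touching any visible return.
    int v = g_throwing();
    return v;
}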
The third point is that you want user-mode applications to crash when something unexpected occurs (e.g. when memory runs out); a user-mode application should crash (after freeing resources) to let the developer debug the problem, or at least produce a good error message. You should never have an uncaught exception in a kernel-mode module.
Note that all of this can be overcome; there are SEH exceptions in the Windows kernel, so points 2 and 3 are not really strong arguments in the NT kernel's case.
There are no memory management problems with C++ in the kernel. For example, the NT kernel headers provide overloads for new and delete which let you specify the pool type of your allocation, but which are otherwise exactly the same as the new and delete in a user-mode application.
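A rough sketch of what such an overload can look like (pool_type, kernel_pool_alloc, and kernel_pool_free are invented placeholders, not the actual NT declarations): the extra parameter selects the pool, and otherwise the operators behave like ordinary new and delete.

#include <cstddef>
#include <cstdlib>

enum class pool_type { paged, non_paged };

// Stand-ins for the kernel's real pool allocator; malloc/free are used here
// only so the sketch compiles and runs in user mode.
void* kernel_pool_alloc(pool_type, std::size_t size) { return std::malloc(size); }
void  kernel_pool_free(void* p)                      { std::free(p); }

// Placement-style overload: `new (pool_type::non_paged) Widget()` picks the pool.
void* operator new(std::size_t size, pool_type type)
{
    return kernel_pool_alloc(type, size);
}

// Matching delete, called automatically if the constructor throws.
void operator delete(void* p, pool_type) noexcept
{
    kernel_pool_free(p);
}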
I don't really like language wars, and have voted to close this again. But anyway...
Well, there are a couple of things. First, you have to remember that the STL is built entirely on OS routines, the C standard library, and dynamic allocation. When you're writing a kernel, there is no dynamic memory allocation for you (you're providing it), there is no C standard library (you have to provide one built on top of your kernel), and you are providing the system calls. Then there is the fact that C interoperates very well and easily with assembly, whereas C++ is very difficult to interface with assembly because the ABI isn't necessarily constant, nor are names. Because of name mangling, you get a whole new level of complication.
No, with C++ you can declare functions as having an extern "C" (or optionally extern "assembly") calling convention. That makes the names compatible with everything else on the same platform.
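For instance (kernel_panic is an invented name), a function declared with C linkage keeps its plain, unmangled symbol name, so hand-written assembly or C code can call it directly:

// With C linkage, the symbol is plain `kernel_panic` rather than a mangled
// C++ name, so an assembly stub can simply `call kernel_panic`.
extern "C" void kernel_panic(const char* reason)
{
    (void)reason;   // ... log the reason and halt the machine
}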
Then, you have to remember that when you are building an OS, you need to know and control every aspect of the memory used by the kernel. In C++, there are quite a few hidden structures that you have no control over (vtables, RTTI, exceptions) that would severely interfere with your work.
You have to be careful when coding kernel features, but that is not limited to C++. Sure, you cannot use std::vector<byte> as the base for your memory allocation, but neither can you use malloc for that. You don't have to use virtual functions, multiple inheritance, and dynamic allocation for all C++ classes, do you?
In other words, what Linus is saying is that with C, you can easily understand the assembly being generated, and it is simple enough to run directly on the machine. Although C++ can too, you will always have to set up quite a bit of context and still write some C to interface between the assembly and the C++. Another reason is that in systems programming, you need to know exactly how functions are being called. C has the very well documented C calling conventions, but in C++ you have name mangling, varying calling conventions, and so on to deal with.
Linus is possibly claiming that he can spot every call to f(x) and immediately see that it is calling g(x), h(x), and q(x) 20 levels deep. Still, MyClass M(x); is a great mystery, as it might be calling some unknown code behind his back. Lost me there.
In short, it's because C++ does things without you asking.
How? If I write a constructor and a destructor for a class, it is because I am asking for the code to be executed. Don't tell me that C can magically copy an object without executing some code!
Per @Josh's comment below, another thing C++ does behind your back is constructors and destructors. They add overhead to entering and exiting stack frames and, most of all, make assembly interop even harder, because when you destroy a C++ stack frame you have to call the destructor of every object in it. This gets ugly quickly.
Constructors and destructors do not add code behind your back; they are only there if needed. Destructors are called only when required, for example when dynamic memory needs to be deallocated. Don't tell me that C code works without this.
One reason for the lack of C++ support in both Linux and Windows is that a lot of the guys working on the kernels have been doing this since long before C++ was available. I have seen posts from the Windows kernel developers arguing that C++ support isn't really needed, as there are very few device drivers written in C++. Catch-22!
Are the exception handling and the memory allocation the only points where C++ apparently lacks tool support (in this context)?
In places where this is not properly handled, just don't use it. You don't have to use multiple inheritance, dynamic allocation, and throwing exceptions everywhere. If returning an error code works, fine. Do that!
To fix the exception handling issue, one has to provide bound on the time till the exception is caught after it is thrown?
No, but you just cannot use application-level features in the kernel. Implementing dynamic memory allocation using a std::vector<byte> isn't a good idea, but who would really try that?
Could you please explain me why the memory allocation is an issue? How can one overcome this issue, what has to be done?
Using standard library features that depend on memory allocation in a layer below the functions implementing the memory management would be a problem. Implementing malloc using calls to malloc would be just as silly. But who would try that?