I want to know how to design a compiler that compiles very, very quickly.
First, let me head off some obvious misunderstandings of my question:
I am not talking about the speed of the code produced by the compiler. There are already many resources available for learning how to optimize generated code. What I'm having trouble finding is information on making the compiler fast.
I'm also not interested in a discussion of why C++ compilers are generally slower than Java compilers (for example). I'm interested in what techniques can be used to speed up the compiler for any given language.
I also don't want to hear about distributed compilation systems like Microsoft's Incredibuild or Unix's distcc. Those systems don't give you faster compilers, they just give you more compilers. Which is certainly useful, but that's not the question I'm asking. I want to know how to design a fast compiler for a single CPU.
Nor is ccache the answer I'm looking for. That's a system that allows you to avoid using the compiler at all, but it doesn't make the compiler faster. Again, that's useful; again, that's not the question I'm asking.
I hope my question is now crystal clear. But perhaps some history will make it even clearer.
C compilers used to be really slow. Then, in 1986, THINK Technologies introduced Lightspeed C for Macintosh, and it compiled programs almost instantaneously. Lightspeed C was so much faster than all the other C compilers that there was hardly any comparison. (Perhaps Lightspeed C wasn't the first of the new generation of lightning-fast compilers, but it was the first in my experience. Turbo Pascal came earlier [1983] but I had no experience with it, so I don't know how it compared, speed-wise.)
Since then, many fast compilers have been available. It seems that there was some kind of quantum leap in compiler technology in the 1980's, and that in particular is what I'm trying to understand. What was the breakthrough?
The answer may be this simple: with IDEs like Lightspeed and Turbo, the integrated editor already has the source code in RAM. If the compiler operates on that data, it eliminates disk I/O, which is the slowest part of any compilation. That's probably a very important contributor to the speed improvement, provided the source code is small relative to the available memory. (In those days, RAM sizes were much smaller, but then so were typical program sizes.)
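To make that hypothesis concrete, here is a rough sketch (in OCaml, with a made-up file name) of the difference I mean: slurp the source into RAM with one bulk read and let every later pass work on that buffer, rather than going back to disk. An integrated editor already holds that buffer, so it can skip even the single read.

```ocaml
(* Sketch: load the whole source file into memory with one bulk read,
   then let lexing, parsing and code generation all index into the
   in-memory string. The file name "main.c" is just a placeholder. *)
let read_source path =
  let ic = open_in_bin path in
  let len = in_channel_length ic in
  let src = really_input_string ic len in
  close_in ic;
  src

let () =
  let src = read_source "main.c" in
  Printf.printf "Loaded %d bytes of source into RAM\n" (String.length src)
  (* every subsequent compiler pass works on [src]; no further disk I/O *)
```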
Is that it? Or were there other important innovations involved? And have there been important improvements in compiler speed since then?
If each language has a set of grammar rules, and those rules define all the legal expressions, then there are primarily two parts to building a compiler: read a file and parse it, then build and validate an Abstract Syntax Tree from that grammar.
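As a concrete (if toy) illustration of that front-end shape, here is a minimal OCaml sketch: a token type, a tiny expression grammar, a recursive-descent parser that builds an AST, and a trivial validation step. The grammar and names are invented for the example, not taken from any real compiler.

```ocaml
(* Toy front end: tokens -> AST -> validation.
   Grammar:  expr ::= term { "+" term }
             term ::= INT | "(" expr ")"          *)

type token = INT of int | PLUS | LPAREN | RPAREN

type ast =
  | Num of int
  | Add of ast * ast

exception Syntax_error of string

(* Recursive-descent parser over a token list. *)
let rec parse_expr tokens =
  let lhs, rest = parse_term tokens in
  parse_expr_rest lhs rest

and parse_expr_rest lhs = function
  | PLUS :: rest ->
      let rhs, rest' = parse_term rest in
      parse_expr_rest (Add (lhs, rhs)) rest'
  | rest -> lhs, rest

and parse_term = function
  | INT n :: rest -> Num n, rest
  | LPAREN :: rest ->
      (match parse_expr rest with
       | e, RPAREN :: rest' -> e, rest'
       | _ -> raise (Syntax_error "expected ')'"))
  | _ -> raise (Syntax_error "expected integer or '('")

(* "Validation" here is simply: every token must be consumed. *)
let parse tokens =
  match parse_expr tokens with
  | ast, [] -> ast
  | _ -> raise (Syntax_error "trailing tokens")

let () =
  (* 1 + (2 + 3) *)
  match parse [INT 1; PLUS; LPAREN; INT 2; PLUS; INT 3; RPAREN] with
  | Add (Num 1, Add (Num 2, Num 3)) -> print_endline "parsed OK"
  | _ -> print_endline "unexpected shape"
```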
A compiler translates code written in a high-level programming language into machine code all at once, before the program runs, whereas an interpreter converts each high-level statement into machine actions one by one, while the program is running. Compiled code therefore generally runs faster than interpreted code.
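To make that distinction concrete, here is a small OCaml sketch (my own toy example, not from the answer above): the same arithmetic AST is either evaluated directly by an interpreter, or translated up front into instructions for a simple stack machine, which is "compile everything first, then run it" in miniature.

```ocaml
(* The same tiny AST, handled two ways. *)
type ast = Num of int | Add of ast * ast

(* Interpreter: walks the tree every time the program "runs". *)
let rec interpret = function
  | Num n -> n
  | Add (a, b) -> interpret a + interpret b

(* "Compiler": translates the tree once into stack-machine code. *)
type instr = Push of int | AddTop2

let rec compile = function
  | Num n -> [Push n]
  | Add (a, b) -> compile a @ compile b @ [AddTop2]

(* The stack machine that executes the compiled code. *)
let run code =
  let step stack = function
    | Push n -> n :: stack
    | AddTop2 ->
        (match stack with
         | b :: a :: rest -> (a + b) :: rest
         | _ -> failwith "stack underflow")
  in
  match List.fold_left step [] code with
  | [result] -> result
  | _ -> failwith "malformed program"

let () =
  let program = Add (Num 1, Add (Num 2, Num 3)) in
  Printf.printf "interpreted: %d, compiled+run: %d\n"
    (interpret program) (run (compile program))
```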
OCaml is a strongly-typed language (which means no core dumps and eliminates a large class of data corruption problems) which is statically typed (which means that OCaml detects type conflicts -- a large class of bugs -- at compile-time rather than at run-time, perhaps months after your application is in production) ...
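As a tiny illustration of that claim (my own example, not from the quote), the ill-typed line in the comment below is rejected by the OCaml compiler before the program can ever run, rather than failing later at run time:

```ocaml
(* Compile-time type checking: the commented-out call would be
   rejected by the compiler, long before the program runs. *)
let add_prices (a : float) (b : float) : float = a +. b

let total = add_prices 1.50 2.25
(* let broken = add_prices 1.50 "2.25"   <- type error at compile time *)

let () = Printf.printf "total = %.2f\n" total
```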
Compilers analyze and convert source code written in languages such as Java, C++, C# or Swift. They're commonly used to generate machine code or bytecode that can be executed by the target host system. Interpreters, by contrast, do not generate intermediate representation (IR) code or save any generated machine code.
Addition:
One issue is what you emit as generated code. You can sink pretty much as much compile time into optimization as you like, so straightforward code generation, even if the output looks a bit dumb, will save you time (see the sketch below). Of course, back when I used Turbo Pascal and Lightspeed C, the neat part was getting an executable conveniently, not how well optimized it was. Desktop compiler technology, at the time, was seriously behind mainframe compiler technology.
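Here is a toy OCaml sketch of what that trade-off looks like (names and grammar invented for the example): the fast path emits stack-machine code straight off the AST with no analysis at all, while the optional constant-folding pass is exactly the kind of work an optimizing compiler does and a fast compiler can simply skip.

```ocaml
(* Straightforward vs. optimizing code generation, in miniature. *)
type ast = Num of int | Add of ast * ast
type instr = Push of int | AddTop2

(* Straightforward, "dumb" code generation: one pass, no analysis. *)
let rec gen = function
  | Num n -> [Push n]
  | Add (a, b) -> gen a @ gen b @ [AddTop2]

(* Optional constant-folding pass: better code, more compile time. *)
let rec fold = function
  | Num n -> Num n
  | Add (a, b) ->
      (match fold a, fold b with
       | Num x, Num y -> Num (x + y)
       | a', b' -> Add (a', b'))

let () =
  let program = Add (Num 1, Add (Num 2, Num 3)) in
  Printf.printf "naive: %d instructions, folded: %d instructions\n"
    (List.length (gen program))
    (List.length (gen (fold program)))
```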
Another thing about Turbo Pascal and Lightspeed C was the integration. Particularly in the days before multitasking home computers, this was great. Unlike with the first C compiler I owned (for CP/M), I didn't have to make changes in an editor, close it, run the compile, run the link, and then execute. This may have been part of what you saw: fast execution of the individual steps, without the need to type in complicated commands. I can duplicate this now by running multiple terminals on a Gnome desktop: one for vim, one to run gcc, and one to execute in.
Aside from that, reducing I/O is good. Fast lexical analysis is essentially a solved problem nowadays, though it wasn't necessarily back then. I'm not sure about parsing; I last delved into that twenty years ago, so somebody else can guide you on fast parsing and such.
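On the lexing point, here is a minimal sketch of what a fast hand-written lexer amounts to: a single forward pass over an in-memory buffer, switching on the current character, with no backtracking and no re-reading of the input. The token set is invented for the example.

```ocaml
(* One-pass lexer over an in-memory string: examine each character
   once, never re-read the input, never backtrack. *)
type token = TInt of int | TPlus | TLParen | TRParen

let lex (src : string) : token list =
  let n = String.length src in
  let rec go i acc =
    if i >= n then List.rev acc
    else match src.[i] with
      | ' ' | '\t' | '\n' -> go (i + 1) acc              (* skip whitespace *)
      | '+' -> go (i + 1) (TPlus :: acc)
      | '(' -> go (i + 1) (TLParen :: acc)
      | ')' -> go (i + 1) (TRParen :: acc)
      | '0' .. '9' ->
          (* scan the whole number before moving on *)
          let j = ref i in
          while !j < n && src.[!j] >= '0' && src.[!j] <= '9' do incr j done;
          let v = int_of_string (String.sub src i (!j - i)) in
          go !j (TInt v :: acc)
      | c -> failwith (Printf.sprintf "unexpected character %C at %d" c i)
  in
  go 0 []

let () =
  match lex "1 + (23 + 4)" with
  | [TInt 1; TPlus; TLParen; TInt 23; TPlus; TInt 4; TRParen] ->
      print_endline "lexed OK"
  | _ -> print_endline "unexpected token stream"
```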