Would a cloud-based compiler be feasible? [closed]

Would there be any practical benefit to writing a cloud-based compiler that would spread compiled units of code across different machines in the cloud? Could there be a benefit to obtaining a software-as-a-service architecture within the app right after compiling, or would the inherent latency make such an approach impractical?

asked Oct 02 '09 by luvieere


3 Answers

I'm not sure if I've misunderstood your point or if the other answers have. Are you talking about some sort of automatic parallelisation task? The answers given so far appear to be talking about distributed compilation - i.e. using a cloud to speed up compilation times. I assumed you were instead talking about a compiler that targets cloud computing resources.

If you were in fact talking about distributed compilation, then obviously things like distcc will do what you need.
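For the distributed-compilation reading, here is a minimal sketch of what that looks like in practice (the hostnames are placeholders, not real machines):

    # Tell distcc which machines run its daemon; "build1" and
    # "build2" are hypothetical hosts on the local network.
    export DISTCC_HOSTS="localhost build1 build2"

    # Route compilations through distcc and let make run enough
    # parallel jobs to keep the remote machines busy.
    make -j8 CC="distcc gcc" CXX="distcc g++"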

If you were asking the much more interesting (IMHO) question of whether a compiler that targets distributed architectures would be useful, my answer is a resounding 'yes'. However, feasibility is at the heart of the problem. Latency is not the problem as such; rather, coherence (i.e. ensuring that the correct versions of all the units are in place) and having decent heuristics for distributing the work would be the issues.

The best place to look would probably be the Occam programming language - it targeted the transputer, which was not entirely dissimilar to the kinds of distributed systems architectures we're interested in these days. I believe there is some work that follows on from Occam that might provide useful clues as to what the state of the art is.

answered Nov 02 '22 by Gian


I've used such a system, although it worked on a local cluster rather than a cloud; the principle would be exactly the same. Unfortunately I can't remember what it was called, but it was cool watching your source files get farmed out to the other PCs in your department.

Edit: It was called IncrediBuild.

answered Nov 02 '22 by Mark Ransom


You can use distcc and make -j for distributed compilation of most typical Unix code. If you regularly compile big chunks of code, it can give you big speedups; AFAIK the Samba (free SMB implementation) developers use it for this. distcc distributes only the compilation phase, leaving preprocessing and linking to the master machine.
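As a rough sketch of how this is usually wired up (hostnames and slot counts here are hypothetical), distcc's host list can cap how many jobs each machine accepts, and the -j value is typically sized to the total number of slots:

    # "/N" caps the number of concurrent jobs sent to each host;
    # preprocessing and linking still happen on the master.
    export DISTCC_HOSTS="localhost/2 build1/4 build2/4"

    # Size -j to roughly the total slot count (2 + 4 + 4 = 10).
    make -j10 CC=distcc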

Interaction with "the cloud" might introduce latency, but I still think it could be very useful for more complicated C++ code. I'd guess that if you have more than 100 compilation units (e.g. .cpp files), you could see a noticeable speedup.

answered Nov 02 '22 by liori