 

Does Google Closure Compiler ever decrease performance?

I'm writing a Google Chrome extension. As the JavaScript files are loaded from disk, their size barely matters.

I've been using Google Closure Compiler anyway, because apparently it can make performance optimizations as well as reducing code size.

But I noticed this at the top of my output from Closure Compiler:

var i = true, m = null, r = false; 

The point of this is obviously to reduce the filesize (all subsequent uses of true/null/false throughout the script can be replaced by single characters).

But surely there's a slight performance hit with that? It must be quicker to just read a literal true keyword than look up a variable by name and find its value is true...?
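The aliased form is behaviorally identical to the literals; if you want to see whether the extra name lookup costs anything, you can measure it yourself. The functions and timing loop below are my own sketch, not anything the compiler emits:

```javascript
// Aliased constants, as emitted by Closure Compiler:
var i = true, m = null, r = false;

function literalCheck(x) { return x === true; } // reads the literal keyword
function aliasedCheck(x) { return x === i; }    // reads the alias variable

// Both produce the same results:
console.log(literalCheck(true));  // true
console.log(aliasedCheck(true));  // true

// A rough timing comparison (results vary wildly by engine):
var t0 = Date.now();
for (var k = 0; k < 1e7; k++) literalCheck(k % 2 === 0);
var literalMs = Date.now() - t0;

t0 = Date.now();
for (var k2 = 0; k2 < 1e7; k2++) aliasedCheck(k2 % 2 === 0);
var aliasMs = Date.now() - t0;

console.log("literal:", literalMs + "ms", "alias:", aliasMs + "ms");
```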

Is this performance hit worth worrying about? And is there anything else Google Closure Compiler does that might actually slow down execution?

asked Nov 09 '11 by callum



1 Answer

The answer is maybe.

Let's look at what the Closure team says about it.

From the FAQ:

Does the compiler make any trade-off between my application's execution speed and download code size?

Yes. Any optimizing compiler makes trade-offs. Some size optimizations do introduce small speed overheads. However, the Closure Compiler's developers have been careful not to introduce significant additional runtime. Some of the compiler's optimizations even decrease runtime (see next question).

Does the compiler optimize for speed?

In most cases smaller code is faster code, since download time is usually the most important speed factor in web applications. Optimizations that reduce redundancies speed up the run time of code as well.

I flatly challenge the first assumption they've made here. The length of variable names does not directly affect how the various JavaScript engines treat the code; in fact, JS engines don't care whether you call your variable supercalifragilisticexpialidocious or x (though I, as a programmer, certainly do). Download time is the most important factor if you're worried about delivery, but a slow-running script can be caused by a million things that I suspect the tool simply cannot account for.

To really understand why the answer is maybe, the first thing you need to ask is: "What makes JavaScript fast or slow?"

Then of course we run into the question, "What JavaScript engine are we talking about?"

We have:

  • Carakan (Opera)
  • Chakra (IE9+)
  • SpiderMonkey (Mozilla/Firefox)
  • SquirrelFish (Apple's WebKit)
  • V8 (Chrome)
  • Futhark (Opera)
  • JScript (all versions of IE before 9)
  • JavaScriptCore (Konqueror, Safari)
  • (and I've skipped a few)

Does anyone here really think they all work the same? Especially JScript and V8? Heck no!

So again, when Google Closure compiles code, which engine is it building for? Are you feeling lucky?

Okay, since we'll never cover all these bases, let's look more generally at "old" vs. "new" engines.

Here's a quick summary for this specific part from one of the best presentations on JS Engines I've ever seen.

Older JS engines

  • Code is interpreted and compiled directly to byte code
  • No optimization: you get what you get
  • Code is hard to run fast because the language is loosely typed

New JS Engines

  • Introduce Just-In-Time (JIT) compilers for fast execution
  • Introduce type-optimizing JIT compilers for really fast code (think near C code speeds)

Key difference here being that new engines introduce JIT compilers.

In essence, a JIT optimizes your code so that it runs faster, but if something happens that it doesn't like, it deoptimizes and makes the code slow again.

Consider two type-specific functions like these:

var FunctionForIntegersOnly = function(int1, int2) {
    return int1 + int2;
};

var FunctionForStringsOnly = function(str1, str2) {
    return str1 + str2;
};

alert(FunctionForIntegersOnly(1, 2) + FunctionForStringsOnly("a", "b"));

Running that through Google Closure actually simplifies the whole thing down to:

alert("3ab"); 

And by every metric in the book, that's way faster. What really happened is that the compiler collapsed my very simple example through a bit of partial evaluation. This is where you need to be careful, however.
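You can check that the folded constant really is equivalent to the original expression (this sanity check is my own, not part of the compiler's output):

```javascript
var FunctionForIntegersOnly = function(int1, int2) { return int1 + int2; };
var FunctionForStringsOnly = function(str1, str2) { return str1 + str2; };

// The unoptimized expression: 1 + 2 gives 3, then 3 + "ab" coerces to a string...
var original = FunctionForIntegersOnly(1, 2) + FunctionForStringsOnly("a", "b");

// ...which is exactly what Closure folded it to at compile time:
console.log(original === "3ab"); // true
```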

Let's say we have a Y-combinator in our code; the compiler turns it into something like this:

(function(a) {
  return function(b) {
    return a(a)(b);
  };
})(function(a) {
  return function(b) {
    if (b > 0) {
      return console.log(b), a(a)(b - 1);
    }
  };
})(5);

Not really faster; it just minified the code.

A JIT would normally see that, in practice, your code only ever passes two strings to that function and returns a string (or two integers and an integer for the first function), and would therefore compile it with the type-specific JIT, which makes it really quick. Now, if Google Closure does something strange, like merging two functions with nearly identical signatures into one (for non-trivial code), you may lose JIT speed because the compiled output does something the JIT doesn't like.
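To illustrate the risk, here's a sketch of mine (the merged function is hypothetical, not actual Closure output):

```javascript
// Two monomorphic functions: each one only ever sees a single type,
// so a type-specializing JIT can compile one fast path per function.
function addInts(a, b) { return a + b; }  // only ever called with numbers
function addStrs(a, b) { return a + b; }  // only ever called with strings

// If a compiler merged them into one function, that single target now
// sees both numbers and strings and becomes polymorphic, so the JIT
// can no longer commit to a single type-specialized fast path:
function addAnything(a, b) { return a + b; }
console.log(addAnything(1, 2));     // number path
console.log(addAnything("a", "b")); // string path, same function, mixed type feedback
```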

So, what did we learn?

  • You might have JIT-optimized code, but the compiler reorganizes your code into something else
  • Old browsers don't have a JIT but still run your code
  • Closure-compiled JS makes fewer function calls by partially executing your simple functions

So what do you do?

  • Write small, to-the-point functions; the compiler can deal with them better
  • If you have a very deep understanding of JIT behavior and have hand-optimized your code using that knowledge, the Closure Compiler may not be worthwhile to use
  • If you want the code to run a bit faster on older browsers, it's an excellent tool
  • The trade-offs are generally worthwhile, but be careful to check things over rather than blindly trusting it all the time

In general, your code will be faster. You may introduce things that various JIT compilers don't like, but they will be rare if your code uses small functions and proper prototypal object-oriented design. If you consider the full scope of what the compiler is doing (shorter download and faster execution), then oddities like var i = true, m = null, r = false; can be a worthwhile trade-off: even if the aliases run marginally slower, the total lifespan (download plus execution) comes out faster.

It's also worth noting that the most common bottleneck in web-app execution is the Document Object Model (DOM), so if your code is slow, I suggest you put more effort there.

answered Oct 05 '22 by Incognito