I'm writing a Google Chrome extension. As the JavaScript files are loaded from disk, their size barely matters.
I've been using the Google Closure Compiler anyway, because apparently it can make performance optimizations as well as reduce code size.
But I noticed this at the top of my output from Closure Compiler:
var i = true, m = null, r = false;
The point of this is obviously to reduce the file size: all subsequent uses of true/null/false throughout the script can be replaced by single characters.
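For illustration, here's a hand-written sketch of that aliasing trick (the alias names i/m/r mirror the compiler output quoted above; the isLoggedIn function is a made-up example, not real compiler output):

```javascript
// The one-time aliases, exactly as in the quoted Closure Compiler output.
var i = true, m = null, r = false;

// Every later occurrence of the literal is replaced by a one-letter alias:
function isLoggedIn(user) {
  if (user == m) { return r; }  // was: if (user == null) { return false; }
  return i;                     // was: return true;
}
```

Each replacement saves three or four bytes, which adds up over a large script.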
But surely there's a slight performance hit with that? It must be quicker to just read a literal true keyword than to look up a variable by name and find its value is true...?
Is this performance hit worth worrying about? And is there anything else Google Closure Compiler does that might actually slow down execution?
Explanation: The Closure Compiler is a tool for making JavaScript download and run faster. It parses your JavaScript, analyzes it, removes dead code, and rewrites and minimizes what's left. It also checks syntax, variable references, and types, and warns about common JavaScript pitfalls.
Let's look at what the Closure team says about it.
From the FAQ:
Does the compiler make any trade-off between my application's execution speed and download code size?
Yes. Any optimizing compiler makes trade-offs. Some size optimizations do introduce small speed overheads. However, the Closure Compiler's developers have been careful not to introduce significant additional runtime. Some of the compiler's optimizations even decrease runtime (see next question).
Does the compiler optimize for speed?
In most cases smaller code is faster code, since download time is usually the most important speed factor in web applications. Optimizations that reduce redundancies speed up the run time of code as well.
I flatly challenge the first assumption they've made here. The length of variable names does not directly affect how the various JavaScript engines treat the code; in fact, JS engines don't care whether you call your variable supercalifragilisticexpialidocious or x (though I, as a programmer, certainly do). Download time is the most important factor if you're worried about delivery; a slow-running script can be caused by millions of things that I suspect the tool simply cannot account for.
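If you want to convince yourself of this, here's a rough micro-benchmark sketch. Micro-benchmarks like this are noisy and engine-dependent (JIT warm-up and dead-code elimination can distort results), so treat any numbers as illustrative only:

```javascript
// Two identical loops that differ only in identifier length.
function shortNames() {
  var x = 0;
  for (var j = 0; j < 1e6; j++) { x += j; }
  return x;
}

function longNames() {
  var supercalifragilisticexpialidocious = 0;
  for (var loopCounterWithAVeryLongName = 0;
       loopCounterWithAVeryLongName < 1e6;
       loopCounterWithAVeryLongName++) {
    supercalifragilisticexpialidocious += loopCounterWithAVeryLongName;
  }
  return supercalifragilisticexpialidocious;
}

// Identifiers are resolved at parse/compile time, so the timings should be
// statistically indistinguishable.
console.time("short"); shortNames(); console.timeEnd("short");
console.time("long");  longNames();  console.timeEnd("long");
```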
To truthfully understand why your question is maybe, first thing you need to ask is "What makes JavaScript fast or slow?"
Then of course we run into the question, "What JavaScript engine are we talking about?"
We have, among others: V8 (Chrome), SpiderMonkey (Firefox), JavaScriptCore/Nitro (Safari), Chakra (IE9), JScript (older IE), and Carakan (Opera).
Does anyone here really think they all work the same? Especially JScript and V8? Heck no!
So again, when the Google Closure Compiler compiles code, which engine is it building for? Are you feeling lucky?
Okay, since we'll never cover all these bases, let's try to look more generally here at "old" vs. "new" code.
Here's a quick summary for this specific part from one of the best presentations on JS Engines I've ever seen.
Key difference here being that new engines introduce JIT compilers.
In essence, a JIT will optimize your code so that it runs faster, but if something happens that it doesn't like, it turns around and makes it slow again.
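As a sketch of the kind of type instability that can trigger that slowdown (exact behaviour varies by engine; the deoptimization described in the comments is an assumption about typical JIT behaviour, not a guarantee):

```javascript
// A function that only ever sees numbers is easy for a JIT to specialize
// into fast, type-specific machine code.
function add(a, b) { return a + b; }

// Monomorphic use: every call passes (number, number).
for (var j = 0; j < 1000; j++) { add(j, j); }

// Now the same function also sees strings. A JIT that had specialized it
// for numbers may throw that fast code away and fall back to a slower,
// generic path that handles both cases.
add("a", "b");
add(1, 2);
```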
You can see this in action with two functions like this:
var FunctionForIntegersOnly = function(int1, int2){ return int1 + int2; }
var FunctionForStringsOnly = function(str1, str2){ return str1 + str2; }
alert(FunctionForIntegersOnly(1, 2) + FunctionForStringsOnly("a", "b"));
Running that through the Closure Compiler actually simplifies the whole thing down to:
alert("3ab");
And by every metric in the book that's way faster. What really happened here is that it simplified my very simple example through a bit of partial evaluation. This is where you need to be careful, however.
Let's say we have a Y combinator in our code; the compiler turns it into something like this:
(function(a) {
  return function(b) { return a(a)(b) }
})(function(a) {
  return function(b) {
    if (b > 0) {
      return console.log(b), a(a)(b - 1)
    }
  }
})(5);
Not really faster, just minified the code.
A JIT would normally see that, in practice, your code only ever passes two strings to the one function (and two integers to the other), and returns a string (or an integer), and so it puts each on a type-specific fast path, which makes them really quick. Now, if the Closure Compiler does something strange, like merging two functions with nearly identical signatures into one (for non-trivial code), you may lose JIT speed because the compiler produced something the JIT doesn't like.
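A sketch of that hazard, assuming a size-oriented compiler merged two type-specific functions into one (addInts, addStrings, and addAny are hypothetical names for illustration, not actual Closure Compiler output):

```javascript
// Before: two functions, each with perfectly stable input types.
function addInts(a, b)    { return a + b; }  // only ever called with numbers
function addStrings(a, b) { return a + b; }  // only ever called with strings

// After a hypothetical merge: one smaller function whose call sites now
// feed mixed types through a single body. Some JITs optimize a polymorphic
// function like this less aggressively than two monomorphic ones.
function addAny(a, b) { return a + b; }

console.log(addInts(1, 2), addStrings("a", "b"), addAny(1, "b"));
```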
In general, your code is faster. You may introduce things that various JIT compilers don't like, but they're going to be rare if your code uses small functions and sound prototypical object-oriented design. If you think about the full scope of what the compiler is optimizing (shorter download AND faster execution), then strange things like var i = true, m = null, r = false;
may be a worthwhile trade-off: even if those alias lookups run slightly slower, the total lifespan of the page (download plus execution) is faster.
It's also worth noting that the most common bottleneck in web-app execution is the Document Object Model (DOM), and I suggest you put more effort there if your code is slow.
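For example, one classic DOM optimization is batching writes through a DocumentFragment so the browser reflows once instead of once per insertion. Here's a sketch; the `document` stub below is a made-up stand-in so the snippet runs outside a browser, not a real DOM implementation:

```javascript
// Minimal stand-in for the browser's document (assumption for runnability).
const document = {
  createElement: (tag) => ({
    tag, textContent: "", children: [],
    appendChild(c) { this.children.push(c); },
  }),
  createDocumentFragment: () => ({
    children: [],
    appendChild(c) { this.children.push(c); },
  }),
};

const list = document.createElement("ul");

// Slow pattern: appending to a live node inside the loop can force the
// browser to recalculate layout on every iteration.
// Faster pattern: build everything in a detached fragment, attach once.
const fragment = document.createDocumentFragment();
for (let n = 0; n < 100; n++) {
  const item = document.createElement("li");
  item.textContent = "item " + n;
  fragment.appendChild(item);
}
list.appendChild(fragment);  // single attachment => at most one reflow
```

In a real browser, appending the fragment moves its children into the list in one operation; the stub above only approximates that shape.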