JavaScript Performance: Multiple variables or one object?

This is just a simple performance question to help me understand the JavaScript engine. I was wondering: what is faster, declaring multiple variables for certain values, or using one object containing multiple values?

example:

var x = 15;
var y = 300;

vs.

var sizes = { x: 15, y: 300 }; 

This is just a very simple example and could of course differ in a real project. Does this even matter?

asked Jan 09 '12 by misantronic



2 Answers

A complete answer to that question would be really long, so I'll try to explain only a few things. First, and maybe the most important fact: even if you declare a variable with var, it depends where you do that. In the global scope, you implicitly also write that variable into an object, which most browsers call window. For instance:

// global scope
var x = 15;

console.log( window.x ); // 15

If we do the same thing within the context of a function, things change. There, the variable name is written into the function's so-called 'Activation Object': an internal object which the JS engine handles for you. All formal parameters, function declarations and variables are stored there.
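For instance, a minimal sketch (the function name is made up, and it assumes no global x has been declared elsewhere):

function compute() {
    var x = 15;              // stored in the function's Activation Object / environment record
    console.log( x );        // 15
    console.log( window.x ); // undefined, because nothing was written to the global object
}
compute();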

Now, to answer your actual question: within the context of a function, access to variables declared with var is always the fastest possible. This is not necessarily true in the global context. The global object is very huge, and it is not really fast to access anything within it.

If we store things within an object, it is still very fast, but not as fast as variables declared with var. In particular, the access times do increase. Nonetheless, we are talking about micro- and nanoseconds here (in modern browser implementations). Older browsers, especially IE6/7, have huge performance penalties when accessing object properties.
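As a rough way to see this for yourself, here is a minimal sketch using the sizes object from the question (the loop count and timer labels are arbitrary, and a modern JIT may optimize either loop heavily, so treat any numbers as an illustration rather than a benchmark):

(function () {
    'use strict';

    var sizes = { x: 15, y: 300 }; // packed into one object
    var x = 15, y = 300;           // plain local variables
    var sum = 0, i;

    console.time('object property access');
    for (i = 0; i < 1e7; i = i + 1) {
        sum = sum + sizes.x + sizes.y;
    }
    console.timeEnd('object property access');

    console.time('local variable access');
    for (i = 0; i < 1e7; i = i + 1) {
        sum = sum + x + y;
    }
    console.timeEnd('local variable access');

    console.log(sum); // keep the result alive so the loops are not dead code
})();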

If you are really interested in stuff like this, I highly recommend the book 'High Performance JavaScript' by Nicholas C. Zakas. He measured lots of different techniques for accessing and storing data in ECMAScript.

Again, the performance difference between object lookups and variables declared with var is almost not measurable in modern browsers. Older browsers like FF3 or IE6 do show fundamentally slow performance for object lookups/access.

answered Sep 30 '22 by jAndy


foo_bar is always faster than foo.bar in every modern browser (IE11+/Edge and any version of Chrome, Firefox, and Safari) and in Node.js, so long as you see performance holistically (which I recommend you should). After millions of iterations in a tight loop, foo.bar may approach (but never surpass) the same ops/s as foo_bar thanks to a wealth of correct branch predictions. Notwithstanding, foo.bar incurs far more overhead during both JIT compilation and execution because it is a much more complex operation. JavaScript that features no tight loops benefits even more from using foo_bar because, in comparison, foo.bar has a much higher overhead-to-savings ratio: extra work is spent in the JIT on foo.bar just to make it a little faster in a few places. Granted, all JIT engines intelligently try to guess how much optimization effort to spend where in order to minimize needless overhead, but there is still a baseline cost to processing foo.bar that can never be optimized away.

Why? JavaScript is a highly dynamic language in which there is costly overhead associated with every object. It was originally a tiny scripting language executed line by line, and it still exhibits line-by-line execution behavior (it is not executed line by line anymore, but, for example, one can do something evil like var a=10;eval('a=20');console.log(a) to log the number 20). JIT compilation is highly constrained by the fact that JavaScript must observe this line-by-line behavior. Not everything can be anticipated by the JIT, so all code must be slow enough that extraneous code such as the snippets below still runs fine.
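Before those snippets, here is that eval example as a standalone, non-strict snippet (in strict mode the eval gets its own scope, so the behavior shown is specific to sloppy mode):

var a = 10;
eval('a = 20');   // the engine cannot assume that a is untouched after this call
console.log(a);   // 20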

(function() {"use strict"; // chronological optimization is very poor because it is so complicated and volatile var setTimeout=window.setTimeout; var scope = {}; scope.count = 0; scope.index = 0; scope.length = 0;  function increment() {  // The code below is SLOW because JIT cannot assume that the scope object has not changed in the interum  for (scope.index=0, scope.length=17; scope.index<scope.length; scope.index=scope.index+1|0)    scope.count = scope.count + 1|0;  scope.count = scope.count - scope.index + 1|0; }  setTimeout(function() {   console.log( scope ); }, 713);  for(var i=0;i<192;i=i+1|0)   for (scope.index=11, scope.length=712; scope.index<scope.length; scope.index=scope.index+1|0)     setTimeout(increment, scope.index); })();

(function() {"use strict"; // chronological optimization is very poor because it is so complicated and volatile var setTimeout=window.setTimeout; var scope_count = 0; var scope_index = 0; var scope_length = 0;  function increment() {  // The code below is FAST because JIT does not have to use a property cache  for (scope_index=0, scope_length=17; scope_index<scope_length; scope_index=scope_index+1|0)    scope_count = scope_count + 1|0;  scope_count = scope_count - scope_index + 1|0; }  setTimeout(function() {   console.log({     count: scope_count,     index: scope_index,     length: scope_length   }); }, 713);  for(var i=0;i<192;i=i+1|0)   for (scope_index=4, scope_length=712; scope_index<scope_length; scope_index=scope_index+1|0)     setTimeout(increment, scope_index); })();

Performing a one-sample z-interval by running each code snippet above 30 times and seeing which one gave a higher count, I am 90% confident that the latter code snippet with pure variable names is faster than the first code snippet with object access between 76.5% and 96.9% of the time. As another way to analyze the data, there is a 0.0000003464% chance that the data I collected was a fluke and the first snippet is actually faster. Thus, I believe it is reasonable to infer that foo_bar is faster than foo.bar because there is less overhead.
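For readers who want to see how such an interval comes about, here is a minimal sketch of the one-sample z-interval computation; the win count of 26 out of 30 runs is an assumption chosen because it reproduces the stated bounds, not the author's actual raw data:

// Rough reconstruction of the one-sample z-interval for a proportion
var n = 30;              // runs of each snippet
var wins = 26;           // assumed number of runs where the variable version won
var p = wins / n;        // sample proportion, about 0.867
var z = 1.645;           // z* for a 90% confidence level
var halfWidth = z * Math.sqrt(p * (1 - p) / n);

console.log((p - halfWidth).toFixed(3)); // about 0.765
console.log((p + halfWidth).toFixed(3)); // about 0.969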

Don't get me wrong. Hash maps are very fast because many engines feature advanced property caches, but there will still always be some extra overhead when using them. Observe.

(function(){"use strict"; // wrap in iife  // This is why you should not pack variables into objects var performance = window.performance;   var iter = {}; iter.domino = -1; // Once removed, performance topples like a domino iter.index=16384, iter.length=16384; console.log(iter);   var startTime = performance.now();  // Warm it up and trick the JIT compiler into false optimizations for (iter.index=0, iter.length=128; iter.index < iter.length; iter.index=iter.index+1|0)   if (recurse_until(iter, iter.index, 0) !== iter.domino)     throw Error('mismatch!');  // Now that its warmed up, drop the cache off cold and abruptly for (iter.index=0, iter.length=16384; iter.index < iter.length; iter.index=iter.index+1|0)   if (recurse_until(iter, iter.index, 0) !== iter.domino)     throw Error('mismatch!');  // Now that we have shocked JIT, we should be running much slower now for (iter.index=0, iter.length=16384; iter.index < iter.length; iter.index=iter.index+1|0)   if (recurse_until(iter, iter.index, 0) !== iter.domino)     throw Error('mismatch!');  var endTime=performance.now();  console.log(iter); console.log('It took ' + (endTime-startTime));  function recurse_until(obj, _dec, _inc) {   var dec=_dec|0, inc=_inc|0;   var ret = (     dec > (inc<<1) ? recurse_until(null, dec-1|0, inc+1|0) :     inc < 384 ? recurse_until :     // Note: do not do this in production. Dynamic code evaluation is slow and     //  can usually be avoided. The code below must be dynamically evaluated to     //  ensure we fool the JIT compiler.     recurse_until.constructor(       'return function(obj,x,y){' +           // rotate the indices           'obj.domino=obj.domino+1&7;' +           'if(!obj.domino)' +           'for(var key in obj){' +               'var k=obj[key];' +               'delete obj[key];' +               'obj[key]=k;' +               'break' +           '}' +           'return obj.domino' +       '}'     )()   );   if (obj === null) return ret;      recurse_until = ret;   return obj.domino; }  })();

For a performance comparison, observe pass-by-reference via an array and local variables.

// This is the correct way to write blazingly fast code
(function(){"use strict"; // wrap in iife

var performance = window.performance;

var iter_domino = [0,0,0]; // Now, domino is a pass-by-reference list
var iter_index = 16384, iter_length = 16384;

var startTime = performance.now();

// Warm it up and trick the JIT compiler into false optimizations
for (iter_index = 0, iter_length = 128; iter_index < iter_length; iter_index = iter_index + 1|0)
    if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
        throw Error('mismatch!');

// Now that it's warmed up, drop the cache off cold and abruptly
for (iter_index = 0, iter_length = 16384; iter_index < iter_length; iter_index = iter_index + 1|0)
    if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
        throw Error('mismatch!');

// Now that we have shocked the JIT, we should be running much slower
for (iter_index = 0, iter_length = 16384; iter_index < iter_length; iter_index = iter_index + 1|0)
    if (recurse_until(iter_domino, iter_index, 0)[0] !== iter_domino[0])
        throw Error('mismatch!');

var endTime = performance.now();

console.log('It took ' + (endTime - startTime));

function recurse_until(iter_domino, _dec, _inc) {
    var dec = _dec|0, inc = _inc|0;
    var ret = (
        dec > (inc << 1) ? recurse_until(null, dec - 1|0, inc + 1|0) :
        inc < 384 ? recurse_until :
        // Note: do not do this in production. Dynamic code evaluation is slow and
        //  can usually be avoided. The code below must be dynamically evaluated to
        //  ensure we fool the JIT compiler.
        recurse_until.constructor(
            'return function(iter_domino, x,y){' +
                // rotate the indices
                'iter_domino[0]=iter_domino[0]+1&7;' +
                'if(!iter_domino[0])' +
                'iter_domino.push( iter_domino.shift() );' +
                'return iter_domino' +
            '}'
        )()
    );
    if (iter_domino === null) return ret;

    recurse_until = ret;
    return iter_domino;
}

})();

JavaScript is very different from other languages in that benchmarks can easily become a performance sin when misused. What really matters is what should, in theory, run fastest once you account for everything in JavaScript. The browser you are running your benchmark in right now may fail to optimize something that a later version of that browser will optimize.

Further, browsers are guided in the direction that we program. If everyone used CodeA, which makes no performance sense via pure logic but is really fast (44Kops/s) only in a certain browser, other browsers will lean towards optimizing CodeA, and CodeA may eventually surpass 44Kops/s in all browsers. On the other hand, if CodeA were really slow in all browsers (9Kops/s) but very logical performance-wise, browsers would be able to take advantage of that logic, and CodeA might soon surpass 900Kops/s in all browsers.

Ascertaining the logical performance of code is both very simple and very difficult. One must put oneself in the shoes of the computer and imagine one has an infinite amount of paper, an infinite supply of pencils, an infinite amount of time, and no ability to interpret the purpose/intention of the code. How can you structure your code to fare the best under such hypothetical circumstances? For example, hypothetically, the hash map lookups incurred by foo.bar would be a bit slower than doing foo_bar because foo.bar requires looking at the table named foo and finding the property named bar. You could put your finger on the location of the bar property to cache it, but the overhead of looking through the table to find bar costs time.
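As a purely hypothetical illustration of that mental model, an explicit Map can stand in for the "table named foo"; real engines use inline caches and hidden classes, so this is an analogy rather than a description of engine internals:

// Hypothetical illustration of the extra lookup step behind foo.bar
var foo = new Map([['bar', 42]]); // an explicit "table named foo"
var foo_bar = 42;                 // no table, just a binding

console.time('table lookup per access');
var sumLookup = 0;
for (var i = 0; i < 1e6; i = i + 1|0)
    sumLookup = sumLookup + foo.get('bar')|0; // find 'bar' inside foo every time
console.timeEnd('table lookup per access');

console.time('direct binding');
var sumDirect = 0;
for (var j = 0; j < 1e6; j = j + 1|0)
    sumDirect = sumDirect + foo_bar|0;        // no lookup step at all
console.timeEnd('direct binding');

console.log(sumLookup === sumDirect); // true: same result, different amounts of work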

answered Sep 30 '22 by Jack G