I've found what appears to be an interesting anomaly in JavaScript, centring upon my attempts to speed up trigonometric transformation calculations by precomputing sin(x) and cos(x) and simply referencing the precomputed values.
Intuitively, one would expect pre-computation to be faster than evaluating the Math.sin() and Math.cos() functions each time, especially if your application design uses only a restricted set of values for the argument of the trig functions (in my case, integer degrees in the interval [0°, 360°), which is sufficient for my purposes here).
So, I ran a little test. After pre-computing the values of sin(x) and cos(x), storing them in 360-element arrays, I wrote a short test function, activated by a button in a simple test HTML page, to compare the speed of the two approaches. One loop simply multiplies a value by the pre-computed array element value, whilst the other loop multiplies a value by Math.sin().
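The table-building step isn't shown in the test function, but it amounts to something like this (a minimal sketch on my part; the array names sinArray and cosArray match those used in the test function below):
var sinArray = new Array(360);
var cosArray = new Array(360);
for (var deg = 0; deg < 360; deg++)
{
    var rad = (Math.PI * deg) / 180;   // integer degrees to radians
    sinArray[deg] = Math.sin(rad);
    cosArray[deg] = Math.cos(rad);
}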
My expectation was that the pre-computed loop would be noticeably faster than the loop involving a function call to a trig function. To my surprise, the pre-computed loop was slower.
Here's the test function I wrote:
function MyTest()
{
    var ITERATION_COUNT = 1000000;
    var angle = Math.floor(Math.random() * 360);
    var test1 = 200 * sinArray[angle];
    var test2 = 200 * cosArray[angle];
    var ref = document.getElementById("Output1");
    var outData = "Test 1 : " + test1.toString() + "<br><br>";
    outData += "Test 2 : " + test2.toString() + "<br><br>";

    var time1 = new Date();   // Time at the start of the test
    for (var i = 0; i < ITERATION_COUNT; i++)
    {
        angle = Math.floor(Math.random() * 360);
        var test3 = 200 * sinArray[angle];
    }   // End i loop
    var time2 = new Date();

    // This somewhat unwieldy procedure is how we find out the elapsed time ...
    var msec1 = (time1.getUTCSeconds() * 1000) + time1.getUTCMilliseconds();
    var msec2 = (time2.getUTCSeconds() * 1000) + time2.getUTCMilliseconds();
    var elapsed1 = msec2 - msec1;
    outData += "Test 3 : Elapsed time is " + elapsed1.toString() + " milliseconds<br><br>";

    // Now the comparison test with the same number of sin() calls ...
    time1 = new Date();
    for (var i = 0; i < ITERATION_COUNT; i++)
    {
        angle = Math.floor(Math.random() * 360);
        var test3 = 200 * Math.sin((Math.PI * angle) / 180);
    }   // End i loop
    time2 = new Date();
    msec1 = (time1.getUTCSeconds() * 1000) + time1.getUTCMilliseconds();
    msec2 = (time2.getUTCSeconds() * 1000) + time2.getUTCMilliseconds();
    var elapsed2 = msec2 - msec1;
    outData += "Test 4 : Elapsed time is " + elapsed2.toString() + " milliseconds<br><br>";

    ref.innerHTML = outData;
}   // End function
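As an aside, the elapsed-time bookkeeping above could be simplified, and the wrap-around that would occur if a test straddled a minute boundary avoided, by using performance.now(), which returns fractional milliseconds from a monotonic clock. A sketch, using the same loop body:
var t0 = performance.now();
for (var i = 0; i < ITERATION_COUNT; i++)
{
    var angle = Math.floor(Math.random() * 360);
    var test3 = 200 * sinArray[angle];
}
var t1 = performance.now();
var elapsed = t1 - t0;   // elapsed time in (fractional) milliseconds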
My reasoning was that multiplying by a pre-computed value fetched from an array would be faster than invoking a trig function, but the results I obtain are interestingly anomalous.
Some sample runs yield the following results (Test 3 is the pre-computed elapsed time, Test 4 the Math.sin() elapsed time):
Run 1:
Test 3 : Elapsed time is 153 milliseconds
Test 4 : Elapsed time is 67 milliseconds
Run 2:
Test 3 : Elapsed time is 167 milliseconds
Test 4 : Elapsed time is 69 milliseconds
Run 3:
Test 3 : Elapsed time is 265 milliseconds
Test 4 : Elapsed time is 107 milliseconds
Run 4:
Test 3 : Elapsed time is 162 milliseconds
Test 4 : Elapsed time is 69 milliseconds
Why is invoking a trig function twice as fast as fetching a precomputed value from an array? Intuitively, the precomputed approach should be the faster of the two by an appreciable margin, all the more so because I'm using integer arguments to index the array in the precomputed loop, whilst the function-call loop also includes an extra calculation to convert from degrees to radians.
There's something interesting happening here, but at the moment I'm not sure what. Usually, array accesses to precomputed data are a lot faster than calls to intricate trig functions (or at least they were back in the days when I wrote similar code in assembler!), but JavaScript seems to turn this on its head. The only explanation I can think of is that JavaScript adds a lot of overhead to array accesses behind the scenes, but if that were so, it would impact a lot of other code that appears to run at perfectly reasonable speed.
So, what exactly is going on here?
I'm running this code in Google Chrome:
Version 60.0.3112.101 (Official Build) (64-bit)
running on Windows 7 64-bit. I haven't yet tried it in Firefox to see if the same anomalous results appear there, but that's next on the to-do list.
Anyone with a deep understanding of the inner workings of JavaScript engines, please help!
Two identical test functions, well almost.
Run them in a benchmark and the results are surprising.
{
    func : function () {
        var i, a, b;
        var D2R = Math.PI / 180;   // degrees-to-radians factor
        b = 0;
        for (i = 0; i < count; i++) {
            // single test start
            a = (Math.random() * 360) | 0;
            b += Math.sin(a * D2R);
            // single test end
        }
    },
    name : "summed",
}, {
    func : function () {
        var i, a, b;
        var D2R = Math.PI / 180;
        b = 0;
        for (i = 0; i < count; i++) {
            // single test start
            a = (Math.random() * 360) | 0;
            b = Math.sin(a * D2R);
            // single test end
        }
    },
    name : "unsummed",
},
The results
=======================================
Performance test. : Optimiser check.
Use strict....... : false
Duplicates....... : 4
Samples per cycle : 100
Tests per Sample. : 10000
---------------------------------------------
Test : 'summed'
Calibrated Mean : 173µs ±1µs (*1) 11160 samples 57,803,468 TPS
---------------------------------------------
Test : 'unsummed'
Calibrated Mean : 0µs ±1µs (*1) 11063 samples Invalid TPS
----------------------------------------
Calibration zero : 140µs ±0µs (*)
(*) Error rate approximation does not represent the variance.
(*1) For calibrated results Error rate is Test Error + Calibration Error.
TPS is Tests per second as a calculated value not actual test per second.
The benchmarker barely picked up any time for the unsummed test (I had to force it to complete).
The optimiser knows that only the last result of the loop in the unsummed test is ever used, so it only performs the work for the final iteration; all the other results are discarded, so why compute them?
Benchmarking in JavaScript is full of catches. Use a quality benchmarker, and know what the optimiser can do.
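One way to stop the optimiser from discarding the work, as the summed variant does, is to accumulate inside the loop and then make the accumulator observable outside the function. A minimal sketch (assuming count is supplied by the harness):
function summedTest() {
    var i, a, b = 0;
    var D2R = Math.PI / 180;
    for (i = 0; i < count; i++) {
        a = (Math.random() * 360) | 0;
        b += Math.sin(a * D2R);   // every iteration contributes to the total
    }
    return b;   // returning the total keeps the whole loop's work live
}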
Testing array lookup against sin. To be fair to sin, I do not do a degrees-to-radians conversion.
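The harness variables count and array are not shown in the snippet below; a minimal setup might look like this (the count value is an assumption based on the "Tests per Sample" figure in the results):
var count = 10000;            // matches "Tests per Sample." in the results
var array = new Array(360);   // lookup table indexed by the integer argument
for (var i = 0; i < 360; i++) {
    array[i] = Math.sin(i);   // contents don't affect lookup timing; no deg-to-rad, mirroring the Sin test
}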
tests : [{
    func : function () {
        var i, a, b;
        b = 0;
        for (i = 0; i < count; i++) {
            a = (Math.random() * 360) | 0;
            b += a;
        }
    },
    name : "Calibration",
}, {
    func : function () {
        var i, a, b;
        b = 0;
        for (i = 0; i < count; i++) {
            a = (Math.random() * 360) | 0;
            b += array[a];
        }
    },
    name : "lookup",
}, {
    func : function () {
        var i, a, b;
        b = 0;
        for (i = 0; i < count; i++) {
            a = (Math.random() * 360) | 0;
            b += Math.sin(a);
        }
    },
    name : "Sin",
}],
And the results
=======================================
Performance test. : Lookup compare to calculate sin.
Use strict....... : false
Data view........ : false
Duplicates....... : 4
Cycles........... : 1055
Samples per cycle : 100
Tests per Sample. : 10000
---------------------------------------------
Test : 'Calibration'
Calibrator Mean : 107µs ±1µs (*) 34921 samples
---------------------------------------------
Test : 'lookup'
Calibrated Mean : 6µs ±1µs (*1) 35342 samples 1,666,666,667 TPS
---------------------------------------------
Test : 'Sin'
Calibrated Mean : 169µs ±1µs (*1) 35237 samples 59,171,598 TPS
-All ----------------------------------------
Mean : 0.166ms Totals time : 17481.165ms 105500 samples
Calibration zero : 107µs ±1µs (*)
(*) Error rate approximation does not represent the variance.
(*1) For calibrated results Error rate is Test Error + Calibration Error.
TPS is Tests per second as a calculated value not actual test per second.
Again I had to force completions, as the lookup was too close to the error rate. But the calibrated lookup is almost a perfect match to the clock speed: 1,666,666,667 tests per second works out to about 0.6ns per lookup, roughly one cycle on a core clocked near 1.7GHz. Coincidence? I am not sure.