I've noticed through some tests that native JavaScript functions are often much slower than a simple implementation. What is the reason behind that?
After looking at ECMA-262, it seems that the native implementations simply do more in terms of error handling and features than the simple hand-rolled implementations.
For instance, check out the polyfill implementation of map on MDN: Array.prototype.map(). It's based on the same algorithm specified in ECMA-262. Updating your example to use this algorithm makes the native implementation faster, although only slightly: map-native-vs-implemented.
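For reference, here is a condensed sketch of that MDN polyfill (abbreviated, with error messages paraphrased; the name specStyleMap is just for illustration). Notice how much of it is checks rather than the mapping itself:

// Condensed sketch of the ES5-spec steps the MDN polyfill follows.
function specStyleMap(callback, thisArg) {
  if (this == null) throw new TypeError('this is null or not defined');
  if (typeof callback !== 'function') throw new TypeError(callback + ' is not a function');

  var O = Object(this);          // ToObject(this)
  var len = O.length >>> 0;      // ToUint32(length)
  var A = new Array(len);

  for (var k = 0; k < len; k++) {
    if (k in O) {                // skip holes in sparse arrays
      A[k] = callback.call(thisArg, O[k], k, O);
    }
  }
  return A;
}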
Also, map might not be the best example to test, since it's bouncing back and forth between native code and the provided lambda function.
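One rough way to see this (treat it as a sketch rather than a proper benchmark; the numbers vary by engine and JIT warm-up) is to time map with a trivial lambda against a plain loop with the same work inlined:

// Illustrative only: with a trivial callback, much of map's time is the call
// into the lambda on every element, not the mapping logic itself.
var data = [];
for (var i = 0; i < 1000000; i++) data.push(i);

console.time('native map with lambda');
var a = data.map(function (x) { return x * 2; });
console.timeEnd('native map with lambda');

console.time('plain loop, work inlined');
var b = new Array(data.length);
for (var j = 0; j < data.length; j++) {
  b[j] = data[j] * 2;            // no per-element function call
}
console.timeEnd('plain loop, work inlined');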
I would have expected better performance from the native concat function. Nevertheless, looking at ECMA-262 we can see that it, too, just does more. Looking at the algorithm in section 15.4.4.4, we can see that it handles some extra cases, for instance combining multiple arguments where some are arrays and some are other types:
[1, 2, 3].concat([4, 5, 6], "seven", 8, [9, 10]);
returns
[1, 2, 3, 4, 5, 6, "seven", 8, 9, 10]
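A hand-rolled version that assumes every argument is an array (a hypothetical naiveConcat, shown purely for comparison) skips those per-argument checks and gets exactly this mixed-argument case wrong:

// Hypothetical naive version: assumes every extra argument is an array.
function naiveConcat(first) {
  var result = first.slice();
  for (var i = 1; i < arguments.length; i++) {
    var arg = arguments[i];
    for (var j = 0; j < arg.length; j++) {
      result.push(arg[j]);
    }
  }
  return result;
}

naiveConcat([1, 2, 3], [4, 5, 6]);
// [1, 2, 3, 4, 5, 6] -- fine when everything really is an array

naiveConcat([1, 2, 3], "seven", 8);
// [1, 2, 3, "s", "e", "v", "e", "n"] -- the string gets spread into characters
// and 8 is dropped entirely (it has no length), unlike the native concat above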
Finally, it's important to note that these are pretty basic algorithms. When running them on huge data sets, or thousands of times consecutively, one may appear significantly faster than the other. But performing even a couple of extra safety checks on every one of thousands of iterations can make one algorithm significantly slower than one that skips those checks. Count the computational operations: if the extra error handling and features double the work done inside the loop, it's only natural that it will be slower.
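As a rough illustration of that operation counting, compare the spec-style map sketched earlier with a bare version that drops every check (again just a sketch; it silently misbehaves on sparse arrays and bad inputs):

// No ToObject, no length coercion, no callable check, no hole check, no thisArg:
// roughly half the work per iteration, at the cost of all the edge-case handling.
function bareMap(arr, fn) {
  var result = new Array(arr.length);
  for (var i = 0; i < arr.length; i++) {
    result[i] = fn(arr[i]);
  }
  return result;
}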