I always believed that GHCJS, for obvious reasons, generated very slow JavaScript programs compared to manually written and optimized code. When experimenting with it, though, I noticed it was not as bad as I expected. I decided to run a series of small benchmarks to get a grasp of the true performance, and this one in particular surprised me. The program simply fills an array with 1's and adds them up.
Haskell:
import Data.Array.Repa
len = 1024*1024*64
arr = fromFunction (Z :. len) (const 1) :: Array D DIM1 Float
main = sumAllP arr >>= print
JavaScript:
var len = 1024 * 1024 * 64;
var arr = [];
var sum = 0;
for (var i = 0; i < len; ++i)
    arr[i] = 1;
for (var i = 0; i < len; ++i)
    sum += arr[i];
console.log(sum);
And a crude benchmark:
apple1$ ghcjs -O2 bench_fill.hs -funfolding-use-threshold10000 -funfolding-keeness-factor1000 -o bench_fill.js; time node bench_fill.js/all.js
Linking bench_fill.js (Main)
6.7108864e7
real 0m1.543s
user 0m1.512s
sys 0m0.033s
apple1$ time node benchfill.js
67108864
real 0m1.764s
user 0m1.173s
sys 0m0.583s
How can the GHCJS program run faster than a slim, clean native for loop? That shouldn't be possible, considering the amount of boxing the generated code should be subject to.
Array D DIM1 Float is a delayed array: it is represented as just the function const 1 plus the bounds of the array. There is no array of 64 million Floats stored anywhere.
The JavaScript program, by contrast, actually creates an array of 64 million doubles, which uses 512 MB of memory. Reading and writing such a large array is not free, and neither is allocating it; note the substantial system time.
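The 512 MB figure follows from each JavaScript number being a 64-bit IEEE double, i.e. 8 bytes; a quick back-of-the-envelope check:

```javascript
var len = 1024 * 1024 * 64;          // 64 M elements
var bytes = len * 8;                 // 8 bytes per double
console.log(bytes / (1024 * 1024) + " MB");  // 512 MB
```

(The actual engine may store small integers more compactly or add object overhead, so this is a lower-bound estimate of the dense storage, not an exact heap measurement.)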