
Haskell benchmarking/Optimization of nf/whnf of non-strict reduction

I am working on a library designed to take a large data set and apply a series of operations to it. Now that the library is working, I want to optimize it.

I am under the impression that non-strict evaluation allows GHC to combine (fuse) operations, so that the data is only traversed once, provided the functions are written with their arguments ordered to facilitate WHNF reduction (and potentially to reduce the number of operations performed on each datum).

To test this, I wrote the following code:

import Criterion.Main

main = defaultMain
       [ bench "warmup (whnf)" $ whnf putStrLn "HelloWorld",
         bench "single (whnf)" $ whnf single [1..10000000],
         bench "single (nf)"   $ nf   single [1..10000000],
         bench "double (whnf)" $ whnf double [1..10000000],
         bench "double (nf)"   $ nf   double [1..10000000]]

single :: [Int] -> [Int]
single lst = fmap (* 2) lst

double :: [Int] -> [Int]
double lst = fmap (* 3) $ fmap (* 2) lst

Benchmarking with the Criterion library, I get the following results:

benchmarking warmup (whnf)
mean: 13.72408 ns, lb 13.63687 ns, ub 13.81438 ns, ci 0.950
std dev: 455.7039 ps, lb 409.6489 ps, ub 510.8538 ps, ci 0.950

benchmarking single (whnf)
mean: 15.88809 ns, lb 15.79157 ns, ub 15.99774 ns, ci 0.950
std dev: 527.8374 ps, lb 458.6027 ps, ub 644.3497 ps, ci 0.950

benchmarking single (nf)
collecting 100 samples, 1 iterations each, in estimated 107.0255 s
mean: 195.4457 ms, lb 195.0313 ms, ub 195.9297 ms, ci 0.950
std dev: 2.299726 ms, lb 2.006414 ms, ub 2.681129 ms, ci 0.950

benchmarking double (whnf)
mean: 15.24267 ns, lb 15.17950 ns, ub 15.33299 ns, ci 0.950
std dev: 384.3045 ps, lb 288.1722 ps, ub 507.9676 ps, ci 0.950

benchmarking double (nf)
collecting 100 samples, 1 iterations each, in estimated 20.56069 s
mean: 205.3217 ms, lb 204.9625 ms, ub 205.8897 ms, ci 0.950
std dev: 2.256761 ms, lb 1.590083 ms, ub 3.324734 ms, ci 0.950

Does GHC optimize the "double" function so that the list is only traversed once, applying (* 6) to each element? The nf results suggest that this is the case: otherwise the mean computation time for "double" would be roughly twice that of "single".

What is the difference that makes the whnf versions run so fast? I can only assume that nothing is actually being evaluated (or only the first step of the reduction is performed).

Am I even using the correct terminology?

asked Oct 10 '22 by Toymakerii


1 Answer

Looking at the Core (GHC's intermediate language) generated with the -ddump-simpl option, we can confirm that GHC does indeed fuse the two applications of map into one when compiling with -O2.
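The dump below can be reproduced with an invocation along these lines (assuming the module lives in Main.hs; the exact identifiers in the dump vary between GHC versions):

ghc -O2 -ddump-simpl Main.hs

The relevant parts of the dump are: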

Main.main10 :: GHC.Types.Int -> GHC.Types.Int
GblId
[Arity 1
 NoCafRefs]
Main.main10 =
  \ (x_a1Ru :: GHC.Types.Int) ->
    case x_a1Ru of _ { GHC.Types.I# x1_a1vc ->
    GHC.Types.I# (GHC.Prim.*# (GHC.Prim.*# x1_a1vc 2) 3)
    }

Main.double :: [GHC.Types.Int] -> [GHC.Types.Int]
GblId
[Arity 1
 NoCafRefs
 Str: DmdType S]
Main.double =
  \ (lst_a1gF :: [GHC.Types.Int]) ->
    GHC.Base.map @ GHC.Types.Int @ GHC.Types.Int Main.main10 lst_a1gF

Note how there is only one use of GHC.Base.map in Main.double, referring to the combined function Main.main10, which multiplies by 2 and then by 3 in a single pass. This is likely the result of GHC first inlining the Functor instance for lists so that fmap becomes map, then applying a rewrite rule that lets the two applications of map be fused, plus some more inlining and other simplification.
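For illustration, the fusion step can be pictured as a rewrite rule of roughly this shape (a simplified sketch: in real GHC the same effect is reached indirectly via foldr/build fusion and the mapFB helper in GHC.Base, but the net result is the same):

{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}

Once the two maps are merged, the composed function is inlined and simplified into the single worker seen as Main.main10 above.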

WHNF means that the expression is evaluated only to the outermost data constructor or lambda. In this case, that means the first (:) constructor of the result list. That is why the whnf benchmarks are so fast: almost no work is being done. See my answer to What is Weak Head Normal Form? for more details.
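To make the difference concrete, here is a small self-contained demo (assuming the deepseq package is available; seq forces WHNF, deepseq forces full normal form):

import Control.DeepSeq (deepseq)

main :: IO ()
main = do
  let xs = map (* 2) [1 .. 3 :: Int]
  -- seq forces xs only to WHNF: evaluation stops at the first (:)
  -- constructor, so (1 * 2) and the tail remain unevaluated thunks.
  xs `seq` putStrLn "forced to WHNF"
  -- deepseq forces xs all the way to normal form: every element is
  -- actually computed, which is what the nf benchmarks measure.
  xs `deepseq` putStrLn "forced to NF"

This mirrors what Criterion does: whnf stops at that first constructor, while nf walks the entire result list, which is why the nf timings are the ones that reflect the real cost of the traversal.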

answered Oct 18 '22 by hammar