
How efficient can Haskell state be compared to C++, for very stateful games/simulations?

I just can't seem to find any conclusive information on the subject. There's plenty of Haskell game implementations out there, but the ones I've found are small games and it's unclear whether their approaches scale. Likewise, there's lots of information with regards to having state in a Haskell program (mostly using the State monad), but very little on whether the efficiency of such methods are comparable to state in an imperative language.

I'm working on a simulator which has extremely simple graphics, which makes developing in Haskell very desirable to me. However, I want to simulate as many entities as I possibly can, which means efficiency is very important. I would accept a small performance decrease in order to use Haskell, but I worry that the stateful nature of this simulation will make Haskell code an order of magnitude slower than my other choice, C++.

As the title states, how does Haskell compare for this type of application? Suggestions for approaches to use in Haskell, in addition to links to implemented stateful high-performance Haskell programs, would be greatly appreciated.

If a more specific example of how I need to maintain state is required, I can provide one, but just thinking of a huge collection of coordinates which change extensively on each iteration should be sufficient.

Thanks!

asked Mar 06 '13 by trolox

1 Answer

As it stands, your question can't be answered directly.

TL;DR: there is no theoretical reason why you would have performance issues, and certainly no known reason to expect "an order of magnitude" worse performance. However, without a concrete scenario it is hard to make anything other than broad statements.


Haskell can access bare memory and shuffle bits around, just as C++ can. So there is no theoretical "order of magnitude slower" to be concerned about. Indeed, if you wish to represent state as mutable blobs of memory, stack-allocated frames, or whatever, Haskell (the GHC implementation) supports that -- there are operating systems written in Haskell, after all.

Some idioms and libraries that are useful in this area:

  • unboxed, strict fields
  • the ST monad (safe access to raw memory)
  • the vector and repa libraries -- high performance vectors
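To make the first two items concrete, here is a minimal sketch (the `Point` type, its fields, and `sumX` are invented for illustration) of unboxed, strict fields combined with a strict accumulator:

```haskell
{-# LANGUAGE BangPatterns #-}

-- A hypothetical entity type: the UNPACK pragmas and strictness
-- annotations tell GHC to store the Doubles inline in the constructor
-- rather than behind pointers to boxed values.
data Point = Point
  { px :: {-# UNPACK #-} !Double
  , py :: {-# UNPACK #-} !Double
  }

-- Strict accumulation: the bang pattern keeps the accumulator fully
-- evaluated at every step, so no chain of thunks builds up.
sumX :: [Point] -> Double
sumX = go 0
  where
    go !acc []               = acc
    go !acc (Point x _ : ps) = go (acc + x) ps
```

With this representation, a million `Point`s cost two machine doubles each plus constructor overhead, rather than two heap-boxed values each.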

Most likely you won't need absolute low-level control, unless you're writing a device driver.

As long as your algorithms are sensible, you will be just fine.

The GHC implementation of Haskell has very fast allocation (bump pointer) making immutable data cheap. There's no inherent penalty for using immutable data, and you gain easier parallelization, and code that is easier to maintain. So if you can, stick to Haskell idioms.
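A toy, purely functional simulation step illustrates the idiom (the `World` type and its fields are invented for this sketch): each step returns a brand-new `World`, and the superseded version becomes garbage almost immediately, which is exactly the allocation pattern GHC's bump-pointer allocator handles cheaply.

```haskell
-- An immutable simulation state; strict fields avoid thunk buildup.
data World = World
  { tick     :: !Int
  , entities :: ![(Double, Double)]  -- coordinates, changed every step
  } deriving (Eq, Show)

-- One step of the simulation: build a new World, mutate nothing.
step :: World -> World
step (World t es) = World (t + 1) [ (x + 1, y + 1) | (x, y) <- es ]

-- Run n steps; trivially parallelizable and easy to test, because
-- every intermediate state is an ordinary immutable value.
run :: Int -> World -> World
run n w = iterate step w !! n
</test>
```

Because nothing is mutated, old states can be kept for replay or debugging at no extra design cost, and the step function can be tested in isolation.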

I want to simulate as many entities as I possibly can, which means efficiency is very important.

GHC has a very efficient allocator and garbage collector; objects are cheap and have low overhead -- assuming you use sensible data representations -- so I see no problem here. E.g. on the Computer Language Benchmarks Game, the binary-trees benchmark tests allocator performance -- at the time of writing, C++ and GHC Haskell are tied on this micro-benchmark.

Ultimately, there is no fundamental penalty for using Haskell. You can use imperative algorithms where necessary -- though you probably won't need to. Just don't make the mistake of using naive algorithms (like using immutable lists in place of mutable arrays). This is by far the biggest risk: as a new user, you might reach for inefficient approaches -- ask for help and profile :)
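When a hot inner loop really does need in-place updates, the ST monad mentioned above provides safe, locally scoped mutation. A sketch using an unboxed array from the standard `array` package (the function name `bumpAll` is invented for illustration):

```haskell
import Data.Array.ST      (runSTUArray, getBounds, readArray, writeArray, thaw)
import Data.Array.Unboxed (UArray, listArray, elems)

-- In-place update over an unboxed Double array: thaw to a mutable
-- STUArray, mutate every slot, then freeze back to an immutable result.
-- runSTUArray guarantees the mutation cannot leak outside this function.
bumpAll :: UArray Int Double -> UArray Int Double
bumpAll arr = runSTUArray $ do
  marr <- thaw arr
  (lo, hi) <- getBounds marr
  mapM_ (\i -> readArray marr i >>= writeArray marr i . (+ 1)) [lo .. hi]
  return marr
```

For example, `elems (bumpAll (listArray (0, 2) [10, 20, 30]))` yields `[11.0, 21.0, 31.0]`. From the outside, `bumpAll` is an ordinary pure function, so the rest of the program keeps its immutable idioms.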

As an aside, ten years ago a then-new Haskell user wrote Frag, a 3D first-person shooter, which had perfectly acceptable performance. GHC is a lot smarter now, too...

answered Nov 13 '22 by Don Stewart