I am new to Clojure. While experimenting with it, I wrote a function to calculate n!. My Clojure code is as follows:
```clojure
(defn factorial
  [n]
  (reduce * (biginteger 1) (range 1 (inc n))))
```
I then ran the following in a REPL:

```clojure
(time (factorial 100))
```
And this was the result:

```
"Elapsed time: 0.50832 msecs"
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000N
```
I then created a similar solution in Ruby:
```ruby
def factorial(n)
  start = Time.now.to_f
  result = (2..n).inject(1) { |p, f| p * f }
  finish = Time.now.to_f
  time_taken = finish - start
  puts "It took: #{(time_taken * 1000)} msecs"
  result
end
```
Then, in irb, I ran `factorial(100)`, resulting in:

```
It took: 0.06556510925292969 msecs
=> 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
```
The Ruby version appears to be significantly faster, despite most evidence I've seen suggesting that Clojure should have superior performance. Is there something I am misunderstanding, or is there some element of my Clojure solution that slows it down?
`BigInteger` comes from Java, while `BigInt` is implemented in Clojure's core. Right off the bat, using `BigInteger` comes with some costs related to interoperability.
Additionally, a `BigInt` is backed by either a `long` or a `BigInteger`. Whenever the value fits, the `long` is used; if any operation makes it overflow, the resulting `BigInt` switches to its `BigInteger` representation. Java's `long` maps directly to a native machine word, so arithmetic on it is significantly faster. This is similar to Ruby's automatic conversion between `Fixnum` and `Bignum`.
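You can observe this promotion behaviour at the REPL. The snippet below is just a sketch of that check; `*'` is Clojure's auto-promoting multiplication, which stays in `long` arithmetic until a result overflows:

```clojure
(class 1)                      ;; => java.lang.Long (a primitive-backed box)
(class (*' Long/MAX_VALUE 2))  ;; => clojure.lang.BigInt (overflowed a long)
(class (biginteger 1))         ;; => java.math.BigInteger (always the big representation)
```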
Since the computation deals almost exclusively with small numbers (the factors 1 to 100, plus a good chunk of the intermediate products, all fit in a `long`), you can get a significant performance boost by letting Clojure stay in `long` arithmetic as long as possible instead of seeding the reduction with a `BigInteger`.
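One way to sketch this is to drop the `(biginteger 1)` seed and reduce with `*'`, so the running product is a plain `long` until it overflows (around 21! on 64-bit longs) and only then promotes to `BigInt`. Exact timings will vary with JVM warm-up:

```clojure
(defn factorial
  [n]
  ;; *' multiplies longs until the product overflows,
  ;; then transparently promotes to clojure.lang.BigInt.
  (reduce *' (range 2 (inc n))))

(factorial 5)  ;; => 120
```

With an empty range (n of 0 or 1), `reduce` calls `(*')`, which returns 1, so the edge cases still work.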