I'm really impressed with Julia: it ran faster than D on a processor-intensive Project Euler problem (#303, if anyone is interested).
What's weird is how slow BigInts in Julia seem to be. Strange, because I've read that their performance is quite good.
The following Julia program calculates the number of partitions of 15000 using Euler's recurrence formula.
```julia
function eu78()
    lim = 15000
    ps = zeros(BigInt, lim)
    function p(n) # applies Euler's recurrence formula
        if n < 0
            return BigInt(0)
        elseif n == 0
            return BigInt(1)
        elseif ps[n] > 0
            return ps[n]
        end
        s = BigInt(0)
        f = BigInt(-1)
        for k = 1:n
            f *= -1
            t1 = (k * (3k - 1)) ÷ BigInt(2)
            t2 = (k * (3k + 1)) ÷ 2
            s += f * (p(n - t1) + p(n - t2))
        end
        ps[n] = s
    end
    for i = 1:lim
        p(i)
    end
    println(ps[lim])
end

eu78()
```
It runs in a whopping 3 min 43 sec to generate the 132-digit answer.
The equivalent Python code, run with PyPy, takes a mere 8 seconds.
What am I doing wrong?
BigInts are currently pretty slow in Julia because each operation allocates a new BigInt. Python, by contrast, represents all integers as BigInts, so its developers have spent a fair amount of time making sure basic operations are fast; Python actually uses a hybrid approach where small integer values are represented inline, and only when values get too large are they represented as full BigInts.

The same could absolutely be done in Julia, but no one has implemented it yet, partly because standard Int values are machine integers, so BigInt performance is not critical. The real boost to BigInt performance will come from integrating Julia's BigInt type with the GC and allowing small BigInt values to be stack-allocated, and even to live in registers. We're not quite there yet, however.
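To see where the time goes, here is a minimal micro-benchmark sketch (the function names are my own, not from the original post): the Int loop never allocates, while the equivalent BigInt loop heap-allocates a fresh GMP integer on every `+=`.

```julia
# Identical loops; only the accumulator type differs.
function sum_int(n)
    s = 0
    for i in 1:n
        s += i        # machine-integer add: no allocation
    end
    s
end

function sum_big(n)
    s = BigInt(0)
    for i in 1:n
        s += i        # allocates a brand-new BigInt each iteration
    end
    s
end

sum_int(1); sum_big(1)   # warm up the compiler before timing
@time sum_int(10^6)
@time sum_big(10^6)      # watch the allocation count reported by @time
```

The `@time` output for the BigInt version should show on the order of one allocation per loop iteration, which is exactly the overhead the recurrence above pays on every `s += ...`.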
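Given those constraints, one practical mitigation is to keep the pentagonal-number index arithmetic in machine Ints and fill the table bottom-up, so BigInts are used only for the partition values themselves. This is a sketch under my own naming (`partitions`, a 1-based `ps` table with `ps[n+1]` holding p(n)), not the original poster's code:

```julia
# Bottom-up Euler recurrence; only the accumulating sums are BigInts.
function partitions(lim::Int)
    ps = Vector{BigInt}(undef, lim + 1)
    ps[1] = BigInt(1)                 # p(0) = 1
    for n in 1:lim
        s = BigInt(0)
        k = 1
        while true
            t1 = (k * (3k - 1)) ÷ 2   # generalized pentagonal numbers,
            t2 = (k * (3k + 1)) ÷ 2   # computed entirely in Int
            t1 > n && break
            sign = isodd(k) ? 1 : -1
            s += sign * ps[n - t1 + 1]
            if t2 <= n
                s += sign * ps[n - t2 + 1]
            end
            k += 1
        end
        ps[n + 1] = s
    end
    ps[lim + 1]
end
```

Besides avoiding BigInt temporaries in the index math, the iterative loop also sidesteps the deep recursion of the memoized version, and the `t1 > n` break stops each inner loop after O(√n) terms instead of n.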