As we know, Julia supports parallelism, and this support is rooted in the language itself, which is very good.
I recently saw that Julia supports threads, though the feature still seems experimental. I noticed that with the Threads.@threads macro there is no need for SharedArrays, which is perhaps a computational advantage since no copies of the objects are made. I also saw the advantage of not having to declare every function with @everywhere.
Can anyone tell me the advantage of using the @parallel macro instead of the @threads macro?
Below are two simple examples of using non-synchronized macros for parallelism.
Using the @threads macro
# Note: addprocs creates worker processes and does not affect threading;
# the thread count is set at startup via the JULIA_NUM_THREADS variable.
function f1(b)
    b + 1
end
function f2(c)
    f1(c)
end
result = Vector{Float64}(10)  # give the result array a concrete element type
@time Threads.@threads for i = 1:10
    result[i] = f2(i)
end
0.015273 seconds (6.42 k allocations: 340.874 KiB)
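One caveat worth noting (my addition, not from the original post): addprocs creates worker processes and has no effect on threading; the number of threads is fixed when Julia starts, through the JULIA_NUM_THREADS environment variable. A minimal sketch to check what @threads has available:

```julia
# Sketch, assuming a recent Julia. Launch with, e.g.:
#   JULIA_NUM_THREADS=4 julia script.jl
println(Threads.nthreads())   # how many threads @threads can use
println(Threads.threadid())   # id of the current thread (1 on the main thread)
```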
Using the @parallel macro
addprocs(Sys.CPU_CORES)
@everywhere function f1(b)
    b + 1
end
@everywhere function f2(c)
    f1(c)
end
result = SharedArray{Float64}(10)
@time @parallel for i = 1:10
    result[i] = f2(i)
end
0.060588 seconds (68.66 k allocations: 3.625 MiB)
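A note on the timing above (my observation, not part of the original question): @parallel returns almost immediately unless you wait for the workers, so @time may be measuring little more than the cost of launching the jobs. Prefixing the loop with @sync blocks until all iterations finish. A sketch of the synchronized version, assuming Julia 1.0+, where @parallel was renamed @distributed and moved into the Distributed standard library, and Sys.CPU_CORES became Sys.CPU_THREADS:

```julia
using Distributed, SharedArrays   # Julia 1.0+: @parallel is now @distributed
addprocs(Sys.CPU_THREADS)         # spawn one worker process per hardware thread
@everywhere f1(b) = b + 1
@everywhere f2(c) = f1(c)
result = SharedArray{Float64}(10)
@time @sync @distributed for i = 1:10
    result[i] = f2(i)             # workers write into shared memory
end
```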
It seems to me that for Monte Carlo simulations, where the loop iterations are mathematically independent and a lot of computational performance is needed, the @threads macro is more convenient. What do you think are the advantages and disadvantages of using each of these macros?
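To illustrate the Monte Carlo case, here is a minimal sketch of mine (not from the original post) of a threaded π estimate, assuming Julia 1.7+, where the default random number generator is task-local and therefore safe to call from @threads loops. An atomic counter avoids a data race on the shared tally:

```julia
function mc_pi(n)
    hits = Threads.Atomic{Int}(0)        # shared counter, updated atomically
    Threads.@threads for _ = 1:n
        x, y = rand(), rand()            # task-local RNG: thread-safe in 1.7+
        if x^2 + y^2 <= 1
            Threads.atomic_add!(hits, 1) # race-free increment
        end
    end
    4 * hits[] / n
end
println(mc_pi(1_000_000))
```

Atomic increments per iteration are slow; for real workloads one would accumulate per-chunk sums instead, but the sketch keeps the independence of the iterations explicit.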
Best regards.
Here is my experience:
Processes are much easier to work with and they scale better; in most situations they give you enough performance. If you have large data transfers between parallel jobs, threads will be better, but they are much more delicate to use and tune correctly.
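As a concrete illustration of the process-based approach recommended above (my sketch, not part of the answer), pmap distributes independent function calls across worker processes, which is often all a Monte Carlo workload needs:

```julia
using Distributed
addprocs(4)                    # spawn 4 worker processes
@everywhere heavy(x) = x^2     # every worker needs the function definition
results = pmap(heavy, 1:10)    # each call runs on some worker
println(results)
```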