Is it suitable to parallelize loops that contain function calls, or is it only worthwhile to parallelize loops that do basic operations directly in their body?
For example, is it reasonable to put parallelization directives as below?
int main(){
    ...
    #pragma omp parallel for ...
    for (i = 0; i < 100; i++){
        a[i] = foo(&datatype, ...);
        ...
    }
    ...
}

int foo(datatype *a, ...){
    // doing complex operations here
    // calling other functions etc.
}
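In principle, calling a function from a parallelized loop is fine as long as each iteration is independent and the called function does not touch shared state (globals, static locals, shared buffers) without protection. Below is a minimal, self-contained sketch of that situation; the names compute_item and N are made up for illustration and stand in for foo and the real data:

#include <stdio.h>
#include <omp.h>

#define N 100

/* Pure function: the result depends only on its argument, no globals.
   Stands in for the "complex operations" done inside foo. */
static double compute_item(double x)
{
    return x * x + 1.0;
}

int main(void)
{
    double a[N];

    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = compute_item((double)i);   /* iterations are independent */
    }

    printf("a[0]=%f a[N-1]=%f\n", a[0], a[N - 1]);
    return 0;
}

Compile with OpenMP enabled (e.g. gcc -fopenmp). The function call itself is not the problem; hidden shared state inside the function is.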
Thank you Will Richard and Phkahler, those comments were helpful and I will have a closer look at the book rchrd suggested. But before the end of the day I have been asked to parallelize an existing C code (essentially one big loop at the top of the program) with OpenMP, if possible.
At this point I need some help parallelizing at least some parts of that loop. To keep things simple: instead of parallelizing the whole loop body, how can I make only a part of it run in parallel?
for (i to N) {
    work1()  -- (serial)
    work2()  -- (serial)
    work3()  -- (PARALLEL)
    work4()  -- (serial)
}
// Does it make sense to wrap everything except work3 in single/critical sections, like this?
#pragma omp parallel for private(Ptr)
for (i to N) {
    #pragma omp single
    {
        work1()  -- (serial)
        work2()  -- (serial)
    }
    work3(Ptr)  -- (PARALLEL)
    #pragma omp single
    {
        work4()  -- (serial)
    }
}
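For comparison, here is one possible pattern (a sketch, not the only way): keep the outer loop serial and open a parallel region only around the step that is actually parallel. This also sidesteps the restriction that a worksharing construct such as omp single may not be nested inside the loop body of an omp parallel for. The names work1..work4 below are placeholders for the routines in the question, and work3 is assumed to consist of M independent pieces:

#include <stdio.h>
#include <omp.h>

#define N 10
#define M 1000

/* Trivial stand-ins for the routines in the question. */
static double results[M];
static void work1(void) { /* serial setup */ }
static void work2(void) { /* more serial setup */ }
static void work3_item(int j) { results[j] = j * 0.5; }  /* one independent piece of work3 */
static void work4(void) { printf("pass done, results[0]=%f\n", results[0]); }

int main(void)
{
    for (int i = 0; i < N; i++) {
        work1();                     /* serial */
        work2();                     /* serial */

        #pragma omp parallel for    /* only this step runs in parallel */
        for (int j = 0; j < M; j++)
            work3_item(j);
        /* implicit barrier: all of work3 finishes before work4 runs */

        work4();                     /* serial */
    }
    return 0;
}

The implicit barrier at the end of the inner parallel for guarantees that all of work3 has completed before work4 starts. The cost is that a parallel region is entered once per outer iteration, which only pays off if work3 does enough work per pass.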
A few things need to be known:
If you have a task that takes a long time - several seconds or more - and it can be broken into independent parts (sometimes by refactoring, e.g. by dividing the work into jobs and gathering the results of each job before combining them), then it can be worth trying to parallelize it.
Profile!
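As an illustration of the "divide into independent jobs, then combine the results" idea, here is a small sketch using OpenMP's reduction clause; the array, its size and its contents are invented for the example:

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double data[N];
    for (int i = 0; i < N; i++)
        data[i] = 1.0;               /* dummy values */

    double sum = 0.0;

    /* Each thread sums its own chunk privately; OpenMP combines
       the partial sums when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += data[i];

    printf("sum = %f\n", sum);
    return 0;
}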