I'm trying to understand some quirks of the Parallelize[] behavior.
If I do:
CloseKernels[];
LaunchKernels[1]
f[n_, g_] :=
  First@AbsoluteTiming[
    g[Product[Mod[i, 2], {i, 1, n/2}]
      Product[Mod[i, 2], {i, n/2 + 1, n}]]];
Clear[a, b];
a = Table[f[i, Identity], {i, 100000, 1500000, 100000}];
LaunchKernels[1]
b = Table[f[i, Parallelize], {i, 100000, 1500000, 100000}];
ListLinePlot[{a, b}, PlotStyle -> {Red, Blue}]
The result is the expected one (timing plot not shown).
CPU utilization (screenshot not shown).
But if I do the same, changing the function to evaluate:
CloseKernels[];
LaunchKernels[1]
f[n_, g_] :=
  First@AbsoluteTiming[
    g[Product[Sin@i, {i, 1, n/2}]
      Product[Sin@i, {i, n/2 + 1, n}]]];
Clear[a, b];
a = Table[f[i, Identity], {i, 1000, 15000, 1000}];
LaunchKernels[1]
b = Table[f[i, Parallelize], {i, 1000, 15000, 1000}];
ListLinePlot[{a, b}, PlotStyle -> {Red, Blue}]
The result is (timing plot not shown).
CPU utilization (screenshot not shown).
I think I am missing some important knowledge about Parallelize[] to understand this.
Any hints?
My guess is that the problem is not in Parallelize, but in what you are trying to compute. For Mod, each factor is either 0 or 1, so the product is as well (in fact it is 0 as soon as the range contains an even i), and every intermediate result stays tiny. For Sin, since you use exact integer arguments, Sin[i] does not evaluate to a number, so you accumulate huge symbolic expressions (products of Sin[i]). They are discarded after having been computed, but they still need heap space (memory allocation/deallocation). The quadratic behavior you observe is likely the linear cost of allocating these ever-larger expressions, multiplied by the linear number of iterations. This seems to be the dominant effect, and it shadows the real cost of Parallelize. If you force numeric evaluation, e.g. Sin@N[i], the results are quite different.
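The difference between the two computations can be seen directly in a notebook. A quick sketch (exact timing values are machine-dependent):

```mathematica
(* Each Mod factor is 0 or 1; the product collapses to a small integer,
   0 as soon as the range contains an even i *)
Product[Mod[i, 2], {i, 1, 10}]      (* -> 0 *)

(* With exact integer arguments, Sin[i] stays symbolic, so the product
   is a growing symbolic expression with one factor per iteration *)
Product[Sin[i], {i, 1, 5}]          (* -> Sin[1] Sin[2] Sin[3] Sin[4] Sin[5] *)

(* Forcing numeric evaluation keeps every intermediate a machine real *)
Product[Sin@N[i], {i, 1, 5}]        (* a single machine number *)

(* Comparing the two for a larger n exposes the allocation cost *)
AbsoluteTiming[Product[Sin[i], {i, 1, 10000}];]
AbsoluteTiming[Product[Sin@N[i], {i, 1, 10000}];]
```

With the numeric variant the per-iteration work is constant, so the Identity and Parallelize curves should again differ only by the parallelization overhead.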