I know about Amdahl's law and the maximum speedup of a parallel program. But I couldn't find a clear explanation of Gustafson's law. What is Gustafson's law, and what is the difference between Amdahl's and Gustafson's laws?
In computer programming, Amdahl's law states that, in a program with parallel processing, the relatively few instructions that have to be performed in sequence impose a limit on program speedup, such that adding more processors may not make the program run faster.
Amdahl's law can fail to predict the speedup for problems like parallel search, because it assumes that adding processors won't reduce the total amount of work that needs to be done. That assumption is reasonable in most cases, but not for search, where one of many workers may find the target after examining only a small part of the space.
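Here is a minimal Python sketch of that effect (the setup is my own illustration, assuming an idealized search where the space is split evenly among workers and cost is counted in elements examined):

```python
# Minimal sketch (setup assumed for illustration): the search space is
# split evenly among workers and cost is counted in elements examined.
# The worker whose chunk holds the target stops after scanning only its
# local offset, so total work drops and speedup can exceed the worker count.

def sequential_steps(target: int) -> int:
    """Elements a single worker scans before hitting the target."""
    return target + 1

def parallel_steps(target: int, n_workers: int, space: int) -> int:
    """Elements scanned by the worker whose chunk contains the target."""
    chunk = space // n_workers
    return target % chunk + 1

space, target, n = 1_000_000, 900_123, 10
seq = sequential_steps(target)
par = parallel_steps(target, n, space)
print(f"sequential: {seq} elements, parallel: {par} elements, "
      f"speedup: {seq / par:.0f}x with {n} workers")
```

Because the finishing worker only scans its local offset, the measured speedup can greatly exceed the worker count, which no fixed-work model can predict.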
A well-known limitation of Amdahl's law is that it only applies in situations where the problem size is constant and the number of processors varies (strong scaling).
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved.
Amdahl's law
Suppose you have a sequential code and that a fraction $f$ of its computation is parallelized and run on $N$ processing units working in parallel, while the remaining fraction $1-f$ cannot be improved, i.e., it cannot be parallelized. Amdahl's law states that the speedup achieved by parallelization is

$$S = \frac{1}{(1 - f) + \dfrac{f}{N}}$$
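As a quick numeric check, here is a minimal Python sketch (my own illustration, not from any of the sources above) that evaluates the formula for f = 0.9:

```python
# Evaluate Amdahl's speedup S = 1 / ((1 - f) + f / N) for f = 0.9:
# the speedup saturates at 1 / (1 - f) = 10 no matter how large N gets.

def amdahl_speedup(f: float, n: int) -> float:
    """Speedup when a fraction f of the work runs in parallel on n units."""
    return 1.0 / ((1.0 - f) + f / n)

f = 0.9
for n in (1, 2, 10, 100, 1000, 1_000_000):
    print(f"N = {n:>9}: speedup = {amdahl_speedup(f, n):.3f}")
```

With 90% of the work parallelized, even a million processing units cannot push the speedup past 1/(1-f) = 10.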
Gustafson's law
Amdahl's point of view focuses on a fixed problem size, since it deals with a code taking a fixed amount of sequential calculation time. Gustafson's objection is that massively parallel machines make previously infeasible computations possible, since they enable computations on very large data sets in a fixed amount of time. In other words, a parallel platform does more than speed up the execution of a code: it enables dealing with larger problems.
Suppose you have an application taking a time $t_s$ to be executed on $N$ processing units. Of that computing time, a fraction $1-f$ must be run sequentially. Accordingly, this application would run on a fully sequential machine in a time $t$ equal to

$$t = (1 - f)\,t_s + N f t_s$$

If we increase the problem size, we can increase the number of processing units so as to keep the time spent in the parallel part of the code equal to $f\,t_s$. In this case, the sequential execution time $t$ increases with $N$, which now becomes a measure of the problem size. The speedup then becomes

$$S = \frac{t}{t_s} = (1 - f) + N f$$

The efficiency would then be

$$E = \frac{S}{N} = \frac{1 - f}{N} + f$$

so that the efficiency tends to $f$ for increasing $N$.
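The same kind of numeric sketch (again my own illustration) shows the scaled speedup growing linearly while the efficiency converges to f:

```python
# Evaluate Gustafson's scaled speedup S = (1 - f) + N*f and the
# efficiency E = S / N for f = 0.9: S grows linearly with N while
# E settles at f, matching the limit derived above.

def gustafson_speedup(f: float, n: int) -> float:
    """Scaled speedup when the parallel part is kept at a fraction f."""
    return (1.0 - f) + n * f

f = 0.9
for n in (1, 2, 10, 100, 1000):
    s = gustafson_speedup(f, n)
    print(f"N = {n:>5}: speedup = {s:8.1f}, efficiency = {s / n:.3f}")
```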
The pitfall of these rather optimistic speedup and efficiency evaluations is that, as the problem size increases, communication costs also increase, and Gustafson's law does not account for them.
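As a purely hypothetical illustration of that pitfall, the toy model below divides Gustafson's scaled speedup by an overhead term that grows with N; the (1 + alpha*log2(N)) form and the value of alpha are my own assumptions, chosen only to show how any growing communication cost pulls the efficiency back below f:

```python
# Toy model (the overhead form and alpha are assumptions, not part of
# Gustafson's law): divide the scaled speedup by (1 + alpha*log2(N)) to
# mimic communication costs that grow with the machine size. Any such
# growing term pulls the efficiency back below f.

import math

def gustafson_with_comm(f: float, n: int, alpha: float) -> float:
    """Scaled speedup penalized by a hypothetical communication overhead."""
    ideal = (1.0 - f) + n * f
    overhead = 1.0 + alpha * math.log2(n) if n > 1 else 1.0
    return ideal / overhead

f, alpha = 0.9, 0.05
for n in (2, 10, 100, 1000):
    s = gustafson_with_comm(f, n, alpha)
    print(f"N = {n:>5}: speedup = {s:7.1f}, efficiency = {s / n:.3f}")
```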
References
G. Barlas, Multicore and GPU Programming: An Integrated Approach, Morgan Kaufmann.
M.D. Hill and M.R. Marty, "Amdahl's Law in the Multicore Era," Computer, vol. 41, no. 7, pp. 33–38, Jul. 2008.
GPGPU
There are interesting discussions of Amdahl's law as applied to General-Purpose Graphics Processing Units; see:

- Amdahl's law and GPU
- Amdahl's Law for GPU
- Is Amdahl's law accepted for GPUs too?
We are looking at the same problem from different perspectives. Amdahl's law asks: if you have, say, 100 CPUs, how much faster can you solve the same problem?
Gustafson's law asks: if a parallel computer with 100 CPUs can solve a problem in 30 minutes, how long would it take a computer with just ONE such CPU to solve the same problem?
Gustafson's law reflects real situations better: problem sizes grow along with the hardware. For example, we cannot use a 20-year-old PC to play most of today's video games, because the old machine is just too slow for them.
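To make the two perspectives concrete, here is a small side-by-side Python sketch (the values f = 0.95 and N = 100 are my own, chosen only for illustration):

```python
# Same machine, two questions: Amdahl asks how much faster the SAME
# problem gets; Gustafson asks how much MORE problem fits in the same
# time. Illustrative values: f = 0.95, N = 100.

def amdahl(f: float, n: int) -> float:
    return 1.0 / ((1.0 - f) + f / n)

def gustafson(f: float, n: int) -> float:
    return (1.0 - f) + n * f

f, n = 0.95, 100
print(f"Amdahl    (fixed problem size):  {amdahl(f, n):6.2f}x")    # ~16.81x
print(f"Gustafson (scaled problem size): {gustafson(f, n):6.2f}x")  # 95.05x
```

For the same machine, Amdahl reports about a 17x gain on a fixed problem, while Gustafson reports roughly 95x worth of extra problem solved in the same time.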