Is there a formal definition of what it means for a parallel code to be scalable, or is "scalable" just a trendy word? If I measure the serial wall time t_S and the parallel wall time t(P) on P processors, I can define the parallel efficiency as E(P) = t_S / (t(P) * P). Is there a criterion for how this efficiency must behave as P (and the problem size) grows for the code to be deemed scalable?
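For concreteness, here is a minimal sketch (Python, with made-up timings) of the efficiency I have in mind:

```python
# Sketch of the strong-scaling efficiency E(P) = t_S / (t(P) * P)
# described above. All timings are made-up illustrative numbers.
t_serial = 100.0                        # serial wall time t_S (seconds)
t_parallel = {1: 100.0, 2: 52.0,        # measured parallel wall times t(P)
              4: 28.0, 8: 16.0}

for p, t_p in sorted(t_parallel.items()):
    speedup = t_serial / t_p            # S(P) = t_S / t(P)
    efficiency = speedup / p            # E(P) = t_S / (t(P) * P)
    print(f"P={p:2d}  speedup={speedup:5.2f}  efficiency={efficiency:4.2f}")
```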
Scalable means that performance, i.e. the ability to handle increasingly large workloads, improves as you add CPU cores (scaling up) or machines (scaling out). Serial code is therefore not scalable; parallel code can be. Amdahl's law limits how scalable a given system can be.
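As a rough illustration (with an assumed, purely hypothetical parallel fraction of 95%), Amdahl's law S(P) = 1 / ((1 - f) + f / P) caps the achievable speedup at 1 / (1 - f), no matter how many cores you add:

```python
# Amdahl's law with a hypothetical parallelizable fraction f = 0.95.
# S(P) = 1 / ((1 - f) + f / P); the speedup can never exceed 1 / (1 - f).
f = 0.95                                   # assumed parallel fraction

for p in (1, 2, 4, 8, 16, 64, 1024):
    speedup = 1.0 / ((1.0 - f) + f / p)
    print(f"P={p:5d}  speedup={speedup:6.2f}")

print(f"upper bound as P -> infinity: {1.0 / (1.0 - f):.1f}x")
```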
Scalability is often more important than efficiency. A scalable but inefficient system can handle more load simply by adding hardware; an efficient but unscalable system requires major code rework to handle larger loads.
See Amdahl's law and Gustafson's law for formal definitions of some common scalability metrics.