 

CPU bound applications vs. IO bound

For 'number-crunching' style applications that use a lot of data (read: hundreds of MB, but not into the GB range, i.e. it will fit comfortably in memory alongside the OS), does it make sense to read all your data into memory before processing starts, so the program loads from RAM instead of risking becoming I/O bound while reading large related datasets?

Does the answer change with different data backings? That is, would it be the same irrespective of whether you were using XML files, flat files, a full DBMS, etc.?

asked Oct 26 '09 by Matthew Scharley


People also ask

Why is CPU-bound faster than I/O-bound?

Programs that are I/O bound are often slower than CPU-bound programs. Due to the use of the input-output system, the time spent waiting for data to be read or written can be substantial. This is considerably slower than the time it takes a processor to complete operations.

Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound programs?

Answer: I/O-bound programs have the property of performing only a small amount of computation before performing I/O. Such programs typically do not use up their entire CPU quantum.

What is meant by I/O bound?

It refers to programs with a large number of I/O (input/output) operations, which leave the CPU idle while data is read or written.

What is the difference between CPU-bound and I/O-bound processes?

CPU bound means the rate at which a process progresses is limited by the speed of the CPU. A task that performs calculations on a small set of numbers, for example multiplying small matrices, is likely to be CPU bound. I/O bound means the rate at which a process progresses is limited by the speed of the I/O subsystem.
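
To make the contrast concrete, here is a minimal Python sketch (not from the original answers; the file path "data.bin" is a placeholder) in which one task is limited by arithmetic and the other by how fast the disk delivers bytes:

    import time

    def cpu_bound(n=2_000_000):
        # Pure arithmetic: the CPU never waits on a device.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def io_bound(path="data.bin"):  # "data.bin" is a placeholder file
        # Almost all time here is spent blocked inside read(),
        # waiting on the I/O subsystem rather than the CPU.
        total = 0
        with open(path, "rb") as f:
            while chunk := f.read(64 * 1024):
                total += len(chunk)
        return total

    start = time.perf_counter()
    cpu_bound()
    print(f"CPU-bound task: {time.perf_counter() - start:.3f}s")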


2 Answers

Your program is only as fast as its bottleneck. It makes sense to do things like storing your data in memory if that improves overall performance, but there is no hard and fast rule that says it will. When you fix one bottleneck, something else becomes the bottleneck, so resolving one issue may yield a 1% increase in performance or a 1000% increase depending on what the next bottleneck is. And even after an improvement, the thing you improved may still be the bottleneck.

I think about these things as generally fitting into one of three levels:

  1. Eager. When you need something from disk, from a network, or as the result of a calculation, you go and get or compute it right then. This is the simplest to program and the easiest to test and debug, but the worst for performance. It is fine so long as this aspect isn't the bottleneck;
  2. Lazy. Once you've done a particular read or calculation, don't do it again for some period of time, which may be anything from a few milliseconds to forever. This can add a lot of complexity to your program, but if the read or calculation is expensive it can reap enormous benefits; and
  3. Over-eager. This is much like a combination of the previous two. Results are cached, but instead of doing the read or calculation only when requested, there is a certain amount of preemptive activity to anticipate what you might want. For example, if you read a 10K block from a file, there is a reasonably high likelihood that you will later want the next 10K block, so rather than delay execution you fetch it just in case it's requested (see the sketch after this list).
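
As an illustration of levels 2 and 3, here is a hedged Python sketch (the names expensive_square and ReadAheadFile are invented for this example, not taken from the answer). The lazy level is plain memoization; the over-eager level prefetches the next block in a background thread:

    import functools
    from concurrent.futures import ThreadPoolExecutor

    # Lazy: do the expensive read or calculation once, then serve
    # repeat requests from a cache.
    @functools.lru_cache(maxsize=None)
    def expensive_square(n):
        return n * n  # stand-in for a costly read or calculation

    # Over-eager: while the caller processes block N, fetch block N+1
    # in the background so it is (hopefully) ready when asked for.
    class ReadAheadFile:
        def __init__(self, path, block_size=10 * 1024):
            self._f = open(path, "rb")
            self._block_size = block_size
            self._pool = ThreadPoolExecutor(max_workers=1)
            self._pending = self._pool.submit(self._f.read, self._block_size)

        def read_block(self):
            block = self._pending.result()        # collect the prefetched block
            self._pending = self._pool.submit(    # immediately queue the next one
                self._f.read, self._block_size)
            return block

        def close(self):
            self._pool.shutdown()
            self._f.close()

Note how much extra machinery the over-eager version needs compared to a plain read loop; that complexity cost is exactly what the Knuth quote below warns about.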

The lesson to take from this is the (somewhat over-used and often mis-quoted) quote from Donald Knuth that "premature optimization is the root of all evil." Eager and over-eager solutions add a huge amount of complexity so there is no point doing them for something that won't yield a useful benefit.

Programmers often make the mistake of creating some highly (allegedly) optimized version of something before determining whether they need to and whether it will actually be useful.

My own take on this is: don't solve a problem until you have a problem.

answered by cletus


I would guess that choosing the right data storage method will have more effect than whether you read from disk all at once or as needed.

Most database tables have regular offsets for the fields in each row. For example, a customer record may be 50 bytes long with a pants_size column starting at the 12th byte. Selecting all pants sizes is then as easy as reading the values at offsets 12, 62, 112, 162, ad nauseam.
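
A sketch of what that looks like in Python (the file name, the 50-byte record size, and the one-byte field width are just the illustrative numbers from above, not a real schema):

    # Fixed-offset access: jump straight to the field in each record.
    RECORD_SIZE = 50
    PANTS_SIZE_OFFSET = 12
    PANTS_SIZE_WIDTH = 1  # assume a one-byte field for illustration

    def all_pants_sizes(path="customers.dat"):
        sizes = []
        with open(path, "rb") as f:
            record = 0
            while True:
                # Seek directly to offset 12, 62, 112, 162, ...
                f.seek(record * RECORD_SIZE + PANTS_SIZE_OFFSET)
                field = f.read(PANTS_SIZE_WIDTH)
                if len(field) < PANTS_SIZE_WIDTH:
                    break  # ran past the last record
                sizes.append(field[0])
                record += 1
        return sizes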

XML, however, is a lousy format for fast data access. You'll need to slog through a bunch of variable-length tags and attributes to get at your data, and you won't be able to jump instantly from one record to the next, unless you first parse the file into a data structure like the one mentioned above, in which case you'd have something very much like an RDBMS, so there you go.
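
A minimal sketch of that parse-once approach, with invented file, tag, and attribute names:

    import xml.etree.ElementTree as ET

    def load_customers(path="customers.xml"):  # hypothetical file and schema
        # One slow, sequential pass over the variable-length XML...
        root = ET.parse(path).getroot()
        # ...builds a regular in-memory structure with cheap field access.
        return [
            {"name": c.get("name"), "pants_size": int(c.get("pants_size"))}
            for c in root.findall("customer")
        ]

    # Usage, once the file exists:
    # customers = load_customers()
    # pants_sizes = [c["pants_size"] for c in customers]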

answered by Brendan Berg