What is the difference between programmed (polled) I/O, interrupt-driven I/O, and direct memory access (DMA)?

What is the difference between programmed (polled) I/O, interrupt-driven I/O, and direct memory access (DMA)? Are these forms of I/O dependent on the operating system?

I've read through the question dma vs interrupt-driven i/o, but it seems to me that the responses are both unclear and contradictory.

The Pointer asked Nov 11 '16



2 Answers

Polled (or programmed) I/O: The CPU periodically checks whether any I/O requests are pending. If there aren't, it continues with its normal workflow; if there are, it handles the I/O request instead.
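
As a rough sketch in C of what that polling loop might look like (the device_status and device_data registers and the STATUS_DATA_READY bit below are hypothetical, not from any real device):

#include <stdint.h>

#define STATUS_DATA_READY 0x01            /* hypothetical "data ready" bit */

extern volatile uint8_t device_status;    /* hypothetical memory-mapped status register */
extern volatile uint8_t device_data;      /* hypothetical memory-mapped data register */

void poll_device(uint8_t *buffer, int length)
{
    for (int i = 0; i < length; i++) {
        /* CPU busy-waits until the device reports that a byte is available */
        while ((device_status & STATUS_DATA_READY) == 0)
            ;                             /* CPU time is burned in this spin loop */
        buffer[i] = device_data;          /* CPU itself copies the byte into RAM */
    }
}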

Interrupt-Driven I/O: The CPU doesn't need to check for I/O requests itself. When an I/O request is available, the device notifies the CPU with an interrupt, and the request is handled immediately by an interrupt service routine.
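
A minimal sketch of the interrupt-driven version, assuming a hypothetical register_irq_handler() hook and the same kind of hypothetical device register as above:

#include <stdint.h>

extern volatile uint8_t device_data;                          /* hypothetical data register */
void register_irq_handler(int irq, void (*handler)(void));    /* hypothetical OS/BSP hook */

static uint8_t rx_buffer[512];
static volatile int rx_index = 0;

/* Interrupt service routine: runs only when the device raises an interrupt,
   so the CPU never spends time checking for requests */
static void device_isr(void)
{
    if (rx_index < (int)sizeof(rx_buffer))
        rx_buffer[rx_index++] = device_data;
}

void setup_device_irq(void)
{
    register_irq_handler(5, device_isr);    /* IRQ number 5 is an arbitrary example */
}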

DMA: DMA is separate from, but usually combined with, interrupt-driven I/O. It offloads the actual data movement from the CPU: the CPU is only involved when it sets up a transfer and when the transfer completes (typically signalled by an interrupt). Without DMA, the CPU has to move every byte itself using programmed or interrupt-driven I/O.

DMA is a method that allows devices to access main memory without the CPU explicitly handling each transfer. When the CPU initiates a data transfer from an I/O device to main memory, it instructs the DMA controller to handle the task. The CPU then "forgets" about the operation and proceeds with other tasks. When the DMA controller has completed the transfer, it signals the CPU using an interrupt, and the CPU "concludes" the work associated with the transfer it initiated.
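
Sketched in C, that hand-off might look something like this (the dma_setup()/dma_start() functions and the completion interrupt are placeholders for whatever a real DMA controller provides):

#include <stdint.h>

/* Hypothetical DMA controller interface - real hardware differs */
void dma_setup(uint32_t device_addr, void *ram_buffer, uint32_t length);
void dma_start(void);

static volatile int transfer_done = 0;

/* Raised by the DMA controller once the whole block has been copied to RAM */
void dma_complete_isr(void)
{
    transfer_done = 1;    /* CPU now "concludes" the transfer it set up earlier */
}

void read_block(uint32_t device_addr, uint8_t *buffer, uint32_t length)
{
    dma_setup(device_addr, buffer, length);   /* CPU only describes the transfer */
    dma_start();
    /* CPU is free to do other work here; completion arrives via the interrupt above */
}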

The availability of DMA and interrupt-driven I/O depends on the hardware (the CPU, the chipset, and the devices themselves). If the DMA and interrupt hardware exists, then the OS (and your programs) can use interrupt-driven I/O and DMA. Otherwise, I/O requests must be checked periodically by polling.

BiN4RY answered Sep 26 '22


What is the difference between programmed (polled) I/O, interrupt-driven I/O, and direct memory access (DMA)?

For programmed IO (PIO) you might have a loop, like:

uint8_t dest_buffer[512];

for (int i = 0; i < 512; i++) {
    dest_buffer[i] = getNextByteFromDevice();   /* the CPU itself fetches each byte */
}

The defining characteristic is that CPU time is being spent to move data between the device and RAM. In this case, the software/driver knows the transfer has completed when the loop finishes, so nothing else is needed.

Note: It is wrong to call this "polled IO". There is no polling involved. The "PIO" acronym has always stood for "Programmed Input/Output".

For interrupt-driven IO, the device generates an interrupt for each small piece of data that needs to be transferred (e.g. the interrupt handler might do something like "dest_buffer[i++] = getNextByteFromDevice();"). For performance this is horrific (the cost of an interrupt is expensive on modern systems), so interrupt-driven IO was mostly only used for ancient and very slow devices (e.g. serial port controllers before FIFOs were added to them in the early 1990s).

For DMA, you'd configure some kind of DMA controller to do the transfer for you (e.g. tell a DMA controller chip the address of your buffer in RAM, the size of the transfer, etc.). In this case you don't know when the transfer has finished, so you either poll the DMA controller, poll the device, or arrange for some kind of interrupt to occur when the transfer is completed. Note that polling can make DMA relatively pointless unless the CPU can do other work before it starts polling (otherwise the CPU time spent polling could've been spent doing the transfer without DMA instead).
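
As a register-level illustration of that configuration step (every register name and bit below is made up; real DMA controllers differ wildly):

#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers */
#define DMA_SRC    (*(volatile uint32_t *)0x40001000)   /* device/source address */
#define DMA_DST    (*(volatile uint32_t *)0x40001004)   /* RAM buffer address    */
#define DMA_COUNT  (*(volatile uint32_t *)0x40001008)   /* bytes to transfer     */
#define DMA_CTRL   (*(volatile uint32_t *)0x4000100C)   /* start / IRQ enable    */
#define DMA_STATUS (*(volatile uint32_t *)0x40001010)   /* completion flag       */

void dma_read(uint32_t device_addr, void *dest_buffer, uint32_t length)
{
    DMA_SRC   = device_addr;
    DMA_DST   = (uint32_t)(uintptr_t)dest_buffer;   /* assumes 32-bit physical addresses */
    DMA_COUNT = length;
    DMA_CTRL  = 0x3;   /* hypothetical: bit 0 = start, bit 1 = interrupt on completion */

    /* Alternatively, poll the controller's status register for completion; this is
       only worthwhile if the CPU had other useful work to do before reaching this point */
    while ((DMA_STATUS & 0x1) == 0)
        ;
}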

A fourth option (which is related to DMA but much more common on modern systems) is for devices to do their own bus mastering without any extra/external DMA controller. For example, instead of telling a disk controller to fetch some data from disk and then configuring a DMA controller to transfer the data to RAM, you might tell the disk controller to fetch the data from disk and transfer it to RAM itself. This makes it easier to have many devices doing work in parallel (without fighting over a DMA controller that is shared by all devices on the bus), makes device drivers more self-contained, and can reduce the need for CPU attention. The last point matters especially for things like network cards where data can arrive at any time: the device can use one interrupt to say "data arrived and was transferred to RAM" instead of needing two different interrupts, one for "data arrived at the device" and a second for "data was transferred from device to RAM". The disadvantage is that it increases the cost of a device, which is something that became less important as the price of transistors dropped (mostly during the late 1980s).
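
A sketch of the bus-mastering case, where the device itself is told where the RAM buffer is (the "disk controller" registers here are invented purely for illustration and don't model any real controller):

#include <stdint.h>

/* Hypothetical bus-mastering disk controller registers */
#define DISK_LBA      (*(volatile uint32_t *)0x40002000)   /* sector to read         */
#define DISK_BUF_ADDR (*(volatile uint32_t *)0x40002004)   /* RAM buffer (phys addr) */
#define DISK_COUNT    (*(volatile uint32_t *)0x40002008)   /* sectors to transfer    */
#define DISK_CMD      (*(volatile uint32_t *)0x4000200C)   /* issue command          */

/* One command: the controller fetches from disk AND writes to RAM itself, then
   raises a single "done" interrupt - no separate/shared DMA controller involved */
void read_sectors(uint32_t lba, uint32_t phys_buffer, uint32_t count)
{
    DISK_LBA      = lba;
    DISK_BUF_ADDR = phys_buffer;
    DISK_COUNT    = count;
    DISK_CMD      = 0x1;   /* hypothetical "read" command */
}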

Are these forms of I/O dependent on the operating system?

They're dependent on the hardware and the OS. If a device only supports one option then the OS/driver must use that option. If a device supports 2 or more options then the OS/driver can choose which to use (and in some cases may dynamically switch between them based on other criteria - power consumption, IO priority, device load, CPU load, ...).

For hardware; most hardware doesn't provide a useful DMA controller (excluding the old "ISA bus DMA controller chip", which is only used by equally old devices like floppy controllers, where the devices themselves are so slow that it doesn't matter that the DMA controller is equally slow). However, modern high-end servers often do have DMA controllers/DMA engines (e.g. "Intel IO Acceleration Technology" - see https://en.wikipedia.org/wiki/I/O_Acceleration_Technology ); and often these DMA controllers are faster and/or more powerful than bus mastering (e.g. able to transfer data from one device to another device bypassing RAM, able to inject data directly into CPUs' caches to avoid cache misses, etc.). Sadly, because these modern DMA controllers are restricted to niche hardware, most operating systems and/or most device drivers either don't support them well or don't bother supporting them at all, so (even in their intended niche) the benefits of them existing are diminished by their scarcity.

Brendan answered Sep 24 '22