 

DMA vs interrupt-driven I/O

I'm a little unclear on the differences between DMA and interrupt I/O. (Currently reading Operating System Concepts, 7th ed.)

Specifically, I'm not sure when the interrupts occur in either case, and at what points in both cases the CPU is free to do other work. Things I've been reading, but can't necessarily reconcile:

Interrupt-driven

  1. Controller initialized via driver
  2. Controller examines registers loaded by driver in order to decide action
  3. Data transfer from/to peripheral and controller's buffer ensues.
  4. Controller issues an interrupt when? (On each byte read? On each word read? When the buffer fills? When the transfer is completed?)
  5. It is my understanding that the CPU is not doing anything while the peripheral <-> controller I/O is taking place, nor while the controller <-> MM I/O is taking place?
  6. When the transfer is done, or when a block fills up, the CPU must initiate a transfer from the controller's buffer to MM

DMA

  1. Same as above, except that the controller is able to transfer data from its buffer directly to MM without CPU intervention.
  2. Does this mean that the CPU is only interrupted when the whole transfer is complete, or is it still interrupted when a controller buffer fills up?
  3. Is the only difference that the CPU no longer has to wait for the controller <-> MM I/O, but still has to be interrupted when a controller buffer fills up? Or does DMA hide that from the CPU too?
asked Aug 14 '14 by Joney


2 Answers

I'm a little unclear on the differences between DMA and interrupt I/O

Contrasting DMA with interrupts is bogus, because they are not opposing concepts.
DMA and interrupts are orthogonal concepts, and both concepts are typically used together.

The alternative to DMA is programmed I/O, aka PIO.
The alternative to interrupts is polling.
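For concreteness, here is a minimal sketch of the polled-PIO combination in C, assuming a hypothetical memory-mapped device; the register addresses and bit names are invented for illustration, not any real chip's:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical memory-mapped device registers (addresses invented). */
    #define DEV_STATUS (*(volatile uint32_t *)0x40000000)
    #define DEV_DATA   (*(volatile uint8_t  *)0x40000004)
    #define STATUS_RX_READY 0x01u  /* set by the device when a byte is available */

    /* Polled PIO read: the CPU busy-waits on the status register and
     * copies every byte itself. No interrupts are involved, and the
     * CPU can do nothing else while this loop runs. */
    void pio_read_polled(uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            while (!(DEV_STATUS & STATUS_RX_READY))
                ;                  /* spin until the device has a byte */
            buf[i] = DEV_DATA;     /* the CPU moves the data itself (PIO) */
        }
    }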

Interrupt-driven

You need to be more specific about what you are referring to.
If the system is not using interrupts, then it would have to use polling to detect the change in status of a device.

PIO often uses an interrupt (from the device) to initiate each byte/word data transfer. This helps mitigate the CPU-intensive nature of PIO. A polled PIO transfer would otherwise totally consume CPU resources.
But to refer to "PIO with interrupts" as simply "interrupts" or "interrupt-driven" is inaccurate and misleading.
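By contrast, here is a sketch of "PIO with interrupts": the device raises an interrupt per byte, and the handler, not a busy-wait loop, moves the data. Again the register address is invented, and how the handler is registered and the interrupt is acknowledged is device-specific:

    #include <stdint.h>
    #include <stddef.h>

    #define DEV_DATA (*(volatile uint8_t *)0x40000004)  /* invented address */

    static uint8_t rx_buf[256];
    static volatile size_t rx_count;

    /* "PIO with interrupts": the device interrupts once per received
     * byte, and this handler performs the actual data movement. The
     * CPU is free between bytes, but still pays one interrupt plus
     * one copy per byte. */
    void dev_rx_isr(void)
    {
        if (rx_count < sizeof rx_buf)
            rx_buf[rx_count++] = DEV_DATA;  /* the CPU still does the copy */
        /* acknowledge/clear the device interrupt here (device-specific) */
    }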

DMA transfers almost always employ a completion interrupt (from the DMA controller) to notify the CPU that a buffer transfer is complete.
Polling for DMA completion (instead of using a completion interrupt) places on the CPU exactly the burden that DMA is supposed to relieve. I have seen a bootloader that initiated a DMA transfer and then polled for completion, but that is a single-task environment that can afford to busy-wait, whereas an operating system needs to maximize CPU availability. That means using DMA with completion interrupts.
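The usual pattern looks roughly like this sketch, against an invented DMA controller (all register addresses and bits are assumptions, not a real part's layout):

    #include <stdint.h>

    /* Hypothetical DMA controller registers (addresses and bits invented). */
    #define DMA_SRC    (*(volatile uint32_t *)0x40001000)
    #define DMA_DST    (*(volatile uint32_t *)0x40001004)
    #define DMA_LEN    (*(volatile uint32_t *)0x40001008)
    #define DMA_CTRL   (*(volatile uint32_t *)0x4000100C)
    #define DMA_STATUS (*(volatile uint32_t *)0x40001010)
    #define CTRL_START  0x01u
    #define CTRL_IRQ_EN 0x02u
    #define STATUS_DONE 0x01u

    static volatile int dma_done;

    /* Program the controller and return: it moves the whole block to
     * main memory while the CPU runs other work. */
    void dma_start(uint32_t src, uint32_t dst, uint32_t len)
    {
        dma_done = 0;
        DMA_SRC  = src;
        DMA_DST  = dst;
        DMA_LEN  = len;
        DMA_CTRL = CTRL_START | CTRL_IRQ_EN;  /* one interrupt at completion */
    }

    /* Completion interrupt: fires once per programmed buffer, not per byte. */
    void dma_isr(void)
    {
        DMA_STATUS = STATUS_DONE;  /* acknowledge (write-1-to-clear, assumed) */
        dma_done = 1;              /* wake whatever was waiting on the buffer */
    }

The bootloader variant mentioned above would skip dma_isr and simply spin with while (!(DMA_STATUS & STATUS_DONE)) ; which reimposes exactly the CPU cost that DMA is meant to remove.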

Discussing "interrupts" without providing specific context, e.g. the source of these interrupts and the reason they are generated, is probably responsible for your confusion.

  1. Controller initialized via driver
  2. Controller examines registers loaded by driver in order to decide action
  3. Data transfer from/to peripheral and controller's buffer ensues.
  4. Controller issues an interrupt when? (On each byte read? On each word read? When the buffer fills? When the transfer is completed?)
  5. It is my understanding that the CPU is not doing anything while the peripheral <-> controller I/O is taking place, nor while the controller <-> MM I/O is taking place?
  6. When the transfer is done, or when a block fills up, the CPU must initiate a transfer from the controller's buffer to MM

An issue I see with your questions is that you're posing an ambiguous configuration.
You mention a "peripheral", a "controller", the CPU and "MM" (perhaps main memory?).

From a software perspective, the peripheral connection could be one of the following topologies:

A. CPU <--> device

B. CPU <--> controller -- [device or medium]

C. CPU <--> bus -- device
D. CPU <--> bus -- controller -- [device or medium]

Connection A typifies a device that the CPU can access directly, such as a local UART for a serial port. There may be buses involved in the hardware implementation, but they're invisible to software.

Connection B typifies a device that interfaces to the CPU through a device controller, e.g. a MultiMediaCard (MMC) controller in front of an SD card, or an IDE (Integrated Drive Electronics) disk drive. Unlike A, the CPU has to interface only with the device controller, and not with the device itself. The interactions between the controller and its device are typically not controlled by the CPU and are minimally monitored (if at all). The controller exists to simplify the interface between the CPU and its device.

Connections C and D typify a device or controller that the CPU can access only indirectly over a bus (e.g. USB, SPI, or SATA), such as a USB-to-Ethernet adapter or SPI NOR flash. Commands to the device or its controller need to be transmitted over the bus. For example, ATAPI commands to the disk controller have to be transmitted through the SATA controller. Unlike A, the bus controller is the interface with which the CPU directly performs I/O.

So your #3 and the peripheral <-> controller part of your #5 are irrelevant: the CPU is not involved. Also, you cannot generalize about the controller-to-device interface, because it can be unique to each peripheral subsystem. One controller may buffer just one byte, whereas another controller may buffer an entire block in order to verify the ECC.

DMA

  1. Same as above, except that the controller is able to transfer data from its buffer directly to MM without CPU intervention.

  2. Does this mean that the CPU is only interrupted when the whole transfer is complete, or is it still interrupted when a controller buffer fills up?

  3. Is the only difference that the CPU no longer has to wait for the controller <-> MM I/O, but still has to be interrupted when a controller buffer fills up? Or does DMA hide that from the CPU too?

These scenarios and questions barely make sense as posed: the direction of the transfer is unspecified (i.e., is the CPU performing a read or a write operation?).

DMA transfers almost always employ a completion interrupt (from the DMA controller) to notify the CPU that a buffer transfer is complete.

You repeatedly use the phrase "when a controller buffer fills up" without specifying the source of this data. If you're asking about device-to-controller I/O, then such I/O is typically of minimal concern to the CPU, and its status indications are controller-specific.

You seem to be asking about a block-type transfer. Understanding block transfers does not necessarily confer understanding of character-based I/O.
For a derivative question on character-based (UART) I/O, see Master for Interrupt based UART IO

answered Nov 25 '22 by sawdust


In the case of interrupt-driven I/O, the MCU gets an interrupt on each byte or word, depending on what the microcontroller facilitates. For each byte/word received, the MCU leaves its normal mode of operation and enters interrupt handling. There the MCU can do nothing but read the data from the I/O device and copy it into memory.

In the case of DMA, the DMA controller does the same work that the MCU does in the interrupt case, so the MCU is free to do anything else. You can configure the DMA controller with the number of bytes after which you want an interrupt. This differs from interrupt-driven I/O in that the MCU does not get an interrupt for every byte or word; instead, it gets an interrupt from the DMA controller only when the configured amount of data has been received. Moreover, the DMA controller has already copied the data from the I/O device to RAM, so the MCU need not spend effort on the copy either, which is a big time saving.

So if you have configured the DMA controller to interrupt after 1 KB of data, your MCU will get 1 interrupt per KByte, whereas it would get 1 K interrupts with interrupt-driven I/O. The use of DMA thus reduces the number of interrupts and increases performance in comparison with interrupt-driven I/O.
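As a sketch of what that configuration might look like on an MCU (the DMA channel registers below are invented for illustration, not a real part's):

    #include <stdint.h>

    /* Hypothetical MCU DMA channel registers (names and bits invented). */
    #define DMA_CH0_ADDR  (*(volatile uint32_t *)0x40002000)
    #define DMA_CH0_COUNT (*(volatile uint32_t *)0x40002004)
    #define DMA_CH0_CTRL  (*(volatile uint32_t *)0x40002008)
    #define CH_ENABLE      0x01u
    #define CH_IRQ_ON_DONE 0x02u

    static uint8_t rx_block[1024];

    /* Receive 1 KB via DMA: the peripheral fills rx_block in RAM, and
     * the MCU takes exactly one interrupt, instead of the 1024
     * per-byte interrupts that interrupt-driven I/O would cost. */
    void dma_rx_1kb(void)
    {
        DMA_CH0_ADDR  = (uint32_t)(uintptr_t)rx_block;
        DMA_CH0_COUNT = sizeof rx_block;             /* interrupt threshold */
        DMA_CH0_CTRL  = CH_ENABLE | CH_IRQ_ON_DONE;  /* 1 interrupt per 1 KB */
    }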

That's why DMA is mostly used when there is a demand for frequent transfers of big chunks of data.

answered Nov 25 '22 by Vicky