 

What would the system software have to do if the processor did not generate interrupts?

Question related to interrupts

If a processor didn't have the ability to service interrupts, what would the system software have to do to make sure that each keyboard keystroke was detected, each mouse movement was registered, Ethernet (network) data was processed correctly, and files were successfully loaded into memory?

Hagop Tavoyan asked Mar 03 '23


2 Answers

If a processor didn't have the ability to service interrupts, what would the system software have to do to make sure that each keyboard keystroke was detected, each mouse movement was registered, Ethernet (network) data was processed correctly, and files were successfully loaded into memory?

There are two ways to do IO.

Asynchronous IO

The good way to do IO is for the CPU to ask the device to perform the IO, then let the CPU do other work (including running other programs) until an IRQ occurs to inform the OS that the IO request completed. For a computer with (e.g.) 4 CPUs and 20 devices, this allows all 4 CPUs and all 20 devices to be doing useful work at the same time. This is the method used by every modern OS (including modern "single-tasking" operating systems).
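A minimal sketch of that interrupt-driven model, for contrast. The device, its registers and the interrupt wiring are hypothetical, and the "hardware" is simulated so the sketch is self-contained:

```c
/* Sketch of interrupt-driven IO. Everything device-related is hypothetical;
 * the IRQ is faked so the program actually runs and terminates. */
#include <stdbool.h>
#include <stdio.h>

static volatile bool transfer_done = false;

/* In a real kernel this would be registered with the interrupt controller
 * and would only run when the device raises its IRQ line. */
static void disk_irq_handler(void)
{
    transfer_done = true;
}

/* Placeholder: a real driver would write the sector number and a DMA address
 * into the controller's registers and return immediately. */
static void start_disk_read(void)
{
}

int main(void)
{
    start_disk_read();

    /* The CPU is free for other work until the IRQ arrives; here we fake the
     * IRQ partway through the "other work" so the sketch terminates. */
    for (int i = 0; i < 1000; i++) {
        if (i == 500)
            disk_irq_handler();   /* stands in for the hardware interrupt */
        if (transfer_done)
            break;
    }
    printf("transfer complete, start the next request\n");
    return 0;
}
```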

In this case, without IRQs you'd have to emulate them by checking for IO request completion extremely often (e.g. maybe insert a check_for_IO_completion() function call everywhere). "How often" is a compromise between device performance (latency: how much time can pass between an IO request completing and the OS noticing and being able to start the next request) and CPU performance (how much CPU time is wasted on all the check_for_IO_completion() calls). Of course (especially if there's any security, e.g. "kernel space" isolated from "user space") there is no good compromise: regardless of how much of which kind of performance you sacrifice, you can't expect performance to match what it would have been if IRQs were used.
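A minimal sketch of what that emulation looks like, assuming a made-up memory-mapped status register and completion bit (names and address are illustrative only):

```c
/* Sketch of emulating IRQs by checking a device status register very often.
 * DEVICE_STATUS_REG, DONE_BIT and the address are made up for illustration;
 * a real driver would use the controller's documented registers and bits. */
#include <stddef.h>
#include <stdint.h>

#define DONE_BIT 0x01u

/* Hypothetical memory-mapped status register. */
static volatile uint32_t *const DEVICE_STATUS_REG = (volatile uint32_t *)0xFEDC0000u;

/* Start the next queued request, wake whoever was waiting, etc. */
static void handle_completed_request(void)
{
}

/* The call that has to be sprinkled "often enough" through all code. */
void check_for_IO_completion(void)
{
    if (*DEVICE_STATUS_REG & DONE_BIT)
        handle_completed_request();
}

/* Ordinary code ends up littered with these calls; leave them out of a long
 * loop and device latency explodes, put them everywhere and CPU time is
 * wasted. That's the compromise described above. */
void copy_large_buffer(uint8_t *dst, const uint8_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        dst[i] = src[i];
        if ((i & 0xFFFu) == 0)       /* check every 4 KiB of copying */
            check_for_IO_completion();
    }
}
```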

Synchronous IO

The bad way to do IO is for the CPU to ask the device to perform the IO and then constantly poll that device until the IO is completed. For a computer with (e.g.) 4 CPUs and 20 devices, you can never have more than 4 things happening at the same time (each of the 4 CPUs is either doing useful work or waiting for an IO request to complete), and you will always have at least 20 pieces of hardware (devices or CPUs) being wasted/unable to do useful work.
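A minimal sketch of that busy-wait pattern, again with made-up register names and addresses:

```c
/* Sketch of fully synchronous IO: start the transfer, then spin on the status
 * register until the device finishes. Register names, bits and addresses are
 * illustrative, not a real controller interface. */
#include <stdint.h>

#define BUSY_BIT 0x80u

static volatile uint32_t *const DISK_COMMAND_REG = (volatile uint32_t *)0xFEDC0004u;
static volatile uint32_t *const DISK_STATUS_REG  = (volatile uint32_t *)0xFEDC0008u;

void read_sector_blocking(uint32_t sector)
{
    *DISK_COMMAND_REG = sector;            /* tell the device what to do */
    while (*DISK_STATUS_REG & BUSY_BIT)    /* the CPU does nothing useful here; with 4 CPUs */
        ;                                  /* and 20 devices, at most 4 things make progress */
}
```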

This is the method that is used by firmware (e.g. BIOS and UEFI), because it's simpler, and because firmware is only used very briefly to start an OS (which mostly involves "load kernel from disk" and doesn't benefit much from allowing many pieces of hardware to work at the same time), and because firmware is discarded/ignored after that. There were also a few horrible old operating systems that used this approach (e.g. MS-DOS).

For this method IRQs aren't really used, so it doesn't matter much if there are no IRQs.

Note: there are some kinds of devices that transfer data without software requesting it (e.g. keyboard, mouse, a network card receiving but not sending packets). Without IRQs and with synchronous IO, these kinds of devices need buffers large enough to store the data until software requests it, and/or some way to deal with buffer overflows causing lost data (e.g. TCP/IP retries).
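For example, a keyboard-style FIFO that the polling loop drains; the layout and size here are hypothetical, and anything that arrives after the FIFO fills (before the next poll) is simply lost:

```c
/* Sketch of the buffering problem for devices that produce data unasked
 * (keyboard, mouse, NIC receive). Whatever arrives between two polls must fit
 * in the device-side buffer, or it is lost and a higher layer (e.g. TCP
 * retransmission) has to cope. */
#include <stdbool.h>
#include <stdint.h>

#define KBD_FIFO_SIZE 16u        /* however much the hardware happens to provide */

struct kbd_fifo {
    uint8_t  data[KBD_FIFO_SIZE];
    unsigned head, tail;         /* head advanced by the device, tail by software */
    bool     overflowed;         /* set by the device when keystrokes were dropped */
};

/* Called from the polling loop: drain everything that arrived since last time. */
unsigned kbd_poll(struct kbd_fifo *f, uint8_t *out, unsigned max)
{
    unsigned n = 0;
    while (f->tail != f->head && n < max) {
        out[n++] = f->data[f->tail];
        f->tail = (f->tail + 1) % KBD_FIFO_SIZE;
    }
    if (f->overflowed) {
        f->overflowed = false;   /* nothing to recover; those keystrokes are gone */
    }
    return n;
}
```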

Brendan answered Apr 25 '23


Polling, plus cooperative multi-tasking: yield() calls in all code (including user-space) to give the OS the CPU frequently so it can poll the hardware.
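A rough sketch of what that looks like, with purely illustrative function names (nothing here is a real kernel API, and the stubs are empty placeholders):

```c
/* Sketch of the cooperative model: every yield() gives the OS a chance to
 * poll the hardware and switch tasks. */

static void poll_all_devices(void)    { /* run every driver's poll routine */ }
static void switch_to_next_task(void) { /* cooperative scheduler picks another runnable task */ }

void yield(void)
{
    poll_all_devices();
    switch_to_next_task();
}

/* All code, including user programs, has to be written like this: */
void long_computation(void)
{
    for (int i = 0; i < 1000000; i++) {
        /* ... some work ... */
        if ((i % 1000) == 0)
            yield();    /* forget this and the keyboard, mouse and NIC all stall */
    }
}
```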

If literally servicing interrupts is the only thing you're not allowed to have, you could still have the hardware machinery of handling signals from devices and creating a priority queue of things that need servicing. So instead of looping through every driver and having it poll its own hardware, the polling could just check what, if anything, needs servicing with one I/O read. Maybe instead of interrupts, the CPU would have support for that queue right in the core so checking for pending things that need servicing doesn't even have to go off-core and only takes a few cycles.
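Sketched with a hypothetical "pending work" register and made-up bit assignments (the service_*() handlers are assumed to exist elsewhere in the kernel):

```c
/* Sketch of the "hardware keeps a pending-work queue" idea: one read of a
 * (hypothetical) register tells the OS which devices need servicing, instead
 * of every driver polling its own device. */
#include <stdint.h>

static volatile uint32_t *const PENDING_REG = (volatile uint32_t *)0xFEDC0010u;

enum {
    DEV_KEYBOARD = 1u << 0,
    DEV_MOUSE    = 1u << 1,
    DEV_NIC      = 1u << 2,
    DEV_DISK     = 1u << 3,
};

void service_keyboard(void);   /* per-device handlers, defined elsewhere */
void service_mouse(void);
void service_nic(void);
void service_disk(void);

/* Called from yield(): one read, then dispatch only to what actually needs work. */
void poll_pending_queue(void)
{
    uint32_t pending = *PENDING_REG;   /* reading could also clear the bits */
    if (pending & DEV_KEYBOARD) service_keyboard();
    if (pending & DEV_MOUSE)    service_mouse();
    if (pending & DEV_NIC)      service_nic();
    if (pending & DEV_DISK)     service_disk();
}
```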

This would be slightly less horrible and somewhat lower overhead, but it would still require all code everywhere in the system to yield() very frequently or else servicing of HW would lag far behind.

e.g. one infinite-loop bug in a loop that you expected to be short enough to get away with not calling yield(), and your entire system locks up unrecoverably except for the reset button. (Classic MacOS was like this, except you could still move the mouse when that happened.)


Presumably HW for this system would be designed not to need much CPU interaction, e.g. give the disk controller a command buffer with some DMA addresses.
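For example, a command-descriptor ring the CPU fills in plus a doorbell register to tell the controller about it; the layout here is hypothetical, loosely in the spirit of how real DMA-capable controllers are driven:

```c
/* Sketch of "give the disk controller a command buffer with DMA addresses":
 * the CPU fills in descriptors and rings a doorbell; the controller works
 * through them on its own. */
#include <stdint.h>

struct disk_command {
    uint64_t dma_address;     /* physical address the controller reads/writes */
    uint32_t sector;          /* starting LBA */
    uint16_t sector_count;
    uint8_t  write;           /* 0 = read, 1 = write */
    volatile uint8_t done;    /* set by the controller; polled by the CPU later */
};

#define QUEUE_DEPTH 32

static struct disk_command command_ring[QUEUE_DEPTH];
static unsigned ring_head;

/* Hypothetical "doorbell" register telling the controller new work is queued. */
static volatile uint32_t *const DISK_DOORBELL = (volatile uint32_t *)0xFEDC0020u;

void submit_read(uint64_t dma_addr, uint32_t sector, uint16_t count)
{
    struct disk_command *cmd = &command_ring[ring_head % QUEUE_DEPTH];
    cmd->dma_address  = dma_addr;
    cmd->sector       = sector;
    cmd->sector_count = count;
    cmd->write        = 0;
    cmd->done         = 0;
    ring_head++;
    *DISK_DOORBELL = ring_head;   /* controller picks it up with no further CPU help */
}
```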

You'd maybe integrate the mouse controller with the video card to do hardware cursor movement without involving the CPU.

A NIC would have a hardware receive queue with room to store multiple incoming ethernet frames between times the CPU checks on it. (I think real-life NICs work this way, at least good ones. Under high traffic conditions, real NIC drivers actually can/do switch to a polled mode instead of having the HW raise an interrupt for every incoming packet, to reduce CPU overhead. Especially for 10G / 100G ethernet with small frame sizes. But that polling is done from a timer interrupt at 100Hz or something, not just from yield().)
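A sketch of walking such a hardware receive ring from a polling loop; the descriptor layout is illustrative, not any specific NIC's format, and process_frame() stands in for the rest of the network stack:

```c
/* Sketch of polling a NIC's hardware receive ring: the NIC DMAs incoming
 * frames into pre-posted buffers and marks descriptors as ready, and the CPU
 * walks the ring whenever it gets around to it. */
#include <stdint.h>

#define RX_RING_SIZE 256

struct rx_descriptor {
    uint64_t buffer_addr;      /* where the NIC DMAs the frame */
    uint16_t length;           /* filled in by the NIC */
    volatile uint8_t ready;    /* set by the NIC when the frame is complete */
};

static struct rx_descriptor rx_ring[RX_RING_SIZE];
static unsigned rx_next;

void process_frame(const void *frame, uint16_t length);   /* rest of the stack */

/* Called from the polling loop (or, on real systems, from a timer tick). */
void nic_poll(void)
{
    while (rx_ring[rx_next].ready) {
        struct rx_descriptor *d = &rx_ring[rx_next];
        process_frame((const void *)(uintptr_t)d->buffer_addr, d->length);
        d->ready = 0;          /* hand the buffer back to the NIC */
        rx_next = (rx_next + 1) % RX_RING_SIZE;
    }
}
```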

and files were successfully loaded into memory?

Disk controllers don't know about files, just sectors of the block device. But yeah, reporting completion of a disk read request could be via polling.

Or on a low-end system, disk I/O could be done with programmed I/O, where the CPU has to read every byte or word separately and store it to memory itself. (IDE disk controllers on x86 used to work this way, with DMA as an option but some controllers had buggy DMA. So Linux used to default to PIO, and you could use hdparm to enable DMA. Although interrupts were still involved, I think, probably to let the CPU know when data was ready to be copied out of the disk's buffer in a burst. You don't want to leave the CPU spinning for several milliseconds waiting for the heads to seek before transfer can even start. But on a system with solid-state storage, like SSD / Flash, seek times are much smaller so you could just make it simplistic and block the whole system from initiating an I/O until that sector transfer was complete.)
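A sketch of that PIO pattern using the classic primary-IDE port numbers; the inb()/inw() port-IO helpers are assumed to exist (as in typical OS-dev code), and all error handling is omitted:

```c
/* Sketch of programmed I/O in the style of classic x86 IDE (ATA) PIO mode:
 * the CPU waits for the drive, then copies the sector out of the drive's
 * buffer one 16-bit word at a time. */
#include <stdint.h>

#define IDE_DATA    0x1F0   /* 16-bit data port */
#define IDE_STATUS  0x1F7   /* status register */
#define STATUS_BSY  0x80    /* drive is busy */
#define STATUS_DRQ  0x08    /* data is ready to transfer */

uint8_t  inb(uint16_t port);   /* assumed port-IO helpers, defined elsewhere */
uint16_t inw(uint16_t port);

void pio_read_sector(uint16_t *buffer)
{
    while (inb(IDE_STATUS) & STATUS_BSY)     /* could be milliseconds of seek time */
        ;                                    /* on a spinning disk, all of it wasted */
    while (!(inb(IDE_STATUS) & STATUS_DRQ))  /* wait until the drive has data ready */
        ;
    for (int i = 0; i < 256; i++)            /* 512 bytes = 256 words, all moved by the CPU */
        buffer[i] = inw(IDE_DATA);
}
```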

Peter Cordes answered Apr 25 '23