 

How did DOS execute multiple processes simultaneously?

DOS is always given as an example of a single-tasking operating system. However, when a command is issued through the command line, control switches from the shell to the command and then back to the shell when the command completes. Thus there are two processes executing simultaneously. Is there something wrong with my understanding?

asked Dec 28 '11 by vjain27


3 Answers

No, they weren't executing simultaneously.

COMMAND.COM had a resident portion that was in memory all the time and a transient portion that could be tossed out at will.

When you ran a program, it typically got loaded in place of the transient portion and then run. When the program exited, it did so by calling code in the resident portion which would then reload the transient portion if necessary and continue.

The fact that some of the code remained resident in no way means that it was "running". In a similar way, vast tracts of MS-DOS (the kernel) stayed continuously in memory, yet they weren't "running" unless called explicitly by a non-kernel program.
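Purely as an illustration of that shape (this is not real DOS or COMMAND.COM code; every name below is invented, and the real thing is assembly talking to DOS through INT 21h), the sequence looks roughly like this:

#include <stdio.h>

/* invented stand-ins for the pieces described above */
static void transient_portion(void)       /* the bulk of the shell: parser, built-ins */
{
    puts("(transient portion reloaded)");
}

static void run_user_program(void)        /* loaded over the transient portion */
{
    puts("(user program runs, then exits via the resident code)");
}

int main(void)                            /* stands in for the resident portion */
{
    run_user_program();      /* the program overwrites the transient portion and runs */
    transient_portion();     /* on exit, the resident code reloads what was clobbered */
    return 0;
}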

Now, there were things that could be said to run concurrently. DOS had plenty of TSR (terminate and stay resident) programs that would run, hook into an interrupt or into DOS in some way, then exit while leaving some memory allocated (where their code was).

Then, in response to certain events, that code would be run. Perhaps the most famous was Borland Sidekick, a personal information manager that would pop up instantly with a keypress.
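For a flavor of how such a program was installed, here is a hedged sketch in Borland Turbo C style, assuming the dos.h getvect/setvect/keep interfaces; a real TSR needs far more care about re-entrancy and the resident size:

#include <dos.h>

void interrupt (*old_keyboard_isr)(void);   /* previously installed INT 9 handler */
volatile unsigned long keypresses = 0;

void interrupt new_keyboard_isr(void)
{
    keypresses++;             /* do something tiny on every keyboard interrupt */
    (*old_keyboard_isr)();    /* chain to the original handler so keys still work */
}

int main(void)
{
    old_keyboard_isr = getvect(0x09);   /* save the existing keyboard vector */
    setvect(0x09, new_keyboard_isr);    /* hook our handler in front of it */
    keep(0, 1024);                      /* terminate but stay resident
                                           (size in paragraphs is a rough guess here) */
    return 0;                           /* never reached */
}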

answered Sep 27 '22 by paxdiablo


While the other process is running, the command line processor is not running: it is suspended. The only "multitasking" facility that was available in DOS was "Terminate and Stay Resident".

answered Sep 27 '22 by Sergey Kalinichenko


It doesn't matter whether you are running DOS, Windows, Linux, BSD, or anything else on that processor; it is all the same. In that period, for the purposes of this discussion, you had a single execution unit, a single core executing the instructions, mostly in order. It makes no difference whether those instructions bear the name DOS, Linux, or Windows. They are just instructions.

Now as then, when a Windows program decides to terminate, it tries to do so nicely with some flavor of exit call. When a Linux program terminates, it tries to do so nicely with some flavor of exit call to the system. And when a DOS program terminates, it tries to do so nicely with some flavor of exit call to the system. In a shell or command prompt (Linux, Windows, or DOS), the shell, which is itself a program, loads and branches to the program you asked for; your program runs for a while and, as mentioned, tries to return to the prior program nicely with some flavor of exit. The shell itself does the same when it is done running.
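A toy illustration of that hand-off in standard C (nothing DOS-specific here; system() simply blocks until the launched command has exited, much as COMMAND.COM sat waiting until your program exited):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char line[128];

    for (;;) {
        printf("> ");
        if (fgets(line, sizeof line, stdin) == NULL)
            break;
        line[strcspn(line, "\n")] = '\0';
        if (strcmp(line, "exit") == 0)
            break;
        /* system() does not return until the launched program has exited,
           so this "shell" is effectively suspended while that program runs */
        system(line);
    }
    return 0;
}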

As with Linux or Windows (it was just easier to see back then), you don't run anything "at the same time" or "in parallel" with one instruction stream at a time. (Today we have multiple execution units and/or cores, each designed to be doing something in parallel with something managing them, so today you really can say "in parallel".) To switch "tasks" or "threads" or "processes" you needed an interrupt, which switched you to different code, an interrupt handler, and that handler could return to the same program that was interrupted or switch to another. You can put whatever name on it you want; it is how you make things look like they are running at the same time. DOS, Linux, Windows, etc.: this is typically how you switch from one "program" or bit of code to another.

Linux and Windows have their kernels and operating systems behind them that are called during the interrupts, and DOS had that as well. (DOS has that; DOS is still alive. You most likely touch a DOS machine every few days (gas pump, ATM, etc.). DOS is also still used in the development and testing of x86 motherboards/computers; nothing can compete with it as an embedded x86 platform, and nothing has the freedom that DOS has to do what you want, which is why BIOS upgrades are still distributed as DOS programs.)

The interrupt handlers would give time slices to the various BIOS handlers and DOS handlers. Task/process/thread switching was not as designed or planned as in an operating system like Linux or Windows, but it was there; for each version of DOS there were rules you followed, and you could switch tasks (TSRs are a popular term). Just talking to a floppy, hard disk, etc., there was code involved in the whole process; it wasn't buried in the hardware, and lots of things happened in parallel, no different than a hard disk controller driver in something more complicated like Linux or Windows. At least one, maybe several, non-Microsoft DOS clones could multitask.

The short answer: consider a function bob() that calls a function ted().

void ted(void);          /* defined elsewhere */

int bob(int something)
{
    /* ...some code */
    /* ...more code */
    ted();               /* bob() is suspended here until ted() returns */
    /* ...some code */
    /* ...more code */
    return something;
}

Is bob() still running? Are they running in parallel? No, the bob() code is still there, somewhere, waiting for the ted() code to finish what it was doing and return. So long as ted() doesn't crash, it will return and bob() can continue to execute. bob() is suspended while ted() executes. It is not much different with a shell or command line in a more complicated operating system: there is some function somewhere that has loaded your program into memory and called it. It might be a fork or clone of the command line you were running, so that the command line (or the clone) can continue "in parallel", but the concept is the same.

The difference from a trivial C program like the one above is that the code above can be thought of as being resolved at compile time, whereas loading and running a program is definitely runtime: basically self-modifying code, where the program modifies memory and then jumps to it. When the loaded program returns, that code cleans up, unwinds, and either exits itself or waits for another command, depending on the design. DOS was just very, very simple: a bunch of system calls, combined with a bunch of BIOS calls, and a very simple command line that could load programs and do a small number of other commands. It didn't have any rules you couldn't get around (Windows was a DOS program), and if the program you launched didn't want to return (you could, at least at the time, launch Linux from DOS through an intermediate DOS program), that rather spoils your question of what happens when the program completes: Linux didn't return, it took over the system.

answered Sep 27 '22 by old_timer