
Why do we need different CPU architecture for server & mini/mainframe & mixed-core? [closed]

I was just wondering what other CPU architectures are available besides Intel and AMD, so I found the List of CPU architectures on Wikipedia.

It categorizes notable CPU architectures into the following categories.

  1. Embedded CPU architectures
  2. Microcomputer CPU architectures
  3. Workstation/Server CPU architectures
  4. Mini/Mainframe CPU architectures
  5. Mixed core CPU architectures

I was analyzing their purposes and have a few doubts. Taking the microcomputer CPU (PC) architecture as a reference and comparing it to the others, we have:

Embedded CPU architectures

  • They are a completely new world.
  • Embedded systems are small and do very specific tasks, mostly in real time and with low power consumption, so we do not need registers as numerous or as wide as those in a microcomputer CPU (a typical PC). In other words, we need a new, small architecture, and hence a new architecture and instruction set: RISC.
  • The above point also clarifies why we need a separate operating system (RTOS).

Workstation/Server CPU architectures

  • I don't know what a workstation is. Could someone clarify what a workstation is?
  • As for the server: it is dedicated to running specific software (server software like httpd, mysql, etc.). Even if other processes run, we need to give the server process priority, so there is a need for a new scheduling scheme, and thus we need an operating system different from a general-purpose one. If you have any more points on the need for a server OS, please mention them.
  • But I don't get why we need a new CPU architecture. Why can't the microcomputer CPU architecture do the job? Can someone please clarify?
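To make the scheduling point above concrete, here is a minimal sketch (assuming a Unix-like OS; `os.nice` is not available on Windows) of how a process can deprioritize itself so that a favored process, such as a server daemon, keeps the CPU:

```python
import os

# Niceness runs from -20 (most favored) to 19 (least favored).
# Any process may raise its own niceness (give up priority);
# lowering it below the current value normally requires root.
current = os.nice(0)   # an increment of 0 just reads the current value
lowered = os.nice(5)   # ask the scheduler to deprioritize us by 5
print(current, lowered)
```

Server operating systems build far more elaborate versions of this idea into the kernel: priority classes, real-time scheduling policies, and per-service resource controls.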

Mini/Mainframe CPU architectures

  • Again, I don't know what these are or what minicomputers and mainframes are used for. I just know they are very big and occupy a complete floor, but I have never read about the real-world problems they solve. If anyone works on one of these, please share your knowledge.
  • Can someone clarify their purpose, and why the microcomputer CPU architecture is not suitable for them?
  • Is there a new kind of operating system for these too? Why?

Mixed core CPU architectures

  • Never heard of these.

If possible please keep your answer in this format:

XYZ CPU architectures

  • Purpose of XYZ
  • Need for a new architecture: why can't the current microcomputer CPU architecture work? Such CPUs go up to 3 GHz and have up to 8 cores.
  • Need for a new operating system: why do we need a new kind of operating system for this kind of architecture?

EDIT:

Guys, this is not a homework problem; I can't do anything more to make you believe that. I don't know whether the question is unclear or something else, but I'm only interested in specific technical details.

Let me put part of this question another way. You are in an interview, and the interviewer asks you: "Tell me, microcomputer processors are fast and very capable, and our PC operating systems are good. Why do we need a different architecture like SPARC or Itanium, and a different OS like Windows Server, for servers?" What would you answer? I hope you get my point.

claws asked Apr 19 '10




2 Answers

Workstations are a now almost-extinct form of computer. Basically, they used to be high-end computers that looked like desktops but had some important differences, such as RISC processors, SCSI drives instead of IDE, and UNIX or (later) the NT line of Windows operating systems. The Mac Pro can be seen as a present-day form of workstation.

Mainframes are big computers (though they do not necessarily occupy a whole floor). They provide very high availability (most parts of a mainframe, including processors and memory, can be replaced without the system going down) and backwards compatibility (many modern mainframes can run unmodified software written for 1970s mainframes).

The biggest advantage of the x86 architecture is compatibility with the x86 architecture. CISC is usually considered obsolete, which is why most modern architectures are RISC-based. Even new Intel and AMD processors are RISC under the hood: they translate x86 instructions into internal RISC-like micro-operations before executing them.

In the past, the gap between home computers and "professional" hardware was much bigger than it is today, so "microcomputer" hardware was inadequate for servers. When most of the RISC "server" architectures (SPARC, PowerPC, MIPS, Alpha) were created, most microcomputer chips were still 16-bit. The first 64-bit PC chip (the AMD Opteron) shipped over 10 years after the MIPS R4000. The same was true of operating systems: PC operating systems (DOS and non-NT Windows) were simply inadequate for servers.

In embedded systems, x86 chips are simply not power-efficient enough. ARM processors provide comparable processing power using much less energy.

el.pescado - нет войне answered Sep 23 '22


It will probably help to consider what the world was like twenty years ago.

Back then, it wasn't as expensive to design and build world-class CPUs, so many more companies had their own. What has happened since is largely explained by the rising cost of CPU design and fabs, which means that architectures that sold in very large quantities survived much better than those that didn't.

There were mainframes, mostly from IBM. These specialized in high throughput and reliability. You wouldn't do anything fancy with them, it being much more cost-effective to use lower-cost machines, but they were, and are, great for high-volume business-type transactions of the sort programmed in COBOL. Banks use a lot of these. These are specialized systems. Also, they run programs from way back, so compatibility with early IBM 360s, in architecture and OS, is much more important than compatibility with x86.

Back then, there were minicomputers, which were smaller than mainframes, generally easier to use, and larger than anything personal. These had their own CPUs and their own, often specialized, operating systems. I believe they were dying at the time, and they're mostly dead now. The premier minicomputer company, Digital Equipment Corporation, was eventually bought by Compaq, a PC maker.

There were also workstations, which were primarily intended as personal computers for people who needed a lot of computational power. They had considerably cleaner-designed CPUs than Intel's in general, and at that time that meant they could run a lot faster. Another form of workstation was the Lisp machine, available at least in the late '80s from Symbolics and Texas Instruments, with CPUs designed to run Lisp efficiently. Some of these architectures remain, but as time went on it became much less cost-effective to keep them up. With the exception of Lisp machines, these tended to run versions of Unix.

The standard IBM-compatible personal computer of the time wasn't all that powerful, and the complexity of the Intel architecture held it back considerably. This has changed. The Macintoshes of the time ran on Motorola's 680x0 architectures, which offered significant advantages in computational power. Later, they moved to the PowerPC architecture pioneered by IBM workstations.

Embedded CPUs, as we know them now, date from the late 1970s. They were characterized by being complete low-end systems with a low chip count, preferably using little power. The Intel 8080, when it came out, was essentially a three-chip CPU that required additional chips for ROM and RAM. The 8048 was one chip with a CPU, ROM, and RAM on board; it was correspondingly less powerful, but suitable for a great many applications.

Supercomputers had hand-designed CPUs, and were notable for making parallel computing as easy as possible as well as the optimization of the CPU for (mostly) floating-point multiplication.

Since then, mainframes have stayed in their niche, very successfully, while minicomputers and workstations have been squeezed badly. Some workstation CPUs stay around, partly for historical reasons. Macintoshes eventually moved from PowerPC to Intel, although IIRC the PowerPC lives on in the Xbox 360 and some IBM machines. The expense of keeping a good OS up to date grew, and modern non-mainframe systems tend to run either Microsoft Windows or Linux.

Embedded computers have also gotten better. There are still small and cheap chips, but the ARM architecture has become increasingly important. It was in some early netbooks, and it is in the iPhone, iPad, and many comparable devices. It has the virtue of being reasonably powerful with low power consumption, which makes it very well suited to portable devices.

The other sort of CPU you'll run into on common systems is the GPU, which is designed to do high-speed specialized parallel processing. There are software platforms to allow programming those to do other things, taking advantage of their strengths.
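To illustrate the style of computation GPUs favor, here is a sketch in plain Python (no real GPU platform involved; `saxpy_kernel` is just an illustrative name): the same small "kernel" is applied independently to every element of a large array, and on a GPU each element would be handled by its own hardware thread.

```python
def saxpy_kernel(a, x, y):
    # One GPU "thread" worth of work: a single multiply-add.
    return a * x + y

a = 2.0
xs = list(range(1000))
ys = [1.0] * 1000

# No element depends on any other, so all 1000 applications could
# run at the same time -- exactly the parallelism GPU hardware exploits.
result = [saxpy_kernel(a, x, y) for x, y in zip(xs, ys)]
print(result[:3])
```

Platforms like CUDA and OpenCL let you express exactly this pattern and have the hardware run the per-element work in parallel, which is why workloads that fit it see such large speedups on GPUs.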

The difference between desktop and server versions of operating systems is no longer fundamental. Usually, both will have the same underlying OS, but the interface level will be far different. A desktop or laptop is designed to be easily usable by one user, while a server needs to be administered by one person who's also administering a whole lot of other servers.

I'll take a stab at mixed core, but I might not be accurate (corrections welcome). The Sony PlayStation 3 has an unusual processor, the Cell, with different cores specialized for different purposes. Theoretically, this is very efficient. More practically, it's very hard to program a mixed-core system, and such systems are rather specialized. I don't think this concept has a particularly bright future, but it's doing nice things for Sony's sales in the present.

David Thornley answered Sep 23 '22