 

Cluster vs Grid vs Cloud

There are two questions:

1) What is the difference between a cluster and a grid? 2) What is the cloud?

I am not looking for conceptual definitions; I found a lot of those by googling, but the problem is I still do not get it, so I believe the answer I seek is different. From what I could research online, I am starting to think that many article writers who try to explain this either do not understand it deeply enough themselves or are not able to explain their knowledge to an average guy like myself (which is a common issue with very technical people).

Just to let you know my level: I am a computer programmer (.NET and LAMP), I can do basic admin on both Linux flavors and Windows, I have hands-on experience with Hyper-V, and I am now researching Xen and XCP to set up a test cloud based on two computers for learning purposes.

You do not have to read the info below; it is just my current understanding of cluster, grid, and cloud, included to support my two questions. I thought it would help to show what kind of mess is in my head right now and what answers I am looking for.

Thank you.


The two computers used for reference in my statements are "A" and "B".

specs for "A": 2-core Intel CPU, 8 GB memory, 500 GB disk

specs for "B": 2-core Intel CPU, 8 GB memory, 500 GB disk


Now I would like to look at the roles of A and B from the Cluster, Grid, and Cloud angles.

Common definitions for Cluster and Grid

1) A cluster or Grid is 2 or more computers hooked up together; on a hardware level they are connected through network cards, and on a software level some kind of program implementing a message passing interface (MPI) makes it possible to send commands between nodes (see the first sketch after this list).

2) A cluster or Grid does NOT combine CPU power or memory between nodes, meaning that in this simulation a Firefox browser running on A still has only one 2-core CPU, 8 GB memory, and 500 GB available.
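Here is a minimal sketch of that message-passing idea, assuming the mpi4py package and an MPI runtime such as Open MPI are installed on both nodes; the file name is only an example. Each rank can only touch its own local memory and has to send messages to the other node:

```python
# Minimal message-passing sketch with mpi4py (assumes an MPI runtime is installed).
# Run with: mpiexec -n 2 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # 0 on node "A", 1 on node "B"

if rank == 0:
    # Node A sends a command to node B; it cannot read or write B's RAM directly.
    comm.send({"task": "ping"}, dest=1, tag=0)
    reply = comm.recv(source=1, tag=1)
    print("A got reply:", reply)
else:
    msg = comm.recv(source=0, tag=0)
    comm.send({"task": msg["task"], "status": "done"}, dest=0, tag=1)
```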

Differences between Cluster and Grid:

1) A cluster only provides the failover part: if node A breaks while Firefox is running, the cluster software will restart the Firefox process on node B.

2) A Grid, however, is able to run software in parallel on multiple nodes at the same time, provided that software is coded with MPI in mind (see the sketch after this list). It can also launch any software on any node on demand (even if it is not written for MPI).

3) A Grid is also able to combine different types of nodes, e.g. a Linux server, Windows XP, an Xbox, and a PlayStation, into one Grid.
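And a rough sketch of item 2), software "coded with MPI in mind": the same program runs on every node and each rank works on its own slice of the problem, again assuming mpi4py is available. Partial results are combined by passing messages, not by pooling RAM:

```python
# Parallel-work sketch: run with mpiexec -n 2 python parallel_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 1_000_000
# Each node sums only its own chunk of 1..N.
partial = sum(range(rank + 1, N + 1, size))

# Rank 0 collects the partial sums via message passing.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)  # 500000500000
```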

Cloud definition:

1) Cloud is not a technical term at all; it is just a short, convenient word to describe a computer of unlimited resources. It could also be called a Supercomputer, a Beast, an Ocean, or a Universe, but someone said "Cloud" first and here we are.

2) A Cloud can be based on Grids or on Clusters.

3) From a technical point of view, Cloud is software that combines hardware resources into one, meaning that if I install cloud software on a Grid or Cluster, it will combine A and B and I will get one Cloud like this: 4-core CPU, 16 GB memory, and 1000 GB disk.

edited 2013.04.02: item 3) was complete nonsense; a cloud will NOT combine resources from many nodes into one huge resource, so in this case there will be no 4-core CPU, 16 GB memory, 1000 GB cloud.
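A toy illustration of that correction, using made-up node specs and a naive first-fit placement policy: a cloud scheduler places each virtual machine on one physical node, so no single VM can be bigger than the largest node:

```python
# Hypothetical first-fit VM placement over the two example nodes.
nodes = {
    "A": {"cores": 2, "ram_gb": 8, "disk_gb": 500},
    "B": {"cores": 2, "ram_gb": 8, "disk_gb": 500},
}

def place_vm(cores, ram_gb, disk_gb):
    """Return the first node with enough free capacity, or None if nothing fits."""
    for name, free in nodes.items():
        if free["cores"] >= cores and free["ram_gb"] >= ram_gb and free["disk_gb"] >= disk_gb:
            free["cores"] -= cores
            free["ram_gb"] -= ram_gb
            free["disk_gb"] -= disk_gb
            return name
    return None

print(place_vm(2, 8, 500))    # fits on A
print(place_vm(2, 8, 500))    # fits on B
print(place_vm(4, 16, 1000))  # None: no single node has 4 cores / 16 GB / 1000 GB
```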

asked Mar 31 '13 by Peter

1 Answer

Grid computing is designed to parcel out large workloads to many participating grid members, through software on each member that is waiting to hear a request for computation or for data and to reply with its small piece of the overall puzzle. Applications must be written specifically for this approach to problem-solving. A grid can be heterogeneous because it is not the OS that matters but the software waiting to hear problem-solving requests.
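As a hedged sketch of that "software waiting to hear requests" idea (the port number and the "sum a range" task are invented for illustration; real grid middleware such as BOINC or HTCondor is far more involved), each grid member could run something like:

```python
# Tiny grid-member service: accept a work request, return a partial result.
import json
import socketserver

class WorkHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Expect one JSON line like {"start": 1, "end": 1000}
        request = json.loads(self.rfile.readline())
        partial = sum(range(request["start"], request["end"] + 1))
        self.wfile.write(json.dumps({"partial": partial}).encode() + b"\n")

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 9999), WorkHandler) as server:
        server.serve_forever()  # a coordinator sends each member a different range
```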

The expectation of a cluster is that it can run the same executable image across any member node (any node can execute that code), which is what drives its requirement for homogeneity. You can write cluster-aware code which distributes workload throughout the cluster, but again you have to write your code to be cluster-aware in order to take advantage of more than the redundancy features of a cluster. As most application vendors do not write cluster-aware code, the simple redundancy feature is all that is commonly used in cluster deployments, but that does not limit the architecture. Clusters can and do share their resources, and can collaborate on tasks simultaneously.
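A simplified sketch of the redundancy feature most deployments rely on, with hypothetical host names, port, and ssh command (real cluster managers such as Pacemaker or Windows Server Failover Clustering also handle fencing, shared storage, virtual IPs, and so on): a monitor pings the active node and starts the same service on the standby node when it stops answering:

```python
# Naive failover monitor: promote the standby node when the active node dies.
import socket
import subprocess
import time

ACTIVE, STANDBY, PORT = "node-a", "node-b", 8080  # hypothetical names

def is_alive(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    if not is_alive(ACTIVE, PORT):
        # The same executable image is expected to run on any member node.
        subprocess.run(["ssh", STANDBY, "systemctl", "start", "myservice"], check=False)
        ACTIVE, STANDBY = STANDBY, ACTIVE  # promote the standby
    time.sleep(5)
```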

Cloud, as it is commonly defined, is neither of these precisely, but it does not preclude them, either. Cloud computing assumes the ability to deploy an application without advance knowledge of its underlying operating system, or even control of that operating system, coupled with the ability to expand or reduce the processing and memory footprint available to that application without having to destroy and recreate that environment. All of this is done with enough isolation that the application will not know, or be able to find out, what other applications might be installed or running on its shared infrastructure, unless that access is approved by both application managers.
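To make the "expand or reduce the footprint" point concrete, here is a hedged sketch against an entirely hypothetical provider REST API (the endpoint, token, and payload shape are invented; real clouds such as EC2, OpenStack, or Azure each have their own resize/scale calls):

```python
# Resize a running instance through the provider's API instead of rebuilding it.
import requests

API = "https://cloud.example.com/v1"           # hypothetical provider endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def resize_instance(instance_id, cpus, ram_gb):
    """Ask the provider to change an instance's size in place."""
    resp = requests.post(
        f"{API}/instances/{instance_id}/resize",
        json={"cpus": cpus, "ram_gb": ram_gb},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# e.g. scale the app up for a traffic spike, then back down later:
# resize_instance("web-01", cpus=4, ram_gb=16)
```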

answered Sep 28 '22 by Paul Nelis