 

What is the difference between concurrent programming and parallel programming?

What is the difference between concurrent programming and parallel programming? I searched Google but didn't find anything that helped me understand the difference. Could you give me an example of each?

So far I have found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs. "parallel execution is a property of the machine" isn't enough for me; I still can't say which is which.

asked Dec 13 '09 by matekm


People also ask

What is the difference between concurrent and parallel programming?

Concurrency is the task of running and managing multiple computations over the same period of time, while parallelism is the task of actually running multiple computations simultaneously.

What's the difference between concurrency and parallelism?

Concurrency is when multiple tasks can run in overlapping periods. It's an illusion of multiple tasks running in parallel because of a very fast switching by the CPU. Two tasks can't run at the same time in a single-core CPU. Parallelism is when tasks actually run in parallel in multiple CPUs.

What is the difference between concurrent and sequential programming?

Concurrency is about independent computations that can be executed in an arbitrary order with the same outcome. The opposite of concurrent is sequential, meaning that sequential computations depend on being executed step-by-step to produce correct results.

What is the difference between concurrent and parallel task executions?

Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant. For example, multitasking on a single-core machine. Parallelism is when tasks literally run at the same time, e.g., on a multicore processor.
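
To make the distinction concrete, here is a minimal sketch (Python is just an assumed illustration language; the question itself is language-agnostic). The first function interleaves two tasks on a single thread (concurrency); the second runs CPU-bound work in separate processes so it can execute on several cores at the same instant (parallelism).

    import asyncio
    import multiprocessing

    async def tick(name):
        # Concurrency: the two coroutines overlap in time on one thread;
        # the event loop switches between them at every await.
        for i in range(3):
            print(name, i)
            await asyncio.sleep(0.1)

    async def concurrent_demo():
        await asyncio.gather(tick("A"), tick("B"))

    def busy_work(n):
        # CPU-bound loop; only real parallelism makes several of these faster.
        return sum(i * i for i in range(n))

    def parallel_demo():
        # Parallelism: separate processes can run on different cores at once.
        with multiprocessing.Pool(2) as pool:
            print(pool.map(busy_work, [5_000_000, 5_000_000]))

    if __name__ == "__main__":
        asyncio.run(concurrent_demo())
        parallel_demo()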



2 Answers

Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang, F# asynchronous workflows and Scala's Akka library are perhaps the most promising approaches to highly concurrent programming.
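
A rough sketch of that crawler idea, using only the Python standard library (the seed URLs are placeholders): requests are issued concurrently and responses are handled in whatever order they happen to arrive, which is exactly the non-deterministic control flow described above.

    from concurrent.futures import ThreadPoolExecutor, as_completed
    from urllib.request import urlopen

    seed_urls = ["https://example.com/", "https://example.org/", "https://example.net/"]
    visited = set()

    def fetch(url):
        with urlopen(url) as response:
            return url, response.read()

    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(fetch, url) for url in seed_urls]
        for future in as_completed(futures):  # completion order differs from run to run
            url, body = future.result()
            visited.add(url)
            # ...parse body, discover new links, submit further fetches...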

Multicore programming is a special case of parallel programming. Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug than concurrent programs. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising approach for high-performance parallel programming on multicores and it has been adopted in both Intel's Threading Building Blocks and Microsoft's Task Parallel Library (in .NET 4).
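
A simplified fork/join sketch of the same idea in Python (plain row-block decomposition rather than Strassen's algorithm): the parent spawns subtasks that run in parallel and only continues once every block is done, so control flow stays deterministic.

    from multiprocessing import Pool

    def multiply_rows(args):
        rows, b = args
        # Multiply a horizontal block of A by the whole of B.
        return [[sum(a_ik * b[k][j] for k, a_ik in enumerate(row))
                 for j in range(len(b[0]))]
                for row in rows]

    def parallel_matmul(a, b, workers=4):
        chunk = max(1, len(a) // workers)
        blocks = [(a[i:i + chunk], b) for i in range(0, len(a), chunk)]
        with Pool(workers) as pool:
            partials = pool.map(multiply_rows, blocks)  # join: wait for all subtasks
        return [row for block in partials for row in block]

    if __name__ == "__main__":
        a = [[1, 2], [3, 4]]
        b = [[5, 6], [7, 8]]
        print(parallel_matmul(a, b, workers=2))  # [[19, 22], [43, 50]]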

answered by J D


If your program is using threads (concurrent programming), it won't necessarily be executed as such (parallel execution), since that depends on whether the machine can handle several threads.

Here's a visual example. Threads on a non-threaded machine:

        --  --  --
      /              \
>---- --  --  --  -- ---->>

Threads on a threaded machine:

     ------
    /      \
>-------------->>

The dashes represent executed code. As you can see, they both split up and execute separately, but the threaded machine can execute several separate pieces at once.
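
The same point as a small sketch (Python is an assumption here; the answer is language-agnostic): the program below is written with two threads (concurrent programming), but whether they ever execute at the same instant depends on the machine it runs on (and, in CPython, on the GIL for CPU-bound work).

    import os
    import threading
    import time

    def worker(name):
        for i in range(3):
            print(f"{name} step {i}")
            time.sleep(0.1)  # yields, so the threads interleave even on one core

    threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("cores available:", os.cpu_count())  # >1 means parallel execution is possible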

answered by Tor Valamo