 

STL algorithms and concurrent programming

Can any of the STL algorithms/container operations, like std::fill or std::transform, be executed in parallel if I enable OpenMP for my compiler? I am working with MSVC 2008 at the moment. Or are there other ways to make them concurrent?

Thanks.

asked Mar 30 '10 by Andrew



2 Answers

There are a number of projects that aim to provide parallel STL-style libraries:

  1. OpenMP Multi-Threaded Template Library
  2. libstdc++ parallel mode (see the sketch after this list)
  3. HPC++ Parallel Standard Template Library
  4. Parallel Patterns Library (shamelessly borrowed from AshleysBrain's answer)
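
For example, libstdc++'s parallel mode (option 2 above) is a GCC feature rather than something available in MSVC 2008: compiling with -D_GLIBCXX_PARALLEL swaps the standard algorithms for OpenMP-backed versions, or you can call them explicitly through the __gnu_parallel namespace. A minimal sketch, assuming GCC built with OpenMP support:

    // build: g++ -fopenmp parallel_transform.cpp
    #include <parallel/algorithm>   // __gnu_parallel::transform and friends
    #include <vector>

    int square(int x) { return x * x; }

    int main()
    {
        std::vector<int> in(1000000, 2), out(in.size());
        // Explicitly request the OpenMP-backed implementation; defining
        // _GLIBCXX_PARALLEL instead would parallelize plain std::transform.
        __gnu_parallel::transform(in.begin(), in.end(), out.begin(), square);
    }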
answered Oct 10 '22 by 2 revs


To guarantee that std::transform and std::fill run in parallel, you will have to write your own versions. The common implementations of these functions are written for sequential execution.

Let us take std::fill as a simple example. To parallelize it, you need to break the work into smaller pieces that can execute asynchronously with no dependencies between them. For example, one sub-task could fill the first half of the range and a second sub-task the second half. The parent function would delegate the two sub-tasks to separate threads (fork) and wait for them to finish (join).
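
A minimal sketch of that fork/join idea with OpenMP, which MSVC 2008 supports at the 2.0 level; the parallel_fill helper here is hypothetical, not a standard function, and the signed int loop index is an OpenMP 2.0 requirement:

    // build: cl /openmp /EHsc parallel_fill.cpp   (or g++ -fopenmp)
    #include <vector>

    // Hypothetical helper: fills [first, last) in parallel. Requires
    // random-access iterators so each thread can index into its own chunk.
    template <typename RandomIt, typename T>
    void parallel_fill(RandomIt first, RandomIt last, const T& value)
    {
        const int n = static_cast<int>(last - first);
        // OpenMP divides the iterations among threads (fork) and waits for
        // all of them at the end of the loop (join); each element is written
        // by exactly one thread, so no locking is needed.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            first[i] = value;
    }

    int main()
    {
        std::vector<int> v(10 * 1000 * 1000);
        parallel_fill(v.begin(), v.end(), 42);
    }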

The bigger question is whether the run-time overhead of setting up parallel execution is outweighed by the time saved by actually running in parallel. Large fills are more likely to justify that overhead than small ones.

Perhaps a better idea is to make thread-safe versions of these functions and have several threads execute them in parallel on separate sub-ranges, rather than splitting the functions themselves up.
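
One way to do that with OpenMP is to leave std::fill untouched and simply run the ordinary sequential calls on disjoint halves of the range from two threads; a sketch, with the half-and-half split chosen only for illustration:

    // build: cl /openmp /EHsc two_fills.cpp   (or g++ -fopenmp)
    #include <algorithm>   // std::fill
    #include <cstddef>
    #include <vector>

    int main()
    {
        std::vector<int> v(10 * 1000 * 1000);
        const std::ptrdiff_t half = v.size() / 2;
        std::vector<int>::iterator mid = v.begin() + half;

        // The two sub-ranges do not overlap, so the plain std::fill calls
        // cannot interfere; each section runs on its own thread.
        #pragma omp parallel sections
        {
            #pragma omp section
            { std::fill(v.begin(), mid, 42); }

            #pragma omp section
            { std::fill(mid, v.end(), 42); }
        }
    }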

Before splitting things into multiple threads, first try optimizing how your data is laid out and accessed; search the web for Data-Oriented Design. Articles have shown that reorganizing execution to reduce processor cache misses can make a program run significantly faster.
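
As a small illustration of that idea: if a hot loop only touches one field, a structure-of-arrays layout keeps that field contiguous, so every cache line the loop pulls in is fully used:

    #include <cstddef>
    #include <vector>

    // Array-of-structures: summing x also drags y and z through the cache.
    struct ParticleAoS { float x, y, z; };

    // Structure-of-arrays: all x values are contiguous in memory.
    struct ParticlesSoA { std::vector<float> x, y, z; };

    float sum_x(const ParticlesSoA& p)
    {
        float s = 0.0f;
        for (std::size_t i = 0; i < p.x.size(); ++i)
            s += p.x[i];    // sequential reads of one tightly packed array
        return s;
    }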

answered Oct 10 '22 by Thomas Matthews