Say I have a large array and I want to process the contents with multiple threads. If I delegate each thread to a specific section, guaranteeing no overlap, does that eliminate any need for locking, assuming the threads don't access any other memory outside the array?
Something like this (pseudo-code):
    global array[9000000];

    do_something(chunk) {
        for (i = chunk.start; i < chunk.end; i++)
            // do something with array
    }

    main() {
        chunk1 = {start: 0, end: 5000000};
        chunk2 = {start: 5000000, end: 9000000};

        start_thread(thread1, do_something(chunk1));
        start_thread(thread2, do_something(chunk2));

        wait_for_join(thread1);
        wait_for_join(thread2);

        // do something else with the altered array
    }
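For reference, a minimal runnable C++11 version of that sketch might look like the following (the element type, the placeholder work per element, and the name array_data are assumptions, not part of the original question):

    #include <thread>
    #include <vector>
    #include <cstddef>

    static std::vector<int> array_data(9000000);    // shared array, sized up front

    struct Chunk { std::size_t start, end; };

    // Each thread touches only the half-open range [start, end) -- no overlap.
    void do_something(Chunk chunk) {
        for (std::size_t i = chunk.start; i < chunk.end; ++i)
            array_data[i] += 1;                     // placeholder per-element work
    }

    int main() {
        Chunk chunk1{0, 5000000};
        Chunk chunk2{5000000, 9000000};

        std::thread thread1(do_something, chunk1);
        std::thread thread2(do_something, chunk2);

        thread1.join();                             // join synchronises: after both joins,
        thread2.join();                             // main sees every write made by the threads

        // do something else with the altered array
    }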
With locking, deadlock becomes possible once threads acquire more than one lock at a time: two threads can each end up blocked, holding a lock that the other is waiting for.
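A minimal sketch of how that can happen, assuming two mutexes m1 and m2 taken in opposite orders by two threads:

    #include <mutex>
    #include <thread>

    std::mutex m1, m2;

    void thread_a() {
        std::lock_guard<std::mutex> lock1(m1);   // A holds m1...
        std::lock_guard<std::mutex> lock2(m2);   // ...and waits for m2
    }

    void thread_b() {
        std::lock_guard<std::mutex> lock1(m2);   // B holds m2...
        std::lock_guard<std::mutex> lock2(m1);   // ...and waits for m1 -> possible deadlock
    }

    int main() {
        std::thread a(thread_a), b(thread_b);
        a.join();
        b.join();   // may never return if the two threads block on each other
    }

Taking the locks in one agreed order everywhere (or acquiring them together with std::lock) avoids the problem.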
Multiple threads accessing shared data simultaneously can lead to a timing-dependent error known as a data race. Data races may lurk in the code without visibly harming the program until the threads happen to be scheduled in just the scenario (the condition) that breaks the program's execution.
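As an illustration (a minimal sketch, not from the original post): two threads incrementing the same counter without synchronisation race on every increment, and the result depends on scheduling:

    #include <thread>
    #include <iostream>

    int counter = 0;   // shared, unsynchronised

    void bump() {
        for (int i = 0; i < 100000; ++i)
            ++counter;                     // read-modify-write races with the other thread
    }

    int main() {
        std::thread t1(bump), t2(bump);
        t1.join();
        t2.join();
        std::cout << counter << "\n";      // often less than 200000; formally undefined behaviour in C++11
    }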
Not only are different cores allowed to read from the same block of memory at the same time, they're allowed to write to it at the same time too, provided each core writes to a different location within that block.
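For example (a sketch under the assumption that the two indices never coincide), two threads may write to neighbouring elements of the same array, quite possibly in the same cache line, without a data race:

    #include <thread>

    int data[2] = {0, 0};                      // two distinct memory locations, likely one cache line

    int main() {
        std::thread t1([] { data[0] = 1; });   // writes only data[0]
        std::thread t2([] { data[1] = 2; });   // writes only data[1]
        t1.join();
        t2.join();
        // No data race: the threads never touch the same memory location.
        // (Sharing a cache line can cost performance -- "false sharing" -- but not correctness.)
    }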
For a thread to work on an object, it must take control of the lock associated with that object: it must “hold” the lock. Only one thread can hold a lock at a time. If a thread tries to take a lock that is already held by another thread, it must wait until the lock is released.
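In C++ this is typically expressed with a std::mutex and a scope guard; a sketch (the shared container and the work done under the lock are placeholders):

    #include <mutex>
    #include <thread>
    #include <vector>

    std::vector<int> shared_data;
    std::mutex data_mutex;                              // the lock guarding shared_data

    void append(int value) {
        std::lock_guard<std::mutex> hold(data_mutex);   // take the lock; blocks while another thread holds it
        shared_data.push_back(value);                   // only one thread mutates the vector at a time
    }                                                   // lock released when 'hold' goes out of scope

    int main() {
        std::thread t1(append, 1), t2(append, 2);
        t1.join();
        t2.join();
    }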
In a conforming C++11 compiler the scheme in the question is safe [intro.memory] (§1.7):
A memory location is either an object of scalar type or a maximal sequence of adjacent bit-fields all having non-zero width. [...] Two threads of execution (1.10) can update and access separate memory locations without interfering with each other.
C11 gives identical guarantees (they even use the same wording) in §3.14.
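To illustrate the quoted definition (a sketch, not taken from either standard): distinct scalar objects are distinct memory locations and may be updated concurrently, whereas adjacent non-zero-width bit-fields share one memory location, so unsynchronised concurrent writes to them would be a data race:

    #include <thread>

    struct S {
        int a;            // its own memory location
        int b;            // its own memory location
        unsigned f1 : 4;  // f1 and f2 are adjacent non-zero-width bit-fields:
        unsigned f2 : 4;  //   together they form ONE memory location
    };

    S s{};

    int main() {
        std::thread t1([] { s.a = 1; });   // OK: a and b are separate memory locations
        std::thread t2([] { s.b = 2; });
        t1.join();
        t2.join();

        // By contrast, writing s.f1 in one thread and s.f2 in another without
        // synchronisation would be a data race: they share a memory location.
    }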
In a C++03 compiler this is not guaranteed to work by the standard, but it might still work if the compiler provides similar guarantees as an extension.
Yes: if you can guarantee that no two threads will access the same element, then there's no need for any further synchronisation.
There is only a conflict (and therefore a potential data race) if two threads access the same memory location (with at least one of them modifying it) without synchronisation.
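For instance (a minimal sketch): if neither thread modifies a location, any number of threads may read it concurrently with no synchronisation at all:

    #include <thread>

    const int table[4] = {10, 20, 30, 40};   // shared, but never modified after initialisation

    int sum1 = 0, sum2 = 0;                  // each written by exactly one thread

    int main() {
        std::thread t1([] { sum1 = table[0] + table[1]; });   // both threads read 'table'
        std::thread t2([] { sum2 = table[0] + table[3]; });   // concurrent reads never conflict
        t1.join();
        t2.join();
    }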
(NOTE: this answer is based on the C++11 memory model. I've just noticed that you're also asking about a second language; I believe that C11 specifies a very similar memory model, but can't say for sure that the answer is also valid for C. For older versions of both languages, thread-safety was implementation-dependent.)