
Is it too early to start designing for Task Parallel Library?

I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it.

There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

Why Start Now?

  • The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs (a minimal sketch follows this list).
  • I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad processor Dell Poweredge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM.
  • Competitive advantage - some of our customers can never get enough performance and there is no doubt that we can build a faster product using TPL today.
  • It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.
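
The kind of "relatively simple test" mentioned in the first bullet can be as small as timing a CPU-bound loop sequentially and then with Parallel.For. This is only a sketch; the work function and sizes are made up for illustration:

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    class TplSmokeTest
    {
        // Some CPU-bound work per element; purely illustrative.
        static double Work(int i)
        {
            double x = i;
            for (int k = 0; k < 10000; k++)
                x = Math.Sqrt(x + k);
            return x;
        }

        static void Main()
        {
            const int N = 100000;
            var results = new double[N];

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < N; i++)
                results[i] = Work(i);
            Console.WriteLine("Sequential:   {0} ms", sw.ElapsedMilliseconds);

            sw.Restart();
            Parallel.For(0, N, i => results[i] = Work(i));
            Console.WriteLine("Parallel.For: {0} ms", sw.ElapsedMilliseconds);
        }
    }

On a quad-core machine the parallel version typically finishes several times faster than the sequential one, which is what motivated the bullet above.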

Why Wait?

  • Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores sharing a single level 3 cache today, and most likely a 6 core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA?
  • Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

Limitations

  • Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.
asked Jan 28 '10 by Joe Erickson


2 Answers

I would start now. I strongly suspect that we've seen the bulk of the changes - even if there are a few tweaks in the release candidate, I'm sure they'll be well documented in the PFX team blog, and easy to change. Even if chips change, I'd expect the TPL to adapt appropriately in future versions - and I would personally expect that the current TPL is still likely to do a better job of handling those new chips than any hand-crafted threading code mere mortals like us could write.

The one real downside I see to starting now is that the learning resources aren't really there yet. There's some documentation, some blog posts (some of which will be outdated by now) and some sample code - but no books dedicated to PFX. I'm sure those will come in time though - and if you're early in the game, you could even write one :)

Depending on your application, you might also want to look at Reactive Extensions, which works hand-in-hand with PFX.
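
To give a flavour of what "hand-in-hand" can look like, here is a minimal sketch (assuming a current Rx package is referenced; the namespaces have moved around between Rx versions) that bridges a TPL task into an observable sequence:

    using System;
    using System.Linq;
    using System.Reactive.Linq;
    using System.Reactive.Threading.Tasks; // ToObservable() for Task<T>
    using System.Threading.Tasks;

    class TplRxBridge
    {
        static void Main()
        {
            // A TPL task computing a value on the thread pool.
            Task<int> work = Task.Factory.StartNew(() => Enumerable.Range(1, 1000).Sum());

            // Expose the task's result as an IObservable<int> and react to it.
            using (work.ToObservable()
                       .Subscribe(
                           result => Console.WriteLine("Sum = {0}", result),
                           error => Console.WriteLine("Failed: {0}", error.Message)))
            {
                Console.ReadLine(); // keep the console alive until the result arrives
            }
        }
    }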

answered Nov 01 '22 by Jon Skeet


In the end, what matters more is whether your core engine can benefit from parallelism in general. Does it have lots of shared state that needs to be guarded with locks? If so, can that be easily moved to a design centered around lock-free data structures?

I think those questions must be answered first, so that you can then have a clearer picture of whether TPL may help down the road.
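
To make the shared-state question concrete, here is a hedged sketch (the cache and its types are hypothetical, not from the question) contrasting lock-guarded state with one of the .NET 4.0 concurrent collections:

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    class SharedCacheExamples
    {
        // Lock-guarded shared state: every reader and writer serializes on _gate.
        private readonly Dictionary<string, decimal> _lockedCache = new Dictionary<string, decimal>();
        private readonly object _gate = new object();

        public decimal GetOrAddLocked(string key, decimal value)
        {
            lock (_gate)
            {
                decimal existing;
                if (_lockedCache.TryGetValue(key, out existing)) return existing;
                _lockedCache[key] = value;
                return value;
            }
        }

        // The same idea on a concurrent collection from System.Collections.Concurrent
        // (.NET 4.0): callers on different cores rarely block each other.
        private readonly ConcurrentDictionary<string, decimal> _concurrentCache =
            new ConcurrentDictionary<string, decimal>();

        public decimal GetOrAddConcurrent(string key, decimal value)
        {
            return _concurrentCache.GetOrAdd(key, value);
        }
    }

If most of your engine's hot paths look like the first method, parallelizing them will mostly measure lock contention; if they can be reshaped toward the second, TPL has something to work with.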

answered Nov 01 '22 by Monoman