Have there been any studies comparing OpenCL to OpenMP performance? Specifically, I am interested in the overhead cost of launching threads with OpenCL, e.g., decomposing the domain into a very large number of individual work items (each run by a thread doing a small job) versus the heavier-weight threads in OpenMP, where the domain is decomposed into sub-domains whose number equals the number of cores.
It seems that the OpenCL programming model is more targeted towards massively parallel chips (GPUs, for instance), rather than CPUs that have fewer but more powerful cores.
Can OpenCL be an effective replacement for OpenMP?
The target directives provide a mechanism to move the thread of execution from the CPU to another device, also relocating required data. Almost all of OpenMP can be used within a target region, but only a limited subset makes sense on a GPU.
OpenCL and OpenMP are both widely available for the most popular computing platforms and operating systems. While OpenCL is designed primarily as a GPU programming tool, its support for CPU parallelism makes it versatile. From an ease-of-use standpoint, however, OpenCL does involve more programming overhead.
For beginners in parallel programming, OpenMP is the easiest and best choice. CUDA is well suited to, and efficient for, large and complex problems.
CUDA and OpenCL are two interfaces used in GPU computing; while they offer some similar features, they do so through different programming interfaces.
The benchmarks I've seen indicate that OpenCL and OpenMP running on the same hardware are usually comparable in performance, or that OpenMP has a slight edge. However, I haven't seen any benchmarks that I would consider conclusive, because they've been mostly lacking in detailed explanations of their methodology. That said, there are a few useful things to consider:
OpenCL will always have some extra overhead when compiling the kernel at runtime. Any benchmark either needs to list this time separately, use pre-compiled native kernels, or run long enough that the kernel compilation is insignificant.
OpenCL implementations will vary. GPU vendors like NVidia have no incentive to make sure their CPU-based OpenCL implementation is as fast as possible. None of the OpenCL implementations are likely to be as mature as a good OpenMP implementation.
The OpenCL spec says basically nothing about how CPU-based implementations use threading under the hood, so any discussion of whether the threading is relatively lightweight or heavyweight will necessarily be implementation-specific.
When you're running OpenCL code on a CPU, your work items don't have to be tiny and numerous. You can break down the problem in the same way you would for OpenMP.
Even if OpenCL has a bit more overhead, there may be other reasons to prefer it.
Obviously, if your code can make good use of a GPU, you will want to have an OpenCL implementation. OpenCL performance on a CPU may be good enough that it isn't worth it to also maintain an OpenMP fallback code path for users who don't have powerful GPUs.
A good CPU-based OpenCL implementation means that you will automatically get the benefit of whatever instruction set extensions the CPU and OpenCL implementation support. With OpenMP, you have to do extra work to make sure that your executable includes both SSEx and AVX code paths.
OpenCL vector primitives can help you express some explicit parallelism without the portability and readability sacrifices you get from using SSE intrinsics.