I noticed that a GPU can have hundreds of cores, so a parallel computation can be sped up greatly with them. Yet it seems that inside an OS kernel, no parallel algorithms are used for acceleration.
People do parallel computing in user-space with OpenMP, but why not in kernel-space? I guess there are lots of tasks inside the OS that could use parallel processing: handling multiple network connections and packets, doing cryptographic operations, managing memory, searching... Some firewalls filter and monitor network flows by pattern matching, and research-oriented OSes may also analyze a program before running it, which is time-consuming and might be parallelisable.
So why don't OSes use GPUs to improve their performance and throughput? Does it make sense to run OS computations on a GPU?
GPU parallel processing requires you to run the exact same operation hundreds or thousands of times, once per data element. Moreover, you are limited in what operations you can do: branches are generally not an option, nor is traversing pointer chains.
Most kernel operations don't fit into this model; much of what the kernel is doing is managing resources via pointers, including locking. This does not fit into the GPU model at all. As for the other operations you cite:
GPUs are well suited for mathematical kernels where throughput is paramount and latency is a minor issue - numerical simulations, that sort of thing. They are, in general, not well suited for data management, or where latency is critical - which is exactly the kind of stuff OS kernels do. This is why OSes typically don't make use of GPU acceleration.
There are other types of hardware acceleration that OS kernels can and do make use of - some machines have special cryptographic hardware cores specifically designed for doing one-off crypto computations quickly. These can be put to good use by the kernel, as they are better suited to the specific problems the kernel faces.
Your impression that kernels don't parallelize is wrong. Modern kernels have adapted to multi-core/multi-thread CPUs quite well and deal with pretty much everything in a "parallel" way in that respect.
As for GPUs, the instructions they can process are very different from a CPU's; they are adapted to vector floating-point computations in particular. The Linux kernel essentially never uses that kind of operation. Exceptions would be crypto and some RAID code that adapt well to vector-type ops (and probably a few others, but they remain very limited).
So in general, the kernel itself doesn't actually need the kind of operations GPUs provide. For the times it does need them, you'll find modern CPU cores include specific instruction sets (like SSE, AES-NI, and such) or dedicated co-processors/offload engines (again for crypto and RAID calculations, network packet checksums, etc.).
Modern GPUs can be used for more than just graphics processing; they can run general-purpose programs as well. While not well-suited to all types of programs, they excel on code that can make use of their high degree of parallelism. Most uses of so-called "General Purpose GPU" (GPGPU) computation have been outside the realm of systems software. However, recent work on software routers and encrypted network connections has given examples of how GPGPUs can be applied to tasks more traditionally within the realm of operating systems. These uses are only scratching the surface. Other examples of system-level tasks that can take advantage of GPUs include general cryptography, pattern matching, program analysis, and acceleration of basic commonly-used algorithms.
Cited from https://code.google.com/p/kgpu/