 

GPU-accelerated hardware simulation?

I am investigating whether GPGPUs could be used to accelerate hardware simulation. My reasoning is this: since hardware is by nature very parallel, why simulate it on highly sequential CPUs?

GPUs would be excellent for this, if not for their restrictive programming style: you have a single kernel running, etc.

I have little experience with GPGPU programming, but is it possible to use events or queues in OpenCL/CUDA?

Edit: By hardware simulation I don't mean emulation, but bit-accurate behavioral simulation (as in VHDL behavioral simulation).

eisbaw asked Sep 09 '11

People also ask

Does simulation use GPU?

GPU-based System objects look and behave much like the other System objects in the Communications Toolbox™ product. The important difference is that the algorithm is executed on a Graphics Processing Unit (GPU) rather than on a CPU. Using the GPU can accelerate your simulation.

What is accelerated simulation?

“Simulation acceleration” refers to the process of mapping the synthesizable portion of the design into a hardware platform to increase performance by evaluating the HDL constructs in parallel.

What is GPU simulator for?

GPUs speed up high-performance computing (HPC) workloads by parallelizing parts of the code that are compute intensive. This enables researchers, scientists, and engineers across scientific domains to run their simulations in a fraction of the time and make discoveries faster.

Does Simulink use GPU?

You can use GPU Coder™ to speed up the execution of your Simulink® model on NVIDIA® GPUs. GPU-accelerated computing follows a heterogeneous programming model.


1 Answer

I am not aware of any approaches regarding VHDL simulation on GPUs (or a general scheme to map discrete-event simulations), but there are certain application areas where discrete-event simulation is typically applied and which can be simulated efficiently on GPUs (e.g. transportation networks, as in this paper or this one, or stochastic simulation of chemical systems, as done in this paper).

Is it possible to reformulate the problem in a way that makes a discrete time-stepped simulator feasible? In that case, simulation on a GPU should be much simpler (and still faster, even if it seems wasteful because the time steps have to be sufficiently small - see this paper on the GPU-based simulation of cellular automata, for example).
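To make the time-stepped idea concrete, here is a minimal CUDA sketch. The flat netlist encoding (type/in0/in1/out arrays, 2-input gates) and all names are assumptions invented for illustration, not taken from the question or the linked papers: every kernel launch evaluates all gates once against the current net values and writes into a second buffer, so no event queue is needed, and repeating the launch plays the role of delta cycles.

    // Minimal sketch of a time-stepped, bit-accurate gate simulator (illustrative only).
    #include <cstdio>
    #include <utility>
    #include <vector>
    #include <cuda_runtime.h>

    enum GateType { GATE_AND, GATE_OR, GATE_XOR, GATE_NOT };

    // One thread per gate: read both fan-in nets, apply the gate function,
    // write the result into the "next" buffer. Double buffering keeps the
    // update synchronous, like a cellular-automaton step.
    __global__ void evalGates(const int* type, const int* in0, const int* in1,
                              const int* out, const unsigned char* cur,
                              unsigned char* next, int nGates)
    {
        int g = blockIdx.x * blockDim.x + threadIdx.x;
        if (g >= nGates) return;

        unsigned char a = cur[in0[g]];
        unsigned char b = cur[in1[g]];
        unsigned char v;
        switch (type[g]) {
            case GATE_AND: v = a & b; break;
            case GATE_OR:  v = a | b; break;
            case GATE_XOR: v = a ^ b; break;
            default:       v = !a;    break;   // GATE_NOT ignores its second input
        }
        next[out[g]] = v;
    }

    int main()
    {
        // Toy netlist: nets 0 and 1 are primary inputs,
        // gate 0 = AND(n0, n1) -> net 2, gate 1 = XOR(n0, n2) -> net 3.
        std::vector<int> type = { GATE_AND, GATE_XOR };
        std::vector<int> in0  = { 0, 0 };
        std::vector<int> in1  = { 1, 2 };
        std::vector<int> out  = { 2, 3 };
        std::vector<unsigned char> nets = { 1, 1, 0, 0 };   // n0 = 1, n1 = 1

        int nGates = (int)type.size();
        int nNets  = (int)nets.size();

        int *dType, *dIn0, *dIn1, *dOut;
        unsigned char *dCur, *dNext;
        cudaMalloc(&dType, nGates * sizeof(int));
        cudaMalloc(&dIn0,  nGates * sizeof(int));
        cudaMalloc(&dIn1,  nGates * sizeof(int));
        cudaMalloc(&dOut,  nGates * sizeof(int));
        cudaMalloc(&dCur,  nNets);
        cudaMalloc(&dNext, nNets);
        cudaMemcpy(dType, type.data(), nGates * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(dIn0,  in0.data(),  nGates * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(dIn1,  in1.data(),  nGates * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(dOut,  out.data(),  nGates * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(dCur,  nets.data(), nNets, cudaMemcpyHostToDevice);
        cudaMemcpy(dNext, nets.data(), nNets, cudaMemcpyHostToDevice);

        // Two delta steps settle this two-level circuit; a real simulator
        // would iterate until no net value changes any more.
        for (int step = 0; step < 2; ++step) {
            evalGates<<<(nGates + 255) / 256, 256>>>(dType, dIn0, dIn1, dOut,
                                                     dCur, dNext, nGates);
            cudaDeviceSynchronize();
            std::swap(dCur, dNext);   // double buffering
        }

        cudaMemcpy(nets.data(), dCur, nNets, cudaMemcpyDeviceToHost);
        printf("net2 = AND(1,1) = %d, net3 = XOR(1,net2) = %d\n", nets[2], nets[3]);
        return 0;
    }

The flat structure-of-arrays netlist is deliberate: consecutive threads then read consecutive elements of each array, which keeps global-memory accesses coalesced, and the synchronous, double-buffered update mirrors the cellular-automaton-style stepping referred to above.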

Note, however, that this is still most likely a non-trivial (research) problem, and the reason there is no general scheme (yet) is what you already assumed: implementing an event queue on a GPU is difficult, and most GPU simulation approaches gain their speed-up from clever memory layout, application-specific optimizations, and problem modifications.
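As a hedged illustration of the "clever memory layout" point, the sketch below (all struct fields, kernel names, and the netlist are made up for this answer) contrasts an array-of-structures gate record, where each thread drags unused per-gate data through the cache, with the structure-of-arrays layout used above, where a warp's loads for one field fall on consecutive addresses and coalesce.

    // Memory-layout illustration only: both kernels count gates of type 0.
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    struct GateRecord {              // array-of-structures: one 48-byte record per gate
        int   type, in0, in1, out;
        float riseDelay, fallDelay;  // per-gate data this kernel never touches
        char  name[24];
    };

    // AoS: warp loads of gates[g].type are ~48 bytes apart, so most of each
    // fetched cache line is wasted.
    __global__ void countType0AoS(const GateRecord* gates, int n, int* count)
    {
        int g = blockIdx.x * blockDim.x + threadIdx.x;
        if (g < n && gates[g].type == 0) atomicAdd(count, 1);
    }

    // SoA: warp loads of type[g] are 4 bytes apart and coalesce into a few
    // wide transactions.
    __global__ void countType0SoA(const int* type, int n, int* count)
    {
        int g = blockIdx.x * blockDim.x + threadIdx.x;
        if (g < n && type[g] == 0) atomicAdd(count, 1);
    }

    int main()
    {
        const int n = 1 << 20;
        std::vector<GateRecord> aos(n);
        std::vector<int> type(n);
        for (int i = 0; i < n; ++i) { aos[i].type = i % 4; type[i] = i % 4; }

        GateRecord* dAos; int *dType, *dCount;
        cudaMalloc(&dAos,   n * sizeof(GateRecord));
        cudaMalloc(&dType,  n * sizeof(int));
        cudaMalloc(&dCount, 2 * sizeof(int));
        cudaMemcpy(dAos,  aos.data(),  n * sizeof(GateRecord), cudaMemcpyHostToDevice);
        cudaMemcpy(dType, type.data(), n * sizeof(int),        cudaMemcpyHostToDevice);
        cudaMemset(dCount, 0, 2 * sizeof(int));

        countType0AoS<<<(n + 255) / 256, 256>>>(dAos,  n, dCount);
        countType0SoA<<<(n + 255) / 256, 256>>>(dType, n, dCount + 1);
        cudaDeviceSynchronize();

        int counts[2];
        cudaMemcpy(counts, dCount, 2 * sizeof(int), cudaMemcpyDeviceToHost);
        printf("type-0 gates: AoS=%d SoA=%d (profile both kernels to see the bandwidth gap)\n",
               counts[0], counts[1]);
        return 0;
    }

Both kernels compute the same result; the difference only shows up in memory traffic when profiled, which is exactly why this kind of layout choice, rather than a general event-queue mechanism, tends to be where GPU simulation speed-ups come from.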

Roland Ewald answered Nov 15 '22