 

How to model a CPU using QEMU?

I need to add some hardware to a multicore x86-64 processor and test it using simulation, so I was thinking of using QEMU. But I want to know the general idea of modeling a CPU in QEMU. Any good document on this would be great. If it is too difficult to do, I might think about just using the PIN tool for a simplistic simulation.

Also, is it possible to model unconventional hardware with QEMU, such as registers shared between different cores of a processor? Do the currently implemented models properly simulate things like cache accesses? And does the QEMU simulator measure elapsed time precisely during simulation?

asked Jul 03 '13 by MetallicPriest

People also ask

What is a QEMU CPU model?

QEMU comes with a number of predefined named CPU models that typically refer to specific generations of hardware released by Intel and AMD. These allow the guest VMs to have a degree of isolation from the host CPU, allowing greater flexibility in live migrating between hosts with differing hardware.
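As a quick check, you can ask your QEMU binary which named models it supports (the exact list varies with the QEMU version) and then select one with the -cpu option:

    # List the named CPU models this build knows about:
    qemu-system-x86_64 -cpu help

A specific model is then chosen by passing, for example, -cpu Nehalem on the command line.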

Can QEMU emulate Arm?

QEMU can emulate both 32-bit and 64-bit Arm CPUs. Use the qemu-system-aarch64 executable to simulate a 64-bit Arm machine.
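For example, a minimal 64-bit Arm guest on QEMU's generic "virt" board could be started like this (the kernel image name is a placeholder for your own build):

    qemu-system-aarch64 -M virt -cpu cortex-a57 -m 1024 -nographic \
        -kernel Image -append "console=ttyAMA0"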

Is there a GUI for QEMU?

Yes; one example is JavaQemu, a GUI for QEMU written in Java.

Is QEMU same as KVM?

KVM, Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel. It is similar to Xen in purpose but much simpler to get running. Unlike native QEMU, which uses emulation, KVM is a special operating mode of QEMU that uses CPU extensions (HVM) for virtualization via a kernel module.
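The practical difference is visible on the command line: the same QEMU binary either emulates the guest through TCG or, with -enable-kvm, hands execution to the host kernel's KVM module. A sketch, with disk.img as a placeholder image:

    # Pure emulation (TCG): works on any host architecture, but slower.
    qemu-system-x86_64 -m 2048 -drive file=disk.img,format=raw

    # KVM acceleration: requires a host CPU with HVM extensions matching the guest.
    qemu-system-x86_64 -enable-kvm -m 2048 -drive file=disk.img,format=raw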


1 Answer

There are a number of questions here. When you say "add some hardware" what do you mean? A co-processor or some additional peripheral?
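If by "add some hardware" you mean a memory-mapped peripheral, the usual route in QEMU is to write a device model: a QOM type that registers a MemoryRegion whose read/write callbacks implement your registers. The sketch below targets the current QEMU source tree; the type name, register layout, and file name are invented for illustration, so treat it as a starting point rather than a drop-in file. Note that a register exposed this way is visible to every core in the machine, which is one way to get the "shared registers between cores" from the question:

    /* shared_reg.c - minimal sketch of a memory-mapped QEMU device
     * exposing a single 32-bit register visible to all cores. */
    #include "qemu/osdep.h"
    #include "hw/sysbus.h"
    #include "qom/object.h"

    #define TYPE_SHARED_REG "shared-reg"
    OBJECT_DECLARE_SIMPLE_TYPE(SharedRegState, SHARED_REG)

    struct SharedRegState {
        SysBusDevice parent_obj;
        MemoryRegion iomem;   /* the MMIO window the guest sees */
        uint32_t reg;         /* backing store for the register */
    };

    /* Guest reads of the MMIO window land here. */
    static uint64_t shared_reg_read(void *opaque, hwaddr addr, unsigned size)
    {
        SharedRegState *s = opaque;
        return s->reg;
    }

    /* Guest writes of the MMIO window land here. */
    static void shared_reg_write(void *opaque, hwaddr addr,
                                 uint64_t val, unsigned size)
    {
        SharedRegState *s = opaque;
        s->reg = val;
    }

    static const MemoryRegionOps shared_reg_ops = {
        .read = shared_reg_read,
        .write = shared_reg_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };

    static void shared_reg_init(Object *obj)
    {
        SharedRegState *s = SHARED_REG(obj);

        /* A 4-byte MMIO region backed by the callbacks above. */
        memory_region_init_io(&s->iomem, obj, &shared_reg_ops, s,
                              TYPE_SHARED_REG, 4);
        sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);
    }

    static const TypeInfo shared_reg_info = {
        .name          = TYPE_SHARED_REG,
        .parent        = TYPE_SYS_BUS_DEVICE,
        .instance_size = sizeof(SharedRegState),
        .instance_init = shared_reg_init,
    };

    static void shared_reg_register_types(void)
    {
        type_register_static(&shared_reg_info);
    }

    type_init(shared_reg_register_types)

Board code would then map an instance at some guest physical address with sysbus_mmio_map(); the docs/devel/ directory in the QEMU source tree covers the device, QOM, and memory APIs in more detail.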

QEMU is a general-purpose translator: its front ends translate a variety of guest architectures into a common TCG op format, from which it can generate code for a variety of host architectures. It is designed to be fast and semantically accurate (i.e. instructions behave as they do on real hardware). However, it is not designed to simulate micro-architectures, so things like cache modelling are outside its scope. While -icount mode provides deterministic virtual time during translation, that time is in no way related to the time a real processor would take to execute an instruction.
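For completeness, -icount is enabled on the command line; with shift=N the virtual clock advances 2^N nanoseconds per executed guest instruction, which gives you determinism but tells you nothing about real hardware timing. A sketch, with bzImage as a placeholder kernel:

    qemu-system-x86_64 -icount shift=1 -m 2048 -nographic \
        -kernel bzImage -append "console=ttyS0"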

If you want to model and experiment with small kernels of functionality, then perhaps PIN is a better tool for the job.

answered Oct 01 '22 by stsquad