How does an interpreter interpret the code?

For simplicity, imagine this scenario: we have a 2-bit computer with a pair of 2-bit registers called r1 and r2, and it only works with immediate addressing.

Let's say the bit sequence 00 means add to our CPU, 01 means move data to r1, and 10 means move data to r2.

So there is an Assembly Language for this computer and an Assembler, where a sample program would be written like:

mov r1,1
mov r2,2
add r1,r2

Simply put, when I assemble this code into native code, the resulting file will be something like:

0101 1010 0001 

The 12 bits above are the native code for:

Put decimal 1 into r1, put decimal 2 into r2, add the data and store the result in r1.

So this is basically how compiled code works, right?
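
To make the step from mnemonics to bits concrete, here is a rough sketch of how I imagine such an assembler could work. It is entirely my own toy code (the mnemonics and encodings are just the ones from my example above), not any real assembler:

public class ToyAssembler {

    public static void main(String[] args) {
        String[] program = { "mov r1,1", "mov r2,2", "add r1,r2" };
        StringBuilder machineCode = new StringBuilder();
        for (String line : program) {
            machineCode.append(assemble(line)).append(' ');
        }
        // prints: 0101 1010 0001
        System.out.println(machineCode.toString().trim());
    }

    // One instruction = 2-bit opcode followed by a 2-bit operand,
    // using the encodings from my example above.
    static String assemble(String line) {
        String[] parts = line.split("[ ,]+");   // e.g. ["mov", "r1", "1"]
        if (parts[0].equals("mov")) {
            String opcode = parts[1].equals("r1") ? "01" : "10";
            return opcode + toTwoBits(Integer.parseInt(parts[2]));
        }
        if (parts[0].equals("add")) {
            return "00" + "01";                 // add, result kept in r1
        }
        throw new IllegalArgumentException("unknown mnemonic: " + parts[0]);
    }

    static String toTwoBits(int value) {
        return String.format("%2s", Integer.toBinaryString(value)).replace(' ', '0');
    }
}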

Let's say someone implements a JVM for this architecture. In Java, I would be writing code like:

int x = 1 + 2; 

How exactly will the JVM interpret this code? I mean, eventually the same bit pattern must be passed to the CPU, mustn't it? All CPUs have a set of instructions they can understand and execute, and those instructions are, after all, just bits. Let's say the compiled Java byte-code looks something like this:

1111 1100 1001 

or whatever. Does interpreting mean this code is changed to 0101 1010 0001 when executing? If it is, then it is already native code, so why is it said that the JIT only kicks in after the code has run a number of times? If it does not convert it exactly to 0101 1010 0001, then what does it do? How does it make the CPU do the addition?
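
To show what I currently picture (and where I may be wrong), here is a made-up sketch of what I imagine an interpreter's core loop looks like. The opcodes below are invented for illustration and are not real JVM byte-codes; as far as I know, javac would actually emit stack instructions along the lines of iconst_1, iconst_2, iadd, istore for int x = 1 + 2; (or it may fold the constant), but the dispatch idea should be the same:

public class ToyInterpreter {

    // Invented opcodes for a tiny stack machine (not real JVM byte-codes).
    static final int PUSH  = 0;  // push the following value onto the stack
    static final int ADD   = 1;  // pop two values, push their sum
    static final int STORE = 2;  // pop a value into local variable slot 0
    static final int HALT  = 3;

    public static void main(String[] args) {
        // What I imagine the byte-code for "int x = 1 + 2;" could look like.
        int[] code = { PUSH, 1, PUSH, 2, ADD, STORE, HALT };
        run(code);
    }

    static void run(int[] code) {
        int[] stack  = new int[16];
        int[] locals = new int[4];
        int sp = 0;   // stack pointer
        int pc = 0;   // program counter into the byte-code array

        while (true) {
            int opcode = code[pc++];   // fetch the next byte-code
            switch (opcode) {          // dispatch to a pre-compiled handler
                case PUSH:
                    stack[sp++] = code[pc++];
                    break;
                case ADD: {
                    // The addition requested by the byte-code happens here,
                    // via a native add instruction that is part of the
                    // interpreter itself; no new native code is generated.
                    int b = stack[--sp];
                    int a = stack[--sp];
                    stack[sp++] = a + b;
                    break;
                }
                case STORE:
                    locals[0] = stack[--sp];
                    break;
                case HALT:
                    System.out.println("x = " + locals[0]);  // prints: x = 3
                    return;
                default:
                    throw new IllegalStateException("unknown opcode " + opcode);
            }
        }
    }
}

If this picture is right, the CPU only ever executes the interpreter's own, already-compiled native instructions, and my byte-code is just data that the interpreter reads; that is what I am trying to confirm.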

Maybe there are some mistakes in my assumptions.

I know interpreting is slow and compiled code is faster but not portable, and that a virtual machine "interprets" code, but how? I am looking for how exactly, technically, interpreting is done. Pointers (such as books or web pages) are welcome as well as direct answers.