I'm toying with the idea of writing a JIT compiler and am just wondering if it is even theoretically possible to write the whole thing in managed code. In particular, once you've generated assembler into a byte array how do you jump into it to begin execution?
The JIT compiler translates the MSIL code of an assembly into native code for the CPU architecture of the target machine, and that native code is what actually runs your .NET application. It also caches the resulting native code so that it can be reused for subsequent calls.
The Just-In-Time compiler (JIT) is the part of the Common Language Runtime (CLR) in .NET that is responsible for managing the execution of .NET programs, regardless of which .NET language they were written in.
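As a minimal sketch of that pipeline (assuming the System.Reflection.Emit API, which is available on the desktop CLR and .NET Core), the following F# emits a tiny add method as raw IL; the CLR's JIT compiles it to native code when the delegate is first invoked and reuses the compiled code on later calls:

open System
open System.Reflection.Emit

// Emit IL for "add(a, b) = a + b"; the CLR JIT-compiles it on first call
// and caches the native code for subsequent calls.
let makeAdder () =
    let dm = DynamicMethod("add", typeof<int>, [| typeof<int>; typeof<int> |])
    let il = dm.GetILGenerator()
    il.Emit(OpCodes.Ldarg_0)
    il.Emit(OpCodes.Ldarg_1)
    il.Emit(OpCodes.Add)
    il.Emit(OpCodes.Ret)
    dm.CreateDelegate(typeof<Func<int, int, int>>) :?> Func<int, int, int>

let add = makeAdder ()
printfn "%d" (add.Invoke(2, 3))   // prints 5; JIT compilation happens on this first call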
It allows for some run-time optimizations which are not (easily) possible at compile-time: for example, you can take advantage of special features on new CPUs, even if those CPUs didn't exist when you wrote your program - only the JIT compiler needs to know about that.
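As a concrete sketch of that (assuming .NET Core 3.0+ and its hardware-intrinsics API), managed code can ask which instruction sets the running CPU supports, and the JIT can fold those IsSupported checks into constants and drop the branches that don't apply to the current machine:

open System.Runtime.Intrinsics.X86

// At JIT time the runtime already knows the host CPU, so it can keep only
// the branch that matches the instruction sets actually present.
let pickSimdPath () =
    if Avx2.IsSupported then "AVX2 path"
    elif Sse41.IsSupported then "SSE4.1 path"
    else "scalar fallback"

printfn "Using: %s" (pickSimdPath ())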
The idea of Econo-JIT is to spend less time compiling, so that startup latency is lower for interactive applications. That matters once you notice an app taking seconds to start up; .NET startup time is already incredibly slow (as is Java's :) ).
And for a full proof of concept, here is a fully working translation of Rasmus' approach to JIT into F#:
open System
open System.Runtime.InteropServices

// Win32 flags for VirtualAlloc/VirtualFree.
type AllocationType =
    | COMMIT = 0x1000u

type MemoryProtection =
    | EXECUTE_READWRITE = 0x40u

type FreeType =
    | DECOMMIT = 0x4000u

[<DllImport("kernel32.dll", SetLastError=true)>]
extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize, AllocationType flAllocationType, MemoryProtection flProtect);

[<DllImport("kernel32.dll", SetLastError=true)>]
extern bool VirtualFree(IntPtr lpAddress, UIntPtr dwSize, FreeType freeType);

// 32-bit x86 machine code for a cdecl function that rotates its
// uint32 argument right by one bit:
//   push ebp; mov ebp, esp; mov eax, [ebp+8]; ror eax, 1; pop ebp; ret
let JITcode: byte[] =
    [| 0x55uy; 0x8Buy; 0xECuy; 0x8Buy; 0x45uy; 0x08uy; 0xD1uy; 0xC8uy; 0x5Duy; 0xC3uy |]

[<UnmanagedFunctionPointer(CallingConvention.Cdecl)>]
type Ret1ArgDelegate = delegate of (uint32) -> uint32

[<EntryPointAttribute>]
let main (args: string[]) =
    // Allocate a committed, executable block of memory and copy the machine code into it.
    let executableMemory =
        VirtualAlloc(IntPtr.Zero,
                     UIntPtr(uint32(JITcode.Length)),
                     AllocationType.COMMIT,
                     MemoryProtection.EXECUTE_READWRITE)
    Marshal.Copy(JITcode, 0, executableMemory, JITcode.Length)

    // Wrap the raw code pointer in a delegate so it can be called like any .NET function.
    let jitedFun =
        Marshal.GetDelegateForFunctionPointer(executableMemory, typeof<Ret1ArgDelegate>)
        :?> Ret1ArgDelegate

    let mutable test = 0xFFFFFFFCu
    printfn "Value before: %X" test
    test <- jitedFun.Invoke test
    printfn "Value after: %X" test

    VirtualFree(executableMemory, UIntPtr.Zero, FreeType.DECOMMIT) |> ignore
    0
which happily executes, yielding:
Value before: FFFFFFFC
Value after: 7FFFFFFE
Yes, you can. In fact, it's my job :)
I've written GPU.NET entirely in F# (modulo our unit tests) -- it actually disassembles and JITs IL at run-time, just like the .NET CLR does. We emit native code for whatever underlying acceleration device you want to use; currently we only support Nvidia GPUs, but I've designed our system to be retargetable with a minimum of work, so it's likely we'll support other platforms in the future.
As for performance, I have F# to thank -- when compiled in optimized mode (with tailcalls), our JIT compiler itself is probably about as fast as the compiler within the CLR (which is written in C++, IIRC).
For execution, we have the benefit of being able to pass control to hardware drivers to run the jitted code; however, this wouldn't be any harder to do on the CPU since .NET supports function pointers to unmanaged/native code (though you'd lose any safety/security normally provided by .NET).