Where did code morphing go? [closed]

Tags:

processor

Linus Torvalds used to work for a processor company called Transmeta. The processor they made had a RISC-based core. If I remember correctly, the idea was that the core ran an arbitrary, upgradable "processor emulation layer" (it could be x86, PowerPC, etc.) that translated the high-level opcodes into the RISC core's instruction set.
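That translate-then-execute idea is easy to sketch in software. Below is a minimal, hypothetical illustration of dynamic binary translation in C: the guest opcodes, register model, and "native" operations are all invented for the demo and bear no relation to Transmeta's actual encodings, but the shape (translate a block once, cache the result, then execute from the cache) is the essence of what a code-morphing layer does.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical guest opcodes -- NOT real x86 or Crusoe encodings,
     * just an illustration of translate-then-execute. */
    enum guest_op { G_LOAD_IMM, G_ADD, G_HALT };

    typedef struct { enum guest_op op; int reg, arg; } guest_insn;

    /* "Native" operation: here just a C function pointer acting on
     * a small register file. */
    typedef void (*native_op)(int64_t *regs, int reg, int arg);

    static void n_load_imm(int64_t *r, int reg, int arg) { r[reg] = arg; }
    static void n_add     (int64_t *r, int reg, int arg) { r[reg] += r[arg]; }

    typedef struct { native_op fn; int reg, arg; } native_insn;

    /* Translate a guest basic block once; a real code morpher would
     * cache the result so hot blocks pay the translation cost only once. */
    static int translate(const guest_insn *in, native_insn *out) {
        int n = 0;
        for (; in->op != G_HALT; in++, n++) {
            out[n].reg = in->reg;
            out[n].arg = in->arg;
            out[n].fn  = (in->op == G_LOAD_IMM) ? n_load_imm : n_add;
        }
        return n;
    }

    int main(void) {
        guest_insn block[] = {
            { G_LOAD_IMM, 0, 40 }, { G_LOAD_IMM, 1, 2 },
            { G_ADD, 0, 1 }, { G_HALT, 0, 0 },
        };
        native_insn cache[16];
        int64_t regs[4] = {0};
        int n = translate(block, cache);   /* translate once... */
        for (int i = 0; i < n; i++)        /* ...then run from the cache */
            cache[i].fn(regs, cache[i].reg, cache[i].arg);
        printf("r0 = %lld\n", (long long)regs[0]);   /* prints r0 = 42 */
        return 0;
    }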

What happened to this idea, and what in your opinion were the pros, cons, and situations where such an approach could have had an advantage (in terms of programming)?

asked Jun 30 '09 by Stefano Borini


People also ask

Why did Transmeta fail?

Failure to execute. Right out of the gate, Transmeta proved they were not ready to perform. They announced the Crusoe in January of 2000 and had no product ready for market for another 10 months. Also, their early products immediately had recalls, which did not build consumer confidence.

What is meant by molecules in Transmeta Crusoe processor?

The Crusoe is a VLIW microprocessor that executes bundles of instructions, termed molecules by Transmeta. Each molecule contains multiple instructions, termed atoms. The Code Morphing Software translates x86 instructions into native instructions.
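To make the molecule/atom terminology concrete, here is a toy sketch in C. Crusoe molecules were 64 or 128 bits wide and held 2 or 4 atoms; everything else below (the slot roles, the atom bit layout, the opcodes) is invented for illustration and is not Transmeta's real format.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy VLIW encoding inspired by Crusoe's terminology: a 128-bit
     * "molecule" carrying four 32-bit "atoms". */
    typedef uint32_t atom;               /* one RISC-like operation */

    typedef struct {
        atom slot[4];                    /* e.g. ALU0, ALU1, load/store, branch */
    } molecule;

    /* Hypothetical atom encoding: 8-bit opcode, three 8-bit operand fields. */
    static atom make_atom(uint8_t opcode, uint8_t dst, uint8_t src1, uint8_t src2) {
        return (atom)opcode << 24 | (atom)dst << 16 | (atom)src1 << 8 | src2;
    }

    int main(void) {
        /* All four atoms in one molecule issue in the same cycle; the
         * translation software, not the hardware, is responsible for
         * picking atoms that have no dependencies on each other. */
        molecule m = {{
            make_atom(0x01, 0, 1, 2),    /* alu: r0 = r1 + r2 */
            make_atom(0x01, 3, 4, 5),    /* alu: r3 = r4 + r5 */
            make_atom(0x10, 6, 7, 0),    /* mem: r6 = load [r7] */
            make_atom(0x00, 0, 0, 0),    /* nop in the branch slot */
        }};
        printf("molecule size: %zu bytes\n", sizeof m);  /* 16 bytes = 128 bits */
        return 0;
    }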


2 Answers

The company did not do as well as expected and was eventually acquired by Novafora for its power-saving technology (http://www.novafora.com/pr01-28-09.html).

From all accounts that I am aware of, the technology simply did not compete with existing systems. They fell far short of their performance numbers. Also, while it may have been possible to put another translator on top of their VLIW design, I'm not aware of any product that did. I don't remember the Crusoe chip being able to accept an alternative "translation" microcode download.

I personally owned a device that used a Crusoe processor, and while it certainly delivered on battery life, the performance of the device was dismal. Some of the blame could probably be laid on the special version of Windows it used, but it was still slow.

At best, it was good for portable remote desktop.

IMHO, the technology has the same benefits as software VMs like .NET and the JVM:

  • The upside is that you can probably accelerate the code faster with a hardware solution (as IBM does with its Java accelerator processors) than with a pure software JIT.
  • The downside is that you never get the raw performance that processors executing native code get (see the sketch after this list).
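A rough way to feel that downside: run the same loop natively and through a tiny bytecode interpreter. The bytecode and dispatch loop below are invented for the demo (compile without optimization, e.g. gcc -O0, or the compiler may optimize both loops away); the per-operation dispatch cost is the overhead that hardware acceleration or good JIT translation tries to eliminate.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* A tiny bytecode interpreter vs. the same loop in native C. */
    enum { OP_ADD, OP_DEC_JNZ, OP_HALT };

    static int64_t interp(int64_t n) {
        static const int prog[] = { OP_ADD, OP_DEC_JNZ, OP_HALT };
        int64_t acc = 0;
        int pc = 0;
        for (;;) {
            switch (prog[pc]) {          /* dispatch overhead on every op */
            case OP_ADD:     acc += 1; pc++; break;
            case OP_DEC_JNZ: if (--n) pc = 0; else pc++; break;
            case OP_HALT:    return acc;
            }
        }
    }

    static int64_t native(int64_t n) {
        int64_t acc = 0;
        while (n--) acc += 1;            /* the same work, no dispatch */
        return acc;
    }

    int main(void) {
        const int64_t N = 200000000;
        clock_t t0 = clock();
        int64_t a = interp(N);
        clock_t t1 = clock();
        int64_t b = native(N);
        clock_t t2 = clock();
        printf("interpreted: %lld in %.2fs\n",
               (long long)a, (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("native:      %lld in %.2fs\n",
               (long long)b, (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }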

From some perspectives you can think of modern x86 chips as code-morphing designs, albeit very specialized ones. They translate x86 instructions into a more efficient, RISC-like internal instruction set, and then execute those.
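As a concrete (and simplified) illustration of that translation step, here is a sketch of how a decoder might "crack" a read-modify-write x86 instruction into simple micro-ops. The micro-op names and this three-way decomposition are illustrative only; real decoders are far more elaborate.

    #include <stdio.h>

    /* Sketch of CISC-to-uop "cracking": one x86-style read-modify-write
     * instruction becomes three simple, RISC-like micro-ops. */
    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

    typedef struct {
        uop_kind kind;
        const char *text;   /* human-readable form for the demo */
    } uop;

    /* Crack "ADD [mem], reg" into micro-ops a RISC-like core can pipeline. */
    static int crack_add_mem_reg(uop out[]) {
        out[0] = (uop){ UOP_LOAD,  "tmp   <- load [mem]" };
        out[1] = (uop){ UOP_ADD,   "tmp   <- tmp + reg " };
        out[2] = (uop){ UOP_STORE, "[mem] <- store tmp " };
        return 3;
    }

    int main(void) {
        uop uops[3];
        int n = crack_add_mem_reg(uops);
        printf("ADD [mem], reg cracks into %d micro-ops:\n", n);
        for (int i = 0; i < n; i++)
            printf("  %s\n", uops[i].text);
        return 0;
    }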

Another example of this sort of technology is FPGAs, which can be programmed to emulate various kinds of processors or raw circuits at the circuit level. I believe that some Cray systems can come with "accelerator nodes" of this sort.

answered Oct 12 '22 by Christopher


For one thing, most CISC processors internally translate their opcodes into micro-ops (uops), which are similar to RISC ops. Pipelining and multiple cores have closed the gap with RISC processors to the point where the difference between them is very small, if any. If you need cross-compatibility from C source or another assembly front end, you can use LLVM: http://llvm.org/
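For instance, one way to use LLVM as a retargetable middle layer is to compile C to LLVM IR once and then lower that same IR to different machine targets. The commands in the comments below use clang (LLVM's C front end) and llc; the target triples are common examples, and which ones are available depends on how your LLVM was built.

    /* add.c -- a trivial function to show LLVM as a retargetable layer.
     *
     * Emit portable LLVM IR from C:
     *     clang -O2 -S -emit-llvm add.c -o add.ll
     *
     * Then lower the same IR to different targets with llc, e.g.:
     *     llc -mtriple=x86_64-linux-gnu  add.ll -o add_x86.s
     *     llc -mtriple=aarch64-linux-gnu add.ll -o add_arm.s
     *
     * (See `llc --version` for the targets your build supports.)
     */
    int add(int a, int b) {
        return a + b;
    }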

answered Oct 12 '22 by clemahieu