Lately I have heard a lot of people claiming that the Cell processor is dead, mainly for the following reasons:
What do you think? If you started programming the Cell two or three years ago, will you continue with it, or are you considering switching to GPUs? Is a new version of the Cell coming?
Thanks
Low manufacturing yield, high cost (partly due to the low yield), and a lack of affordable hardware other than the PS3. Also development difficulty: the Cell is an unusual processor to design for, and the tooling is lacking.
Cell is a multi-core microprocessor microarchitecture that combines a general-purpose PowerPC core of modest performance with streamlined coprocessing elements which greatly accelerate multimedia and vector processing applications, as well as many other forms of dedicated computation.
Back in 2006 it had Blu-ray, HDMI, Full HD support, uncompressed 7.1 audio, and a multi-threaded CPU that was hard to program but very powerful when pushed to its limits. It was at least ten years before all of that became standard.
I'd say the reasons for the lack of popularity for cell development are closer to:
It's easier to write parallel programs for thousands of threads than it is for tens of threads. GPUs have thousands of threads, with hardware thread scheduling and load balancing. Although current GPUs are mainly suited to small data-parallel kernels, their tools make that kind of programming trivial. Cell has only a handful of processors, on the order of tens, in consumer configurations. (The Cell derivatives used in supercomputers cross the line and have hundreds of processors.)
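To make the contrast concrete, here is a plain-Python sketch (not real GPU or Cell code, just an illustration of the two programming styles) of the same element-wise computation written both ways. The function names and the eight-SPE figure are illustrative assumptions.

```python
# Illustrative sketch: the same data-parallel kernel (y = a*x + y),
# expressed GPU-style and Cell-style. Plain Python, not real device code.

def saxpy(a, x, y):
    """GPU style: conceptually one lightweight thread per element.
    The hardware scheduler load-balances the thousands of threads,
    so the programmer just writes the per-element operation."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_partitioned(a, x, y, num_spes=8):
    """Cell style: the programmer hand-partitions the data across a
    handful of SPEs and stitches the results back together."""
    n = len(x)
    chunk = (n + num_spes - 1) // num_spes  # ceiling division
    out = []
    for spe in range(num_spes):
        lo, hi = spe * chunk, min((spe + 1) * chunk, n)
        # On a real SPE this slice would first be DMA'd into local store.
        out.extend(a * xi + yi for xi, yi in zip(x[lo:hi], y[lo:hi]))
    return out

x = list(range(10_000))
y = [1.0] * 10_000
assert saxpy(2.0, x, y) == saxpy_partitioned(2.0, x, y)
```

Both produce the same result, but in the partitioned version the chunking, load balancing, and data movement are the programmer's problem; on a GPU the hardware and runtime absorb most of that work.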
IMHO one of the biggest problems with Cell was the lack of an instruction cache. (I argued this vociferously with the Cell architects on a plane back from the MICRO conference in Barcelona in 2005. Although they disagreed with me, I have since heard the same from big supercomputer users of Cell.) People can cope with fitting into fixed-size data memories; GPUs have the same problem, although people complain. But fitting code into a fixed-size instruction memory is a pain. Add an IF statement, and performance may fall off a cliff because you have to start using overlays. It's a lot easier to control your data structures than it is to avoid adding code to fix bugs late in the development cycle.
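The pressure described above can be sketched with back-of-the-envelope arithmetic. Each SPE really does have a single 256 KB local store that must hold both code and data, with no instruction cache behind it; the budget function below is a simplified illustrative model, not a real toolchain check.

```python
# Simplified model of the SPE local store budget: one 256 KB memory
# shared by code and data, with no I-cache to fall back on.

LOCAL_STORE = 256 * 1024  # bytes per SPE local store (actual Cell figure)

def data_budget(code_bytes):
    """Return how many bytes remain for data after the code is loaded.
    Whatever the code doesn't consume is all the data room you get."""
    remaining = LOCAL_STORE - code_bytes
    if remaining < 0:
        # The code no longer fits at all: it must be split into
        # overlays and swapped in and out at run time, which is where
        # performance falls off a cliff.
        return 0
    return remaining

print(data_budget(200 * 1024))  # 57344 bytes left for data
print(data_budget(260 * 1024))  # 0 -> time to start writing overlays
```

A late bug fix that grows the code shrinks the data budget byte for byte, and crossing the limit forces an overlay scheme; with a cached instruction memory, the same code growth would merely cost some cache misses.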
GPUs originally had the same problem as Cell: no caches, neither instruction nor data.
But GPUs handled many more threads and did data parallelism so much better than Cell that they ate up that market, leaving Cell only its locked-in console customers and the codes that were too complicated for GPUs but less complicated than CPU code. Squeezed in the middle.
And in the meantime, GPUs have been adding instruction and data caches, so they are becoming easier to program.