There was a question like this before, in 2011: Exotic architectures the standards committees care about
Now, I'd like to ask a very similar question, but this time from the programmer's perspective, and with C++11 in mind.
What hardware exists today that has a C++11 compiler and can be considered exotic?
What do I consider exotic?
So, anything that deviates from the standard setup we see in the x86/ARM world (8-bit bytes, two's-complement integers, a flat memory model, and so on).
Note: I'd like answers where a C++11-conformant compiler exists for the hardware, not ones where a C++ compiler exists but isn't fully conformant.
I'm asking this because a lot of the time I get answers like "you cannot depend on that, it is implementation defined", and I'd like to know how much, in the real world, I can actually depend on the standard. Just an example: whenever I write std::uint16_t, I may worry (as this type is optional) that on some platform it doesn't exist. But is there an actual platform where this type doesn't exist?
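For what it's worth, there is a portable fallback even if the answer turns out to be "yes". A minimal sketch, using only what C++11 guarantees: UINT16_MAX is only defined on platforms where the optional exact-width type exists, so it doubles as a feature test (the u16 alias is just illustrative).

    #include <cstdint>  // uint_least16_t always exists; uint16_t is optional

    // UINT16_MAX is defined if and only if std::uint16_t is provided,
    // so it serves as a compile-time feature test for the optional type.
    #ifdef UINT16_MAX
    typedef std::uint16_t u16;        // exactly 16 bits, no padding
    #else
    typedef std::uint_least16_t u16;  // at least 16 bits, guaranteed to exist
    #endif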
Go looking for DSP cores; that's your best bet for "exotic" architectures.
For example, the Motorola/Freescale/NXP 56720 has a C++ compiler available from Tasking, but it has 24-bit memory on three or more buses. I think the stack model on the device (at least on the older 56K devices) was a hardware stack that didn't really fit the C/C++ model.
edit: more details...
The register model on this beast is odd.
The modulo and step-size registers don't map to anything intrinsically modeled in C/C++, so there are always some oddball constructs and #pragma directives to help the compiler along in supporting circular buffers.
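To illustrate what those registers buy you, here is a minimal portable-C++ sketch of a circular buffer (the class is illustrative, not any vendor's API). The explicit % N wrap-around on every push is exactly the operation a 56k modulo address register performs for free in hardware.

    #include <cstddef>

    // A circular buffer written the portable way: the wrap-around must be
    // spelled out as an explicit modulo, which the vendor #pragmas try to
    // map back onto the hardware's modulo address registers.
    template <typename T, std::size_t N>
    struct CircularBuffer {
        T data[N];
        std::size_t head = 0;  // C++11 in-class initializer

        void push(const T& value) {
            data[head] = value;
            head = (head + 1) % N;  // done in hardware by a modulo register
        }
    };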
There's no stack pointer (no push or pop instruction). There's a hardware stack for function return addresses, but it's only 16 calls deep. The software has to manage overflows, and local variables don't live on the stack.
Because there's no stack, the compiler does weird things like static call-tree analysis and puts local variables in overlaid memory pools. This means no re-entrant functions and, unless you accept a lot of weirdness or severe performance penalties, only one execution context.
sizeof(int) = sizeof(char) = sizeof(short) = 1, and that single "byte" is 24 bits.
This means no byte access (at least on the old 56002; I'm not sure about the 56300). I think it takes about 24 cycles to read or write a specific 8-bit byte from an array of 24-bit integers, because this core is not good at barrel shifting, masking, and OR-ing.
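To see why, here is a hypothetical sketch of the work a compiler has to emit to read the nth 8-bit byte out of 24-bit storage (the function and byte ordering are illustrative, not generated code):

    // Reading the nth 8-bit byte from an array of 24-bit ints requires a
    // divide/modulo to locate the word plus a shift-and-mask to extract
    // the byte -- expensive on a core with weak shifting and masking.
    unsigned char get_byte(const int* words, unsigned n) {
        unsigned word_index = n / 3;   // three 8-bit bytes per 24-bit word
        unsigned shift = (n % 3) * 8;  // bit position of the byte in the word
        return (words[word_index] >> shift) & 0xFF;
    }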
Not ALL DSP cores are like this, of course, but because of the intrinsic modulo pointers and multiple memory buses they're usually 'weird' to varying degrees relative to the 32/64-bit unified-memory, sizeof(char) == 1 expectations of GCC.
There are computers that have different bit widths for their registers.
The CDC Cyber series uses 6 bits to represent common characters and an extended 12 bits for less common characters.
However, in order to comply with the C language standard, the compiler would need to use 12-bit characters, because 6 bits does not satisfy the minimum range required for char.
As for other requirements, you are talking about a small portion of the universe: custom implementations. Some platforms may have 80-bit floating point. Some platforms may use 4 bits as their minimal addressable unit.
Most hardware component manufacturers have standardized on 8-, 16-, 32-, 64-, or 128-bit units. To get other, non-standard units you may have to combine existing standard sizes. The standardization lowers the cost of integrated circuits.
Some hardware components, such as Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs), have bit widths that are not divisible by 8. For example, a 12-bit ADC is very common.
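In practice the odd width leaks into software as masking. A hypothetical sketch (the register layout and the function name are assumptions, not a real device's API): a 12-bit conversion result typically arrives right-justified in a 16-bit peripheral register, and the code keeps only the valid bits.

    #include <cstdint>

    // Hypothetical: the 12-bit sample sits in the low bits of a 16-bit
    // register read; the top 4 bits are undefined and must be masked off.
    std::uint16_t adc_sample(std::uint16_t raw_register) {
        return raw_register & 0x0FFF;  // keep the 12 valid bits
    }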
Let's talk really custom: Field-Programmable Gate Arrays (FPGAs). Basically, you can program the device to have any number of bits for inputs, outputs, or internal buses.
Summary:
In order to be C- or C++-compliant, there is a minimum set of requirements that must be met. The compiler is responsible for allocating registers and memory so that those requirements are satisfied. If a hardware character is 6 bits, the compiler will have to use two 6-bit units in order to satisfy the minimum range of a char.
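One practical consequence: if your code silently assumes the common case rather than just the standard's minimums, C++11's static_assert lets you turn those assumptions into compile-time checks. A minimal sketch:

    #include <climits>

    // Fail the build on platforms that violate assumptions the code
    // actually relies on, instead of miscompiling on, say, a machine
    // where char is wider than 8 bits.
    static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
    static_assert(sizeof(int) == 4, "this code assumes 32-bit int");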
When people say that something is implementation defined, this doesn't apply only to the memory model, basic variable sizes, etc. (i.e., the hardware implementation). It may also depend on the particular compiler implementation (different compilers may handle some things differently, and they often do) and/or the operating system the program is compiled for. So even though the overwhelming majority of hardware may be non-exotic according to your definition, still: "you cannot depend on that, it is implementation defined" ;)
Example: the C++ standard states that the long double type has to provide at least as much precision as a regular double (i.e., 8 bytes), but the rest is implementation defined. In fact, whereas g++ implements long double as 16 bytes long on the x64 platform, the latest VC++ compiler sticks to the minimum: as of now, long double is only 8 bytes long, just like double. This may change in the future; you never know, as it's implementation defined, and Microsoft is free to change it any time they want while still adhering to the standard.
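A minimal sketch that makes the difference visible; the same conforming program prints different numbers under g++ on x64 (typically 16 for long double) and under VC++ (8):

    #include <iostream>

    int main() {
        // Both outputs are standard-conforming: only the minimum is fixed.
        std::cout << "double:      " << sizeof(double)      << " bytes\n"
                  << "long double: " << sizeof(long double) << " bytes\n";
    }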
It's not an exact answer to the question you asked, but it does answer the last paragraph of your question ("how much can I depend on the standard?"), and it may well make you review the way you think about this problem. Also, it's a bit too long for a comment and would be less readable there, so I'll just leave it here.