Is there a currently used system with a C++ compiler where int is over 32 bits wide?

The C++ standard says only that int has to be at least 16 bits wide. And, at least according to cppreference, it's almost always either 16 or 32 bits wide:

data model      int width in bits
---------------------------------
C++ standard    at least 16
LP32            16
ILP32           32
LLP64           32
LP64            32

...

Other models are very rare. For example, ILP64 (8/8/8: int, long, and pointer are 64-bit) only appeared in some early 64-bit Unix systems (e.g. Unicos on Cray).
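
If you want to check which model a given toolchain uses, a minimal sketch (standard headers only) is to print the relevant widths:

    #include <climits>   // CHAR_BIT
    #include <iostream>

    // Prints the widths that distinguish the common data models
    // (ILP32, LLP64, LP64, ...) on the machine this runs on.
    int main() {
        std::cout << "int:     " << sizeof(int)   * CHAR_BIT << " bits\n"
                  << "long:    " << sizeof(long)  * CHAR_BIT << " bits\n"
                  << "pointer: " << sizeof(void*) * CHAR_BIT << " bits\n";
    }

On a typical LP64 Linux system this prints 32, 64, and 64; under LLP64 Windows it prints 32, 32, and 64.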


Is there an example of a currently used system with a C++ compiler where int is over 32 bits wide? By "currently used" I mean, for example, an old system that is still actively used by a specific industry because there's a valid reason to use it for that specific task and it cannot reasonably be replaced with something else. Preferably this would be something that's actively being developed or worked on, not just a system running legacy code that hasn't been touched in 20 years. A modern system with, for example, a 64-bit int that is used for scientific computing would also be an excellent answer.

I am not looking for a system that was used for two years in the '90s and then dumped completely. I'm also not looking for something that's only used as a hobby to play around with, or some old system that two companies in the world use just because they're too cheap to upgrade.

asked Jul 25 '19 by ruohola



1 Answer

Please note that this answer is intended as a frame challenge: even 64-bit operating systems wouldn't normally want an int wider than 32 bits, for several reasons laid out below. Which means it's unlikely a team would go through the effort of creating an operating system without already having taken these points into consideration, and even less likely that such a system would be non-obsolete by this point in time. I hope a more direct answer is found, but I think this justifies at least the major operating systems' decisions.

To get started, you are correct that the C++ draft permits plain ints that are wider than 32 bits. To quote:

Note: Plain ints are intended to have the *natural size* suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs. — end note

Emphasis mine

This would ostensibly seem to say that on my 64-bit architecture (and everyone else's) a plain int should have a 64-bit size; that's the size suggested by the architecture, right? However, I must assert that the natural size, even on a 64-bit architecture, is 32 bits. The quote in the specs is mainly there for cases where a 16-bit plain int is desired.
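
As an illustration of that tension (not from the original answer): the "fast" aliases in <cstdint> report the width the implementation itself considers efficient, independent of what plain int was chosen to be:

    #include <climits>   // CHAR_BIT
    #include <cstdint>   // int_fast16_t, int_fast32_t
    #include <iostream>

    // The *_fast*_t aliases reveal what the implementation considers the
    // efficient width for arithmetic, regardless of plain int's width.
    int main() {
        std::cout << "int:          " << sizeof(int)               * CHAR_BIT << " bits\n"
                  << "int_fast16_t: " << sizeof(std::int_fast16_t) * CHAR_BIT << " bits\n"
                  << "int_fast32_t: " << sizeof(std::int_fast32_t) * CHAR_BIT << " bits\n";
    }

On glibc x86-64, for example, this prints 32, 64, 64: the implementation treats 64 bits as the fast width, while int itself stays at 32 bits.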

Convention is a powerful factor: going from a 32-bit architecture with a 32-bit plain int and adapting that source for a 64-bit architecture is simply easier if you keep int at 32 bits, both for the designers and for their users, in two different ways:

The first is that the fewer differences there are across systems, the easier things are for everyone. Discrepancies between systems are nothing but headaches for most programmers: they serve only to make it harder to run code across systems. A 64-bit int would even add to the relatively rare cases where code can't run across two computers with the same distribution, one 32-bit and one 64-bit. However, as John Kugelman pointed out, architectures have gone from a 16-bit to a 32-bit plain int before; going through that hassle again today is conceivable, which ties into the next point:

The more significant factor is the gap a 64-bit int would cause in the integer sizes, or the new type that would be required to fill it. Because sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long) is in the actual specification, moving int to 64 bits forces a gap. It starts with long: if a plain int is adjusted to 64 bits, the constraint sizeof(int) <= sizeof(long) forces long to be at least 64 bits as well, and from there a gap in sizes is intrinsic. Since long or plain int is usually used as the 32-bit integer and neither of them could be any more, only one more data type could be: short. Because short only has a minimum of 16 bits, it could give up that size and become 32 bits, filling the gap. However, short is intended to be optimized for space, so it should be kept as it is, and there are use cases for small, 16-bit integers as well. No matter how you arrange the sizes, some width is lost, and with it a use case for an integer type becomes entirely unavailable.
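
A compile-time sketch of that ordering chain; these assertions hold on any conforming implementation, which is what forces the gap:

    #include <climits>   // CHAR_BIT

    // The ordering from the paragraph above: a 64-bit int drags long
    // (and long long) up with it, leaving no built-in 32-bit type
    // unless short gives up its 16-bit role.
    static_assert(sizeof(short) <= sizeof(int),       "short <= int");
    static_assert(sizeof(int)   <= sizeof(long),      "int <= long");
    static_assert(sizeof(long)  <= sizeof(long long), "long <= long long");

    // Minimum widths required by the standard.
    static_assert(sizeof(short)     * CHAR_BIT >= 16, "short is at least 16 bits");
    static_assert(sizeof(long)      * CHAR_BIT >= 32, "long is at least 32 bits");
    static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");

    int main() {}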

This would now imply a requirement for the specification to change, but even if a designer went rogue, it's highly likely the system would be damaged or grow obsolete from the change. Designers of long-lasting systems have to work with an entire base of entwined code: their own code in the system, its dependencies, and the users' code they'll want to run. Doing that huge amount of work without considering the repercussions is simply unwise.

As a side note, if your application is incompatible with a >32-bit integer, you can use static_assert(sizeof(int) * CHAR_BIT <= 32, "Int wider than 32 bits!");. However, who knows, maybe the specification will change and 64-bit plain ints will be implemented, so if you want to be future-proof, don't do the static assert.
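
For completeness, that one-liner as a compilable translation unit; note that CHAR_BIT comes from <climits>:

    #include <climits>   // CHAR_BIT lives here, not in <cstdint>

    // Fails at compile time, rather than misbehaving at run time, on any
    // platform where plain int is wider than 32 bits.
    static_assert(sizeof(int) * CHAR_BIT <= 32, "Int wider than 32 bits!");

    int main() {}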

answered Oct 15 '22 by David Archibald