Why is it so hard to make 64-bit versions of software?

What are all the aspects that must be taken into account when designing your software for a 64-bit environment, and why wouldn't the same code work as both 32-bit and 64-bit (when talking about applications)?

Drivers are obviously a different beast; missing 64-bit drivers are an infamous problem for almost all hardware. What's so different in that domain that it makes 64-bit drivers next to impossible to find?

Edit: Let's forget the basic flaws of old, buggy software with magic numbers, etc. and assume you were creating the software yourself, to be compatible with both. What aspects do you need to take into account, and are there things you just can't overcome with current compiler design? All the missing 64-bit software can't simply be because people like code with magic numbers?! :)

Conclusion: It seems to be all about human laziness and historical reasons, instead of technical reasons.

Asked by Tuminoid on Sep 02 '10

3 Answers

One specific reason why this might be hard is that pointer sizes are going to be different. Instead of a pointer taking up 32 bits, a pointer would now take up 64 bits.

That's a problem if the software somewhere shoehorns a pointer into an int via a reinterpret_cast in C++ (which may occur in some really low-level code), and it happened to work because an int and a pointer were the same size. Basically, the code assumed a certain size for a pointer.
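
To make that concrete, here's a rough sketch of the kind of cast that works on a 32-bit build but silently truncates on a 64-bit one (the function names are just for illustration); the portable fix is to go through std::intptr_t, which is defined to be wide enough to hold a pointer:

    #include <cstdint>
    #include <cstdio>

    void broken(int* p) {
        // Compiles on both targets, but on a 64-bit target the upper
        // 32 bits of the pointer are thrown away here.
        int as_int = static_cast<int>(reinterpret_cast<std::intptr_t>(p));
        int* back  = reinterpret_cast<int*>(static_cast<std::intptr_t>(as_int));
        std::printf("before: %p  after round-trip: %p\n", (void*)p, (void*)back);
    }

    void portable(int* p) {
        // std::intptr_t is pointer-sized on both 32-bit and 64-bit builds,
        // so the round-trip is lossless.
        std::intptr_t as_int = reinterpret_cast<std::intptr_t>(p);
        int* back = reinterpret_cast<int*>(as_int);
        std::printf("before: %p  after round-trip: %p\n", (void*)p, (void*)back);
    }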

Another way that can bite back is if the code is littered with magic numbers like 4 instead of sizeof(void*), or 0xffffffff instead of UINT_MAX (or SIZE_MAX for a pointer-sized value) or something similar.
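
A small sketch of what that looks like in practice (the constant names are made up); the idea is to let the compiler supply the sizes and limits instead of hard-coding them:

    #include <cstdint>   // SIZE_MAX
    #include <climits>   // INT_MAX
    #include <cstddef>

    // Fragile: hard-codes 32-bit assumptions.
    const std::size_t kPtrSizeBad = 4;            // wrong on 64-bit targets
    const std::size_t kAllOnesBad = 0xffffffff;   // not "all bits set" in a 64-bit type

    // Portable: derived from the actual types.
    const std::size_t kPtrSize = sizeof(void*);   // 4 on 32-bit, 8 on 64-bit
    const std::size_t kAllOnes = SIZE_MAX;        // pointer-sized "all ones"
    const int         kMaxInt  = INT_MAX;         // largest int on either target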

There might not be a 64-bit version of a piece of software if it depends on a library or a function that is not available in 64-bit. You can't have an application that is part 32-bit and part 64-bit. For example, in Windows there's a function called SetWindowLong that can only accept 32 bits of data, so it's not very useful for 64-bit programs if a pointer needs to be passed to the function. That's why there's a function called SetWindowLongPtr that can handle a pointer-sized value: 64 bits in 64-bit programs and 32 bits in 32-bit programs.
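
For example, storing a per-window pointer on Windows looks roughly like this (AppState and the helper names are made-up examples); SetWindowLongPtr takes a LONG_PTR, which is pointer-sized on both builds, whereas SetWindowLong's 32-bit LONG would truncate the pointer on 64-bit:

    #include <windows.h>

    struct AppState { int counter; };

    // Attach per-window data to a window.
    void attachState(HWND hwnd, AppState* state) {
        SetWindowLongPtr(hwnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(state));
    }

    // Retrieve it later, e.g. inside the window procedure.
    AppState* getState(HWND hwnd) {
        return reinterpret_cast<AppState*>(GetWindowLongPtr(hwnd, GWLP_USERDATA));
    }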

Note that Internet Explorer runs as a 32-bit process by default even on 64-bit Windows, because the vast majority of its plugins are available only in 32-bit versions. A big example of this is the Adobe Flash Player, which is available only in 32-bit. So apparently even for a big company like Adobe, porting to 64-bit may not always be trivial.

Bit-shifting operations may be affected. For example, shifting 0x800000 left by 10 in a 32-bit type gives you 0x0, but the same shift in a 64-bit type gives you 0x200000000.
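
A quick sketch of that difference; the shift is well-defined for unsigned types, and the only thing that changes is the width of the type holding the value:

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint32_t narrow = 0x800000u;   // 2^23 in a 32-bit type
        std::uint64_t wide   = 0x800000u;   // same value in a 64-bit type

        // 2^23 << 10 = 2^33: the 32-bit result wraps to 0,
        // the 64-bit result keeps the bit.
        std::printf("32-bit: 0x%08x\n", narrow << 10);                        // 0x00000000
        std::printf("64-bit: 0x%llx\n", (unsigned long long)(wide << 10));    // 0x200000000
    }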

All that being said, there's no real technical reason why it would be too difficult to port an application to 64-bit if the code was written well. The best-case scenario is that a simple project reconfiguration and a complete rebuild are all that's needed.

The cynical side of me says that companies use this as a way to implement planned obsolescence: force or encourage people to upgrade to or purchase the newest products!

Answered by In silico

The nutshell version: In the most popular family of languages — C and its children — the size and structure of data types are both very important and implementation-defined. In fact, C has a lot of implementation-dependent features. This means it's easy to write nonportable code. It's not impossible to write code that doesn't make assumptions about the underlying architecture, but it is really easy to depend on x86-specific behavior without realizing what you've done until you try running the code in a different environment.
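
For example, just printing a few sizeof values shows how much is left to the implementation (the values in the comments are typical for common data models; exact results depend on the compiler and platform):

    #include <cstddef>
    #include <cstdio>

    int main() {
        // Typical sizes:                              ILP32  LP64 (64-bit Linux/macOS)  LLP64 (64-bit Windows)
        std::printf("int:    %zu\n", sizeof(int));         // 4      4                          4
        std::printf("long:   %zu\n", sizeof(long));        // 4      8                          4
        std::printf("void*:  %zu\n", sizeof(void*));       // 4      8                          8
        std::printf("size_t: %zu\n", sizeof(std::size_t)); // 4      8                          8
    }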

It's mainly these low-level features that make architecture independence hard. In higher-level languages like Python and C#, it's much easier.

Answered by Chuck

Reasonably written software is usually very easy to port to another architecture. Just look at NetBSD, Debian or other big free OSes... lots of open-source software works on more than two architectures.

The problem is that lots and lots of software is written with disregard for good practices. Making "it" work is usually the only thing the typical programmer thinks of, disregarding further problems. The typical explanation is: why bother with good practices if the customer doesn't see the code, and it works? Why spend more time on something that already works?

Drivers are slightly different here. Different architectures might handle low-level stuff in different ways. x86 and amd64 on Windows have another problem: Microsoft set stricter standards for amd64 drivers, and hardware companies don't bother producing drivers for old hardware that comply with the stricter requirements (again: why bother? the customer usually buys new hardware along with their new 64-bit box; and if they don't, we'll make them do it anyway by not providing drivers). Again, open-source drivers very often work on both amd64 and x86.

I've got a sound card which works quite well on both x86 and amd64 systems on Linux, but doesn't work on amd64 Windows precisely because of this issue. So it was not impossible to write an amd64 driver for it; the hardware company just didn't want to.

So, the ultimate answer to your question is: money.

Answered by liori