I am presently working on converting a 32-bit application into a 64-bit application in C. The application currently runs on x86 (Windows, macOS, Unix, Linux). So, before starting to code, I wanted to know what I need to consider while converting the application.
Trouble spots to keep an eye out for:

- LONG is not long: the Windows API LONG type stays 32 bits, while long itself grows to 64 bits on LP64 platforms (Unix, Linux, macOS) but not on 64-bit Windows (LLP64).
- (int)&x casts: fix the typing with intptr_t (and (unsigned int)&x with uintptr_t), or cast to char* if you only need to do pointer arithmetic with it.
- Assuming 4 == sizeof(void*).
- #ifdef RUN64 or anything similar. You'll regret it if 128-bit platforms ever come into vogue.

EDIT: added the uintptr_t note as suggested by a comment.
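For example, here is a minimal sketch of the pointer-cast and sizeof fixes above (the variable names are just illustrative):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int x = 42;

        /* Old habit: int addr = (int)&x;  -- truncates the pointer on 64-bit. */
        intptr_t  addr  = (intptr_t)&x;    /* intptr_t is wide enough to hold a pointer */
        uintptr_t uaddr = (uintptr_t)&x;

        /* Don't hard-code 4; ask the compiler. */
        printf("sizeof(void*) = %zu\n", sizeof(void *));
        printf("addr = %" PRIdPTR ", uaddr = %" PRIuPTR "\n", addr, uaddr);
        return 0;
    }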
One potential problem not already mentioned is that if your app reads or writes binary data from disk (e.g., reading an array of structs using fread), you are going to have to check very carefully and may wind up needing two readers: one for legacy files and one for 64-bit files. Or, if you are careful to use types like uint32_t and so on from the <stdint.h> header, you can redefine your structs to be bit-for-bit compatible. In any case, binary I/O is a thing to watch out for.
This really depends on the application and how it has been coded. Some code can just be recompiled with a 64-bit compiler and it will just work, but usually this only happens if the code has been designed with portability in mind.
If the code makes a lot of assumptions about the sizes of native types and pointers, if it has a lot of bit-packing hacks, or if it talks to an external process using a protocol with a fixed byte layout while assuming particular sizes for native types, then it may require some, or a lot, of work to get a clean compile.
Pretty much every cast and compiler warning is a red flag that needs checking out. If the code wasn't "warning clean" to start with then that is also a sign that a lot of work may be required.
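As an illustration of the kind of cast that is silent in a 32-bit build but draws a warning (and silently truncates) when compiled for 64-bit, consider this made-up snippet (stash_handle is not from any real codebase):

    #include <stdio.h>

    /* Warning-free on 32-bit; a 64-bit compile reports something like
       "cast from pointer to integer of different size" and the value is truncated. */
    unsigned int stash_handle(void *p)
    {
        return (unsigned int)p;
    }

    int main(void)
    {
        int x = 0;
        unsigned int h = stash_handle(&x);
        printf("stored handle: %u\n", h);
        return 0;
    }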