I am trying to understand what would be the best way to define the BYTE, WORD and DWORD macros, which are mentioned in the answers to this question:
#define LOWORD(l) ((WORD)(l))
#define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF))
#define LOBYTE(w) ((BYTE)(w))
#define HIBYTE(w) ((BYTE)(((WORD)(w) >> 8) & 0xFF))
Would it be correct to assume that:

BYTE is a macro defined as #define BYTE __uint8_t
WORD is a macro defined as #define WORD __uint16_t
DWORD is a macro defined as #define DWORD __uint32_t

If yes, why cast to another macro instead of casting directly to __uint8_t, __uint16_t or __uint32_t? Is it written like that to increase clarity?
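For example, with those assumed definitions I would expect the macros to behave as in the sketch below (my own test program, using the standard uint*_t names from <stdint.h> rather than the glibc-internal __uint*_t ones):

#include <stdio.h>
#include <stdint.h>

/* My assumption: BYTE/WORD/DWORD are just names for fixed-width unsigned types. */
#define BYTE  uint8_t
#define WORD  uint16_t
#define DWORD uint32_t

#define LOWORD(l) ((WORD)(l))
#define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF))
#define LOBYTE(w) ((BYTE)(w))
#define HIBYTE(w) ((BYTE)(((WORD)(w) >> 8) & 0xFF))

int main(void) {
    DWORD value = 0x12345678;
    printf("HIWORD = 0x%04X\n", HIWORD(value));                  /* prints 0x1234 */
    printf("LOWORD = 0x%04X\n", LOWORD(value));                  /* prints 0x5678 */
    printf("HIBYTE(LOWORD) = 0x%02X\n", HIBYTE(LOWORD(value)));  /* prints 0x56 */
    printf("LOBYTE(LOWORD) = 0x%02X\n", LOBYTE(LOWORD(value)));  /* prints 0x78 */
    return 0;
}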
I also found another question whose answers use typedef, and with a little more research I found answers to a question comparing #define and typedef. Would typedef be better to use in this case?
This is a portable solution:
#include <stdint.h>
typedef uint32_t DWORD; // DWORD = unsigned 32 bit value
typedef uint16_t WORD; // WORD = unsigned 16 bit value
typedef uint8_t BYTE; // BYTE = unsigned 8 bit value
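If you want to verify that these typedefs have the expected widths on your target platform, a minimal compile-time check (my own sketch, not part of the original answer) could look like this:

#include <stdint.h>
#include <assert.h>  /* static_assert (C11) */

typedef uint32_t DWORD; /* unsigned 32 bit value */
typedef uint16_t WORD;  /* unsigned 16 bit value */
typedef uint8_t  BYTE;  /* unsigned 8 bit value */

/* uint8_t/uint16_t/uint32_t are exact-width types, so these checks
   pass wherever <stdint.h> provides them. */
static_assert(sizeof(BYTE)  == 1, "BYTE is expected to be 1 byte");
static_assert(sizeof(WORD)  == 2, "WORD is expected to be 2 bytes");
static_assert(sizeof(DWORD) == 4, "DWORD is expected to be 4 bytes");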
These types are documented at https://msdn.microsoft.com/en-us/library/windows/desktop/aa383751(v=vs.85).aspx, and they are already defined in the Windows Data Types headers used by the WinAPI:
typedef unsigned short WORD;
typedef unsigned char BYTE;
typedef unsigned long DWORD;
so each of them is a type, not a macro.
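As an aside, a typedef also behaves differently from a #define once the type is more than a single token, which is one reason typedef is generally preferred for type names. A small illustration (with hypothetical names of my own, not from the Windows headers):

#define BYTE_PTR_MACRO unsigned char *      /* hypothetical macro name */
typedef unsigned char *BYTE_PTR_TYPEDEF;    /* hypothetical typedef name */

int main(void) {
    BYTE_PTR_MACRO   a, b;  /* expands to: unsigned char *a, b;  -- only a is a pointer */
    BYTE_PTR_TYPEDEF c, d;  /* both c and d are unsigned char * */
    (void)a; (void)b; (void)c; (void)d;
    return 0;
}

For single-token definitions like BYTE, WORD and DWORD this difference does not show up, but the typedef still respects scoping rules and is visible to debuggers as a real type.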