    #include <stdio.h>

    typedef struct size {
        unsigned int a:1;
        unsigned int b:31;
        unsigned int c:1;
    } mystruct;

    int main() {
        mystruct a;
        printf("%zu", sizeof(a));
        return 0;
    }
With int b:31, the output is 8. With int b:1, the output is 4. With int b:32, the output is 12. Can somebody explain the reason for this?
In the student structure referenced above, the size of the structure without bit fields is sizeof(StdId) + sizeof(Age) = 8 bytes + 8 bytes = 16 bytes. After bit fields are applied to its members, only 8 bits + 4 bits = 12 bits (1.5 bytes) of actual data are needed, which is far less, so a lot of memory can be saved. (The structure itself is still rounded up to a whole number of storage units, of course.)
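A minimal sketch of such a structure, assuming 4-byte unsigned int members and the 8-bit/4-bit widths mentioned above; the member names are taken from the text, and the printed sizes depend on the implementation:

    #include <stdio.h>

    /* Full-width members: each one occupies a whole unsigned int. */
    struct student_plain {
        unsigned int StdId;
        unsigned int Age;
    };

    /* Bit-field members, using the 8-bit and 4-bit widths mentioned above. */
    struct student_packed {
        unsigned int StdId : 8;
        unsigned int Age   : 4;
    };

    int main(void) {
        /* With 4-byte unsigned int this typically prints 8 and 4;
           wider member types in the plain version would give larger figures. */
        printf("plain:  %zu\n", sizeof(struct student_plain));
        printf("packed: %zu\n", sizeof(struct student_packed));
        return 0;
    }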
The size is at least sizeof(int) + sizeof(struct node *) + sizeof(struct node *). But it may be more, as the compiler is allowed to add padding bytes to your structure if it wishes.
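For reference, a sketch of the kind of node being described, assuming an int value plus two node pointers (the member names are illustrative); it shows how padding can push the size past the sum of the member sizes:

    #include <stdio.h>

    struct node {
        int value;          /* 4 bytes on typical platforms         */
        struct node *prev;  /* 8 bytes on a typical 64-bit platform */
        struct node *next;  /* 8 bytes on a typical 64-bit platform */
    };

    int main(void) {
        /* Often prints 24 on a 64-bit platform: 4 + 8 + 8 = 20 bytes of
           members plus 4 bytes of padding to keep the pointers aligned. */
        printf("%zu\n", sizeof(struct node));
        return 0;
    }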
In C, we can specify the size (in bits) of structure and union members. The idea is to use memory efficiently when we know that the value of a field or group of fields will never exceed a limit or is within a small range. For example, consider the following declaration of a date without the use of bit fields.
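The referenced declaration is not included above; a plausible version, assuming unsigned int members for the day, month and year, might look like this:

    #include <stdio.h>

    /* Every member occupies a whole unsigned int, even though the
       values of d and m are tiny. */
    struct date {
        unsigned int d;  /* day:   1..31 */
        unsigned int m;  /* month: 1..12 */
        unsigned int y;  /* year         */
    };

    int main(void) {
        /* Typically prints 12 with 4-byte unsigned int. */
        printf("%zu\n", sizeof(struct date));
        return 0;
    }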
These space-saving structure members are called bit fields, and their width in bits can be explicitly declared. Bit fields are used in programs that must force a data structure to correspond to a fixed hardware representation and are unlikely to be portable.
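A sketch of the same date structure rewritten with bit fields, with widths chosen as assumptions about the value ranges (the final size remains implementation-defined):

    #include <stdio.h>

    /* The same date with explicit bit-field widths. */
    struct date_packed {
        unsigned int d : 5;   /* 5 bits cover 1..31          */
        unsigned int m : 4;   /* 4 bits cover 1..12          */
        unsigned int y : 23;  /* remaining bits for the year */
    };

    int main(void) {
        /* 5 + 4 + 23 = 32 bits fit in one unit, so this typically prints 4. */
        printf("%zu\n", sizeof(struct date_packed));
        return 0;
    }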
It is the order that matters. The following code will give Output: 8
    #include <stdio.h>

    typedef struct size {
        unsigned int b:32;
        unsigned int a:1;
        unsigned int c:1;
    } mystruct;

    int main(int argc, char const *argv[]) {
        mystruct a;
        printf("\n %zu \n", sizeof(a));
        return 0;
    }
On this implementation, unsigned int is a 32-bit integer occupying 4 bytes. Bit fields are packed contiguously, in declaration order, into these 4-byte storage units.
Option 1:
    unsigned int a:1;  // First 4 bytes are allocated
    unsigned int b:31; // Will get accommodated in the first 4 bytes
    unsigned int c:1;  // Second 4 bytes are allocated
Output: 8
Option 2:
    unsigned int a:1;  // First 4 bytes are allocated
    unsigned int b:32; // Will NOT get accommodated in the first 4 bytes; second 4 bytes are allocated
    unsigned int c:1;  // Will NOT get accommodated in the second 4 bytes; third 4 bytes are allocated
Output: 12
Option 3:
    unsigned int a:1;  // First 4 bytes are allocated
    unsigned int b:1;  // Will get accommodated in the first 4 bytes
    unsigned int c:1;  // Will get accommodated in the first 4 bytes
Output: 4
Option 4:
    unsigned int b:32; // First 4 bytes are allocated
    unsigned int a:1;  // Second 4 bytes are allocated
    unsigned int c:1;  // Will get accommodated in the second 4 bytes
Output: 8
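Assuming 4-byte unsigned int storage units as above, the four layouts can be checked with a single program; the printed sizes are implementation-defined, so treat this as a verification sketch rather than a guarantee:

    #include <stdio.h>

    struct opt1 { unsigned int a:1;  unsigned int b:31; unsigned int c:1; };
    struct opt2 { unsigned int a:1;  unsigned int b:32; unsigned int c:1; };
    struct opt3 { unsigned int a:1;  unsigned int b:1;  unsigned int c:1; };
    struct opt4 { unsigned int b:32; unsigned int a:1;  unsigned int c:1; };

    int main(void) {
        /* With 4-byte storage units this typically prints 8, 12, 4, 8. */
        printf("Option 1: %zu\n", sizeof(struct opt1));
        printf("Option 2: %zu\n", sizeof(struct opt2));
        printf("Option 3: %zu\n", sizeof(struct opt3));
        printf("Option 4: %zu\n", sizeof(struct opt4));
        return 0;
    }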
You don't say whether you know what bitfields are, but I'll assume you do.
On your implementation, evidently unsigned int is a 32 bit integer, occupying 4 bytes. This accounts for the first and second examples. Clearly 3 bitfields totalling 33 bits don't fit into a single unsigned int, hence the need for 8 bytes in the first example. 3 bitfields totalling 3 bits certainly do fit into an unsigned int, hence only 4 bytes in the second example.
Furthermore, a bitfield cannot span multiple integers. This accounts for the third example. I can't remember whether that's a requirement of the standard, or just a detail of your implementation. (In fact, the C standard leaves it implementation-defined whether a bit-field that does not fit in the remaining space is put into the next unit or straddles adjacent units.) Either way, since b is 32 bits, it fills a whole unsigned int on its own, forcing both a and c to occupy their own unsigned int, before and after the middle one. Hence, 12 bytes.