
structs with uint8_t on a MCU without uint8_t datatype

Tags:

c

embedded

uint8t

I am an embedded software developer and I want to interface to an external device. This device sends data via SPI. The structure of that data is predefined by the external device manufacturer and can't be edited. The manufacturer provides some header files with many typedefs for all the data sent via SPI. The manufacturer also offers an API to handle the received packets in the correct way (I have access to the source of that API).

Now to my problem: the typedefed structures contain many uint8_t datatypes. Unfortunately, our MCU doesn't support uint8_t, because its smallest type is 16 bits wide (so even a char has 16 bits).
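
(For reference, this property can be checked at compile time with CHAR_BIT from <limits.h>; a minimal sketch:)

#include <limits.h>

/* On our MCU a char is 16 bits wide, so CHAR_BIT is 16 and
   sizeof(uint16_t) == 1. */
#if CHAR_BIT != 16
#error "this code assumes a platform with 16-bit chars"
#endif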

To use the API correctly, the structures must be filled with the data received via SPI. Since the incoming data is byte-packed, we can't just copy it into the structs, because our structs use 16 bits for those 8-bit types. As a result, we need many bit-shift operations to assign the received data correctly.

Example (the manufacturer's typedef struct):

typedef struct NETX_COMMUNICATION_CHANNEL_INFOtag
{
  uint8_t   bChannelType;              //uint16_t in our system
  uint8_t   bChannelId;                //uint16_t in our system
  uint8_t   bSizePositionOfHandshake;  //uint16_t in our system
  uint8_t   bNumberOfBlocks;           //uint16_t in our system
  uint32_t  ulSizeOfChannel;           
  uint16_t  usCommunicationClass;      
  uint16_t  usProtocolClass;           
  uint16_t  usProtocolConformanceClass;
  uint8_t   abReserved[2];             //uint16_t in our system
} NETX_COMMUNICATION_CHANNEL_INFO;

Can anybody think of an easy workaround for this problem? I really don't want to write a separate bit-shift operation for every received packet type (a waste of performance, time, and space).
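
For illustration, this is the kind of per-field unpacking I would like to avoid. A sketch, assuming the SPI driver delivers the byte stream packed big-endian, two wire bytes per 16-bit word (unpack_channel_info and pawRxData are names made up for the example):

/* Sketch only: pawRxData holds the received byte stream packed
   two wire bytes per 16-bit word, first wire byte in the high half. */
static void unpack_channel_info(const uint16_t *pawRxData,
                                NETX_COMMUNICATION_CHANNEL_INFO *ptInfo)
{
    ptInfo->bChannelType             = (pawRxData[0] >> 8) & 0xFFu;
    ptInfo->bChannelId               =  pawRxData[0]       & 0xFFu;
    ptInfo->bSizePositionOfHandshake = (pawRxData[1] >> 8) & 0xFFu;
    ptInfo->bNumberOfBlocks          =  pawRxData[1]       & 0xFFu;
    /* ...and so on, for every field of every packet type. */
}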

My idea (using bitfields to pack 2 × uint8_t into a uint16_t or 4 × uint8_t into a uint32_t):

typedef struct NETX_COMMUNICATION_CHANNEL_INFOtag
{
  struct packet_uint8{
    uint32_t  bChannelType              :8;
    uint32_t  bChannelId                :8;
    uint32_t  bSizePositionOfHandshake  :8;
    uint32_t  bNumberOfBlocks           :8;
  }packet_uint8;
  uint32_t  ulSizeOfChannel;               
  uint16_t  usCommunicationClass;          
  uint16_t  usProtocolClass;               
  uint16_t  usProtocolConformanceClass;    
  uint16_t  abReserved;                    
} NETX_COMMUNICATION_CHANNEL_INFO;

Now I am not sure whether this solution is going to work, since the order of the bits inside a bitfield is not necessarily the order in the source file. (Or is it, if all the bitfield members have the same size?)
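
One way I could guard against that is a small self-test that writes a known pattern through the bitfield and checks where it lands; a sketch (packet_layout_ok is a made-up name, and the result is compiler-specific by design):

/* Sketch: nonzero if this compiler allocates the first bitfield
   member (bChannelType) in the low-order byte of the word, which
   is what the wire format would require here. Bitfield layout is
   implementation-defined, so it has to be verified per
   compiler/target. */
static int packet_layout_ok(void)
{
    union {
        struct packet_uint8 bf;   /* the bitfield struct from above */
        uint32_t            raw;
    } u;

    u.raw = 0;
    u.bf.bChannelType = 0xAAu;
    return (u.raw & 0xFFu) == 0xAAu;
}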

I hope I described the problem well enough for you to understand.

Thanks and Regards.

asked Nov 08 '18 08:11 by Muperman

1 Answer

Your compiler manual should describe how the bitfields are laid out. Read it carefully. There is also something called __attribute__((byte_peripheral)) that should help with packing bitfields sanely on memory-mapped devices.


If you're unsure about the bitfields, just use uint16_t for these fields and access macros with bit shifts, for example:

#define FIRST(x) ((x) >> 8)
#define SECOND(x) ((x) & 0xFF)

...
    uint16_t channel_type_and_id;   /* two 8-bit wire fields packed in one word */
...

int channel_type = FIRST(x->channel_type_and_id);
int channel_id   = SECOND(x->channel_type_and_id);

Then you just need to be sure of the byte order of the platform. If you ever need to change endianness (which the MCU seems to support?), you can simply redefine these macros.
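
For example (a sketch; SPI_BYTES_SWAPPED is a hypothetical configuration macro, not part of any API):

#ifdef SPI_BYTES_SWAPPED
#define FIRST(x)  ((x) & 0xFF)   /* first wire byte in the low half   */
#define SECOND(x) ((x) >> 8)     /* second wire byte in the high half */
#else
#define FIRST(x)  ((x) >> 8)     /* first wire byte in the high half  */
#define SECOND(x) ((x) & 0xFF)   /* second wire byte in the low half  */
#endif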


A bitfield would most probably still be implemented in terms of bit shifts, so there wouldn't be much savings. And if there are byte-access instructions for registers, the compiler would know to optimize x & 0xff to use them.
