
+= operator for uint16_t promotes the assigned value to int and won't compile

Tags:

c

gcc

This is a real WTF for me; it looks like a bug in GCC, but I'd like the community to take a look and help me find a solution.

Here's the simplest program I could muster:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t i = 1;
    uint16_t j = 2;
    i += j;
    return i;
}

I'm trying to compile this on GCC with -Werror=conversion flag, which I'm using for much of my code.

Here's the result:

.code.tio.c: In function ‘main’:
.code.tio.c:9:7: error: conversion to ‘uint16_t {aka short unsigned int}’ from ‘int’ may alter its value [-Werror=conversion]
   i += j;

Same error would happen for this code:

uint16_t i = 1;
i += ((uint16_t)3);

Error is

.code.tio.c: In function ‘main’:
.code.tio.c:7:7: error: conversion to ‘uint16_t {aka short unsigned int}’ from ‘int’ may alter its value [-Werror=conversion]
   i += ((uint16_t)3);
       ^

Just to be clear, the error here is on the += operator, NOT the cast.

It looks like the operator overloading for the += with uint16_t is messed up. Or am I missing something subtle here?

For your use: MCVE

Edit: Some more of the same:

.code.tio.c:8:6: error: conversion to ‘uint16_t {aka short unsigned int}’ from ‘int’ may alter its value [-Werror=conversion]
   i = i + ((uint16_t)3);

But i = (uint16_t)(i + 3); at least works...

asked Nov 21 '17 by immortal


1 Answer

The implicit conversion happens because the += operator is defined in terms of = and +.

From section 6.5.16.2 of the C standard:

3 A compound assignment of the form E1 op= E2 is equivalent to the simple assignment expression E1 = E1 op (E2), except that the lvalue E1 is evaluated only once, and with respect to an indeterminately-sequenced function call, the operation of a compound assignment is a single evaluation

So this:

i += ((uint16_t)3); 

Is equivalent to:

i = i + ((uint16_t)3); 

In this expression, the operands of the + operator are promoted to int, and that int is assigned back to a uint16_t.
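To make that visible, the assignment can be hand-expanded into the steps the compiler performs (a sketch for illustration, not literal compiler output):

#include <stdint.h>

int main(void)
{
    uint16_t i = 1;
    uint16_t j = 2;

    /* What "i = i + j" does under the hood: both operands are
       promoted to int, the addition is done in int, and the result
       is narrowed back to uint16_t on assignment. That last,
       narrowing step is what -Wconversion flags. */
    int promoted = (int)i + (int)j;
    i = (uint16_t)promoted;

    return i;
}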

Section 6.3.1.1 details the reason for this:

2 The following may be used in an expression wherever an int or unsigned int may be used:

  • An object or expression with an integer type (other than int or unsigned int) whose integer conversion rank is less than or equal to the rank of int and unsigned int.
  • A bit-field of type _Bool, int, signed int, or unsigned int.

If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. All other types are unchanged by the integer promotions.

Because a uint16_t (a.k.a. an unsigned short int) has lower rank than int, the values are promoted when used as operands to +.
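You can observe the promotion directly with C11's _Generic (a small demonstration sketch; it assumes C11 and a typical platform where int is wider than 16 bits, so uint16_t is a distinct type from int and unsigned int):

#include <stdio.h>
#include <stdint.h>

#define TYPE_NAME(x) _Generic((x),          \
    int:          "int",                    \
    unsigned int: "unsigned int",           \
    uint16_t:     "uint16_t",               \
    default:      "something else")

int main(void)
{
    uint16_t i = 1;
    uint16_t j = 2;

    /* i itself is uint16_t, but i + j is int because both
       operands are promoted before the addition. */
    printf("i     has type %s\n", TYPE_NAME(i));
    printf("i + j has type %s\n", TYPE_NAME(i + j));

    return 0;
}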

You can get around this by breaking up the += operator and casting the right hand side. Also, because of the promotion, the cast on the value 3 has no effect so that can be removed:

i = (uint16_t)(i + 3); 
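Applied to the question's original two-variable program, the same pattern builds cleanly with -Werror=conversion (a minimal sketch of the workaround, not the only possible one):

#include <stdint.h>

int main(void)
{
    uint16_t i = 1;
    uint16_t j = 2;

    /* The addition still happens in int after promotion, but the
       explicit cast tells the compiler the narrowing is intended,
       so -Werror=conversion stays quiet. */
    i = (uint16_t)(i + j);

    return i;
}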

Note however that this operation is subject to overflow, which is one of the reasons a warning is given when there is no cast. For example, if i has value 65535, then i + 3 has type int and value 65538. When the result is cast back to uint16_t, the value 65536 is subtracted from this value yielding the value 2, which then gets assigned back to i.

This behavior is well defined in this case because the destination type is unsigned. If the destination type were signed, the result would be implementation defined.
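A short program makes that wraparound concrete (a demonstration sketch of the behaviour described above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t i = 65535;          /* UINT16_MAX */

    /* i + 3 is computed in int as 65538; casting back to uint16_t
       reduces it modulo 65536, so i ends up as 2. This is well
       defined because the destination type is unsigned. */
    i = (uint16_t)(i + 3);

    printf("%u\n", (unsigned)i); /* prints 2 */

    return 0;
}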

answered Oct 02 '22 by dbush