
The maximum value for an int type in Go

Tags: numbers, go


https://groups.google.com/group/golang-nuts/msg/71c307e4d73024ce?pli=1

The germane part:

Since integer types use two's complement arithmetic, you can infer the min/max constant values for int and uint. For example,

const MaxUint = ^uint(0)         // all bits set: the largest uint
const MinUint = 0                // the unsigned minimum is always zero
const MaxInt = int(MaxUint >> 1) // clear the sign bit: the largest int
const MinInt = -MaxInt - 1       // two's complement minimum
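
To sanity-check those constants, here's a minimal sketch (the printed values assume a 64-bit platform):

package main

import "fmt"

const (
    MaxUint = ^uint(0)
    MaxInt  = int(MaxUint >> 1)
    MinInt  = -MaxInt - 1
)

func main() {
    fmt.Println(MaxUint) // 18446744073709551615 on a 64-bit platform
    fmt.Println(MaxInt)  // 9223372036854775807
    fmt.Println(MinInt)  // -9223372036854775808
}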

As per @CarelZA's comment:

uint8  : 0 to 255 
uint16 : 0 to 65535 
uint32 : 0 to 4294967295 
uint64 : 0 to 18446744073709551615 
int8   : -128 to 127 
int16  : -32768 to 32767 
int32  : -2147483648 to 2147483647 
int64  : -9223372036854775808 to 9223372036854775807

See https://golang.org/ref/spec#Numeric_types for the physical type limits.

The max values are defined in the math package, so in your case: math.MaxUint32.

Watch out: Go does not detect integer overflow at runtime; incrementing past the max value silently wraps around.
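
For example, a short sketch of the wraparound behaviour (Go defines signed overflow to wrap as well, so neither increment panics):

package main

import (
    "fmt"
    "math"
)

func main() {
    var u uint32 = math.MaxUint32
    u++            // wraps modulo 2^32; no runtime error
    fmt.Println(u) // 0

    var i int32 = math.MaxInt32
    i++            // signed overflow also wraps in Go (no panic)
    fmt.Println(i) // -2147483648
}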


I would use the math package to get the maximum and minimum values for integers:

package main

import (
    "fmt"
    "math"
)

func main() {
    // integer max values
    fmt.Printf("max int64   = %+v\n", int64(math.MaxInt64)) // the conversion keeps this compiling on 32-bit platforms
    fmt.Printf("max int32   = %+v\n", math.MaxInt32)
    fmt.Printf("max int16   = %+v\n", math.MaxInt16)

    // integer min values
    fmt.Printf("min int64   = %+v\n", int64(math.MinInt64)) // likewise for 32-bit platforms
    fmt.Printf("min int32   = %+v\n", math.MinInt32)

    fmt.Printf("max float64 = %+v\n", math.MaxFloat64)
    fmt.Printf("max float32 = %+v\n", math.MaxFloat32)

    // etc.; see the math package docs for more constants
}

Output:

max int64   = 9223372036854775807
max int32   = 2147483647
max int16   = 32767
min int64   = -9223372036854775808
min int32   = -2147483648
max float64 = 1.7976931348623157e+308
max float32 = 3.4028234663852886e+38
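
Worth adding (this postdates the answer above): since Go 1.17 the math package also defines untyped constants math.MaxInt, math.MinInt, and math.MaxUint for the platform-sized types, so no bit-twiddling is needed. A minimal sketch:

package main

import (
    "fmt"
    "math"
)

func main() {
    fmt.Println(math.MaxInt)        // platform-sized signed max (fits in int by definition)
    fmt.Println(math.MinInt)        // platform-sized signed min
    fmt.Println(uint(math.MaxUint)) // conversion needed: the untyped constant overflows int
}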

I originally used the code taken from the discussion thread that @nmichaels used in his answer. I now use a slightly different calculation, with some comments included in case anyone else has the same query as @Arijoon:

const (
    MinUint uint = 0                 // binary: all zeroes

    // Perform a bitwise NOT to change every bit from 0 to 1
    MaxUint      = ^MinUint          // binary: all ones

    // Shift the binary number to the right (i.e. divide by two)
    // to change the high bit to 0
    MaxInt       = int(MaxUint >> 1) // binary: all ones except high bit

    // Perform another bitwise NOT to change the high bit to 1 and
    // all other bits to 0
    MinInt       = ^MaxInt           // binary: all zeroes except high bit
)

The last two steps work because of how positive and negative numbers are represented in two's complement arithmetic. The Go language specification section on Numeric types refers the reader to the relevant Wikipedia article. I haven't read that, but I did learn about two's complement from the book Code by Charles Petzold, which is a very accessible intro to the fundamentals of computers and coding.
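
As a concrete illustration (my own example, using 8-bit types so the bit patterns are short enough to print), the same steps applied to uint8 and int8:

package main

import "fmt"

func main() {
    var minUint8 uint8 = 0         // 00000000
    maxUint8 := ^minUint8          // 11111111 = 255
    maxInt8 := int8(maxUint8 >> 1) // 01111111 = 127
    minInt8 := ^maxInt8            // 10000000 = -128 in two's complement

    fmt.Printf("%08b = %d\n", maxUint8, maxUint8)
    fmt.Printf("%08b = %d\n", uint8(maxInt8), maxInt8)
    fmt.Printf("%08b = %d\n", uint8(minInt8), minInt8)
}

The same patterns scale up to the full-width int and uint.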

I put the code above (minus most of the comments) into a little integer math package.


Quick summary:

import "math/bits"
const (
    MaxUint uint = (1 << bits.UintSize) - 1
    MaxInt int = (1 << bits.UintSize) / 2 - 1
    MinInt int = (1 << bits.UintSize) / -2
)

Background:

As I presume you know, the uint type is the same size as either uint32 or uint64, depending on the platform you're on. Usually, one would use the unsized version only when there is no risk of coming close to the maximum value, because the version without a size specification can use the "native" type for the platform, which tends to be faster.

Note that it only tends to be "faster" because using a non-native type sometimes requires the processor to perform additional arithmetic and bounds-checking in order to emulate the larger or smaller integer. With that in mind, be aware that the processor (or the compiler's optimised code) will almost always outperform bounds-checking code you add yourself, so if there is any risk of overflow coming into play, it may make sense to simply use the fixed-size version and let the optimised emulation handle any fallout.

With that having been said, there are still some situations where it is useful to know what you're working with.

The package "math/bits" contains the size of uint, in bits. To determine the maximum value, shift 1 by that many bits, minus 1. ie: (1 << bits.UintSize) - 1

Note that when calculating the maximum value of uint, you'll generally need to put it explicitly into a uint (or larger) typed variable; otherwise compilation may fail, because the compiler will default to treating that constant as a signed int, where (as should be obvious) it would not fit. So:

const MaxUint uint = (1 << bits.UintSize) - 1
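
To illustrate what goes wrong without the explicit type, a small sketch (the commented-out line is the one that fails):

package main

import (
    "fmt"
    "math/bits"
)

func main() {
    // var maxUint = (1 << bits.UintSize) - 1
    // ^ fails to compile on a 64-bit platform: with no explicit type the
    //   untyped constant defaults to int, and the value overflows int.

    var maxUint uint = (1 << bits.UintSize) - 1 // OK: explicit uint type
    fmt.Println(maxUint)
}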

That's the direct answer to your question, but there are also a couple of related calculations you may be interested in.

According to the spec, uint and int are always the same size.

uint either 32 or 64 bits

int same size as uint

So we can also use this constant to determine the maximum value of int, by taking that same answer, dividing by 2, and subtracting 1, i.e. (1 << bits.UintSize) / 2 - 1.

And the minimum value of int, by shifting 1 by that many bits and dividing the result by -2, i.e. (1 << bits.UintSize) / -2.

In summary:

MaxUint: (1 << bits.UintSize) - 1

MaxInt: (1 << bits.UintSize) / 2 - 1

MinInt: (1 << bits.UintSize) / -2

Full example (the bits-based values should match the math package constants):

package main

import "fmt"
import "math"
import "math/bits"

func main() {
    var mi32 int64 = math.MinInt32
    var mi64 int64 = math.MinInt64

    var i32 uint64 = math.MaxInt32
    var ui32 uint64 = math.MaxUint32
    var i64 uint64 = math.MaxInt64
    var ui64 uint64 = math.MaxUint64
    // platform-dependent values computed from the size of uint in bits
    var ui uint64 = (1 << bits.UintSize) - 1    // MaxUint
    var i uint64 = (1 << bits.UintSize) / 2 - 1 // MaxInt
    var mi int64 = (1 << bits.UintSize) / -2    // MinInt

    fmt.Printf(" MinInt32: %d\n", mi32)
    fmt.Printf(" MaxInt32:  %d\n", i32)
    fmt.Printf("MaxUint32:  %d\n", ui32)
    fmt.Printf(" MinInt64: %d\n", mi64)
    fmt.Printf(" MaxInt64:  %d\n", i64)
    fmt.Printf("MaxUint64:  %d\n", ui64)
    fmt.Printf("  MaxUint:  %d\n", ui)
    fmt.Printf("   MinInt: %d\n", mi)
    fmt.Printf("   MaxInt:  %d\n", i)
}