This is a question about programming style in Swift, specifically Int vs UInt.
The Swift Programming Language Guide advises programmers to use the generic signed integer type Int
even when variables are known to be non-negative. From the guide:
Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this is not the case, Int is preferred, even when the values to be stored are known to be non-negative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.
However, UInt will be 32-bit unsigned on 32-bit architectures and 64-bit unsigned on 64-bit architectures, so there is no performance benefit to using Int over UInt.
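The guide's interoperability point is worth illustrating: integer literals and standard library APIs default to Int, so sticking with Int keeps arithmetic conversion-free. A minimal sketch (using current Swift syntax; the values are made up):

```swift
let scores = [90, 85, 72]        // integer literals infer [Int]
let count = scores.count         // Array's count property is an Int
let total = scores.reduce(0, +)  // sums to an Int with no conversions
let average = total / count      // everything stays Int throughout
print(average)                   // prints 82
```

Had any of these been UInt, each step would have needed an explicit conversion.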
By contrast, the Swift guide gives a later example:
let age = -3
assert(age >= 0, "A person's age cannot be less than zero")
// this causes the assertion to trigger, because age is not >= 0
Here, a runtime issue could be caught at compile time if the code had been written as:
let age:UInt = -3 // this causes a compiler error because -3 is negative
There are many other cases (for example, anything that will index a collection) where using a UInt would catch issues at compile time rather than at runtime.
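As a sketch of that compile-time benefit: a lookup helper that takes a UInt index rejects a negative literal at compile time, where an Int-based API would only fail at runtime. The element(of:at:) helper here is hypothetical, not a standard API:

```swift
// Hypothetical helper: a UInt index makes negative literal
// indices a compiler error instead of a runtime crash.
func element<T>(of array: [T], at index: UInt) -> T? {
    guard index < UInt(array.count) else { return nil }
    return array[Int(index)]
}

let letters = ["a", "b", "c"]
element(of: letters, at: 2)     // Optional("c")
// element(of: letters, at: -1)  // compiler error: -1 is not a UInt
```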
So the question: is the advice in the Swift Programming Language guide sound, and do the benefits of using Int "even when the values to be stored are known to be non-negative" outweigh the safety advantages of using UInt?
Additional note: Having used Swift for a couple of weeks now, it's clear that for interoperability with Cocoa, UInt is required. For example, the AVFoundation framework uses unsigned integers anywhere a "count" is required (number of samples / frames / channels, etc.). Converting these values to Int could lead to serious bugs where values are greater than Int.max.
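Later Swift versions added failable conversion initializers such as Int(exactly:), which make that hazard explicit by returning nil instead of trapping when a value doesn't fit. A sketch, with an assumed frame-count value standing in for what a framework might return:

```swift
// frameCount stands in for an unsigned value returned by a
// framework such as AVFoundation (the value here is made up).
let frameCount: UInt = 48_000

if let frames = Int(exactly: frameCount) {
    print("safe to use as Int: \(frames)")
} else {
    print("value exceeds Int.max on this platform")
}
```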
I don't think using UInt is as safe as you think it is. As you noted:
let age:UInt = -3
results in a compiler error. I also tried:
let myAge:Int = 1
let age:UInt = UInt(myAge) - 3
which also resulted in a compiler error. However, the following (in my opinion much more common in real programs) scenarios had no compiler error, but actually resulted in runtime errors of EXC_BAD_INSTRUCTION:
func sub10(num: Int) -> UInt {
    return UInt(num - 10) // Runtime error when num < 10
}
sub10(4)
as well as:
class A {
    var aboveZero:UInt
    init() { aboveZero = 1 }
}
let a = A()
a.aboveZero = a.aboveZero - 10 // Runtime error
Had these been plain Ints, instead of crashing, you could add code to check your conditions:
if a.aboveZero > 0 {
    // Do your thing
} else {
    // Handle bad data
}
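Later Swift also gained overflow-reporting arithmetic on its integer types, which offers a middle ground: keep the UInt but detect underflow instead of crashing. A sketch:

```swift
let balance: UInt = 4

// subtractingReportingOverflow returns the wrapped result plus a
// flag, rather than trapping the way plain `-` does on underflow.
let (result, didOverflow) = balance.subtractingReportingOverflow(10)
if didOverflow {
    print("would underflow; handle the bad data")
} else {
    print("new balance: \(result)")
}
```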
I might even go so far as to equate their advice against using UInts to their advice against using implicitly unwrapped optionals: don't do it unless you are certain you won't get any negatives, because otherwise you'll get runtime errors (except in the simplest of cases).
It says in your question: "A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference."
This avoids issues such as assigning an Int to a UInt. A negative Int value assigned to a UInt results in a large positive value instead of the intended negative value, because the binary representation doesn't differentiate one type from the other.
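Swift exposes that bit-level reinterpretation explicitly via UInt(bitPattern:), which shows why an accidental crossover is dangerous. A quick sketch:

```swift
let negative = -3

// UInt(bitPattern:) reuses the Int's two's-complement bits, so a
// small negative number becomes a huge positive one.
let reinterpreted = UInt(bitPattern: negative)
// on a 64-bit platform, reinterpreted == UInt.max - 2
```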
Also, the two are distinct types, with neither implicitly convertible to the other. APIs built to receive Ints cannot receive UInts without overloading, meaning converting between the two would be a common task if UInts are used when most of the framework expects Ints. Converting between the two can become a non-trivial task as well.
The two previous paragraphs speak to "interoperability" and "converting between different number types": issues that are avoided if UInts aren't used.