How do I use integer number literals when using generic types?

I wanted to implement a function computing the number of digits of an integer, generic over the integer type. Here is the code I came up with:

extern crate num;
use num::Integer;

fn int_length<T: Integer>(mut x: T) -> u8 {
    if x == 0 {
        return 1;
    }

    let mut length = 0u8;
    if x < 0 {
        length += 1;
        x = -x;
    }

    while x > 0 {
        x /= 10;
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}

And here is the compiler output:

error[E0308]: mismatched types
 --> src/main.rs:5:13
  |
5 |     if x == 0 {
  |             ^ expected type parameter, found integral variable
  |
  = note: expected type `T`
             found type `{integer}`

error[E0308]: mismatched types
  --> src/main.rs:10:12
   |
10 |     if x < 0 {
   |            ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`

error: cannot apply unary operator `-` to type `T`
  --> src/main.rs:12:13
   |
12 |         x = -x;
   |             ^^

error[E0308]: mismatched types
  --> src/main.rs:15:15
   |
15 |     while x > 0 {
   |               ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`

error[E0368]: binary assignment operation `/=` cannot be applied to type `T`
  --> src/main.rs:16:9
   |
16 |         x /= 10;
   |         ^ cannot use `/=` on type `T`

I understand that the problem comes from my use of constants within the function, but I don't understand why the Integer trait bound doesn't solve this.

The documentation for Integer says it implements the PartialOrd, etc. traits with Self (which I assume refers to Integer). Since integer constants also implement the Integer trait, aren't the operations defined, and shouldn't the code compile without errors?

I tried suffixing my constants with i32, but the error messages are the same, with {integer} replaced by i32.

asked Feb 17 '15 by Léo Ercolanelli


2 Answers

Many things are going wrong here:

  1. As Shepmaster says, 0 and 1 cannot be converted to everything implementing Integer. Use Zero::zero and One::one instead.
  2. 10 can definitely not be converted to anything implementing Integer; you need to use NumCast for that.
  3. a /= b is not sugar for a = a / b; it is a separate trait (DivAssign) that Integer does not require (see the short sketch after this list).
  4. -x is a unary operation which is not part of Integer but requires the Neg trait (since it only makes sense for signed types).
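
To make point 3 concrete, here is a minimal sketch (the halve_in_place helper and its From<u8> bound are mine, purely for illustration): x = x / d gets by with the Div that Integer already implies, while x /= d needs the separate DivAssign trait as an extra bound.

use std::ops::DivAssign;

// In-place division requires DivAssign; Integer alone does not provide it.
// From<u8> is only used here to obtain a generic "2" without reaching for NumCast.
fn halve_in_place<T: DivAssign + From<u8>>(x: &mut T) {
    *x /= T::from(2u8);
}

fn main() {
    let mut n = 10_i32;
    halve_in_place(&mut n);
    println!("{}", n); // prints 5
}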

Here's an implementation. Note that you need a Neg<Output = T> bound to make sure that negation produces the same type as T:

extern crate num;

use num::{Integer, NumCast};
use std::ops::Neg;

fn int_length<T>(mut x: T) -> u8
where
    T: Integer + Neg<Output = T> + NumCast,
{
    if x == T::zero() {
        return 1;
    }

    let mut length = 0;
    if x < T::zero() {
        length += 1;
        x = -x;
    }

    while x > T::zero() {
        x = x / NumCast::from(10).unwrap();
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}
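
One remark on the NumCast::from(10).unwrap() line (my addition, not part of the original answer): NumCast::from returns an Option, which is None when the value cannot be represented in the target type, so the unwrap here relies on 10 fitting into T. A small standalone sketch:

extern crate num;

use num::NumCast;

fn main() {
    // Some(10): the value fits into an i8.
    let ten: i8 = NumCast::from(10_i64).unwrap();
    // None: 1000 does not fit into an i8.
    let too_big: Option<i8> = NumCast::from(1000_i64);
    println!("{} {:?}", ten, too_big); // prints "10 None"
}
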
answered Sep 22 '22 by oli_obk


The problem is that the Integer trait can be implemented by anything. For example, you could choose to implement it on your own struct! There wouldn't be a way to convert the literal 0 or 1 to your struct. I'm too lazy to show an example of implementing it, because there are 10 or so methods. ^_^

num::Zero and num::One

This is why Zero::zero and One::one exist. You can (very annoyingly) create all the other constants from repeated calls to those.

use num::{One, Zero}; // 0.4.0

fn three<T>() -> T
where
    T: Zero + One,
{
    let mut three = Zero::zero();
    for _ in 0..3 {
        three = three + One::one();
    }
    three
}
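
For instance (a hypothetical call of my own, just to show how the generic result gets pinned to a concrete type):

fn main() {
    let x: i32 = three();
    let y: u64 = three();
    println!("{} {}", x, y); // prints "3 3"
}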

From and Into

You can also use the From and Into traits to convert to your generic type:

use num::Integer; // 0.4.0
use std::ops::{DivAssign, Neg};

fn int_length<T>(mut x: T) -> u8
where
    T: Integer + Neg<Output = T> + DivAssign,
    u8: Into<T>,
{
    let zero = 0.into();
    if x == zero {
        return 1;
    }

    let mut length = 0u8;
    if x < zero {
        length += 1;
        x = -x;
    }

    while x > zero {
        x /= 10.into();
        length += 1;
    }

    length
}

fn main() {
    println!("{}", int_length(45));
    println!("{}", int_length(-45));
}
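
One caveat worth spelling out (my own note, not part of the original answer): the u8: Into<T> bound is what makes 0.into() and 10.into() infallible, but it also limits T to types that u8 converts into losslessly, which rules out i8. For instance, with the same int_length:

fn main() {
    println!("{}", int_length(45_i32));  // fine: u8 converts into i32
    println!("{}", int_length(-45_i64)); // fine: u8 converts into i64

    // This would not compile: the standard library has no impl From<u8> for i8,
    // so i8 does not satisfy the u8: Into<T> bound.
    // println!("{}", int_length(-45_i8));
}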

See also:

  • How do I use floating point number literals when using generic types?
answered Sep 24 '22 by Shepmaster