 

Convert [8]byte to a uint64

Tags:

go

Hello, all. I'm encountering what seems to be a very strange problem. (It could be that it's far past when I should be asleep, and I'm overlooking something obvious.)

I have a []byte with length 8 as the result of some hex decoding, and I need to produce a uint64 in order to use it. I have tried binary.Uvarint() from encoding/binary to do so, but it seems to use only the first byte in the slice. Consider the following example.

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
    num, _ := binary.Uvarint(array[0:8])
    fmt.Printf("%v, %x\n", array, num)
}

Here it is on play.golang.org.

When that is run, it displays num as 0, even though, in hex, it should be 000108000801ab01. Furthermore, if one captures the second value returned by binary.Uvarint(), which is the number of bytes read from the buffer, it is 1, even though, to my knowledge, it should be 8.

Am I interpreting this wrong? If so, what should I be using instead?

Thanks, you all. :)

Alexander Bauer asked Dec 01 '12


3 Answers

You're decoding with a function that doesn't do what you need:

Varints are a method of encoding integers using one or more bytes; numbers with smaller absolute value take a smaller number of bytes. For a specification, see http://code.google.com/apis/protocolbuffers/docs/encoding.html.

It's not a plain fixed-width encoding but a very specific variable-length one. That's why decoding stops at the first byte whose high bit is clear, i.e. whose value is less than 0x80.
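
As a quick illustration, the very first byte of the question's slice is 0x00, whose high bit is clear, so Uvarint treats it as a complete one-byte varint and returns 0 with n = 1:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // 0x00 has its high bit clear, so Uvarint stops after this single byte.
    v, n := binary.Uvarint([]byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01})
    fmt.Println(v, n) // 0 1
}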

As pointed out by Stephen, binary.BigEndian and binary.LittleEndian provide useful functions to decode directly:

type ByteOrder interface {
    Uint16([]byte) uint16
    Uint32([]byte) uint32
    Uint64([]byte) uint64
    PutUint16([]byte, uint16)
    PutUint32([]byte, uint32)
    PutUint64([]byte, uint64)
    String() string
}

So you may use

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
    num := binary.LittleEndian.Uint64(array)
    fmt.Printf("%v, %x", array, num)
}

or, if you want to check errors instead of panicking (thanks to jimt for pointing out this problem with the direct solution):

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

func main() {
    array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
    var num uint64
    // Handle the error; a declared-but-unused err would not compile.
    if err := binary.Read(bytes.NewBuffer(array), binary.LittleEndian, &num); err != nil {
        fmt.Println("binary.Read failed:", err)
        return
    }
    fmt.Printf("%v, %x\n", array, num)
}
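
Note that the expected value in the question, 000108000801ab01, treats the first byte of the slice as the most significant one. If that is indeed the byte order of your hex-decoded data (it looks that way from the question, but only you can tell), use binary.BigEndian instead:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
    // First byte is the most significant byte.
    num := binary.BigEndian.Uint64(array)
    fmt.Printf("%x\n", num) // 108000801ab01, i.e. 0x000108000801ab01
}
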
Denys Séguret answered Nov 09 '22


If you don't care about byte order, you can try this:

arr := [8]byte{1,2,3,4,5,6,7,8}
num := *(*uint64)(unsafe.Pointer(&arr[0]))

http://play.golang.org/p/aM2r40ANQC
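
For reference, a self-contained version of that snippet (it needs the unsafe import, and the result depends on the machine's native byte order):

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    arr := [8]byte{1, 2, 3, 4, 5, 6, 7, 8}
    // Reinterpret the array's memory directly as a uint64.
    num := *(*uint64)(unsafe.Pointer(&arr[0]))
    fmt.Printf("%x\n", num) // 807060504030201 on a little-endian machine
}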

Eric answered Nov 09 '22


If you look at the implementation of Uvarint, you will see that it is not as straightforward a conversion as you expect.

To be honest, I haven't yet figured out what kind of byte format it expects (see edit).

But to write your own is close to trivial:

// Uvarint interprets buf as a big-endian unsigned integer,
// using at most the first 8 bytes.
func Uvarint(buf []byte) (x uint64) {
    for i, b := range buf {
        x = x<<8 + uint64(b)
        if i == 7 {
            return
        }
    }
    return
}
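
Calling it on the slice from the question gives the value the hex suggests:

// Assuming the Uvarint function above is in scope:
array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
fmt.Printf("%x\n", Uvarint(array)) // 108000801ab01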

Edit

The byte format is not one I am familiar with. It is a variable-width encoding where the highest bit of each byte is a flag.
If it is 0, that byte is the last in the sequence.
If it is 1, the encoding continues with the next byte.

Only the lower 7 bits of each byte are used to build the uint64 value. The first byte sets the lowest 7 bits of the uint64, the following byte the next 7 bits, and so on.
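
To make that concrete, the two bytes 0xab, 0x01 decode to 171: 0xab contributes its low 7 bits (0x2b) and its high bit says "keep reading", while 0x01 contributes the next 7 bits and ends the sequence, giving 0x2b | 0x01<<7 = 171:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // 0xab = 1010_1011: high bit set -> continue; low 7 bits are 0x2b.
    // 0x01 = 0000_0001: high bit clear -> last byte; low 7 bits are 0x01.
    v, n := binary.Uvarint([]byte{0xab, 0x01})
    fmt.Println(v, n) // 171 2
}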

ANisus answered Nov 09 '22