I am working with a dataset that has several columns representing integer ID numbers (e.g. transactionId and accountId). These IDs are often 12 digits long, which makes them too large to store as a 32-bit integer.
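For illustration, here is what happens when a made-up 12-digit ID is coerced to R's base 32-bit integer type:

```r
# A made-up 12-digit ID; base R integers are 32-bit
as.integer(123456789012)
#> Warning: NAs introduced by coercion to integer range
#> [1] NA

.Machine$integer.max  # the largest representable 32-bit integer
#> [1] 2147483647
```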
What's the best approach in a situation like this?
I have been warned about the dangers of testing equality with doubles, but I'm not sure whether that is a problem when the values are used purely as IDs: I might merge and filter on them, but I never do arithmetic on the ID numbers.
Character strings intuitively seem like they should be slower for equality tests and merges, but maybe in practice it doesn't make much of a difference.
See the comment by Roland on the original question: your IDs should be character vectors. Since it is very unlikely that IDs will be used in math-like operations, it is generally safer to store them as character vectors. He also points out that merges on character vectors in data.table are very fast. Perhaps not as fast as integer merges, but fast nonetheless, and in most cases this should be fine.
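A minimal sketch of the character-ID approach with data.table; the table names, column names, and ID values below are made up for illustration:

```r
library(data.table)

# Store IDs as character from the start
transactions <- data.table(
  transactionId = c("123456789012", "123456789013"),
  amount        = c(10.5, 20.0)
)
accounts <- data.table(
  transactionId = c("123456789012", "123456789013"),
  accountId     = c("999888777666", "999888777665")
)

# Keyed merges on character columns are fast in data.table
setkey(transactions, transactionId)
setkey(accounts, transactionId)
merged <- transactions[accounts]

# When reading from a file, force ID columns to character so they
# are never silently parsed as doubles, e.g.:
# fread("transactions.csv", colClasses = c(transactionId = "character"))
```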
If it's performance you are after, use bit64. From the package description:
With ’integer64’ vectors you can store very large integers at the expense of 64 bits, which is by factor 7 better than ’int64’ from package ’int64’. Due to the smaller memory footprint, the atomic vector architecture and using only S3 instead of S4 classes, most operations are one to three orders of magnitude faster: Example speedups are 4x for serialization, 250x for adding, 900x for coercion and 2000x for object creation. Also ’integer64’ avoids an ongoing (potentially infinite) penalty for garbage collection observed during existence of ’int64’ objects (see code in example section).
See the bit64 reference manual: https://cran.r-project.org/web/packages/bit64/bit64.pdf
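A short sketch of the integer64 route (the ID values are illustrative). Note that data.table's fread can read large integer columns into this type directly via its integer64 argument:

```r
library(bit64)

# Create 64-bit integer IDs. Pass them as quoted strings: a bare
# numeric literal would be converted through double first.
ids <- as.integer64(c("123456789012", "123456789013"))

# Equality tests, sorting, and merging work as with ordinary integers
ids[1] == as.integer64("123456789012")
#> [1] TRUE

# Reading a file with large integer ID columns, e.g.:
# fread("transactions.csv", integer64 = "integer64")
```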