When you use a timestamp column in Entity Framework, it is backed by the rowversion column type in SQL Server and represented in the CLR as byte[] (according to the docs). The column is 8 bytes long.
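For reference, the kind of mapping I mean looks roughly like this (the Order class and property names are just an illustration, not code from a real project):

```csharp
using System.ComponentModel.DataAnnotations;

public class Order
{
    public int Id { get; set; }

    // Mapped to a SQL Server rowversion column; EF exposes it as an 8-byte array
    // and uses it as a concurrency token.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}
```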
Why did they decide to use byte[] instead of UInt64? It would hold the value just fine. Are there any non-obvious benefits to using byte[], or is it just so EF can work with other DB engines, which might implement a rowversion-like column as a different data type internally?
The purpose of the rowversion/timestamp field is that every time the row is updated, the field gets a new unique value. The fact that for some implementations it is a 'timestamp' is irrelevant.
According to this page, in MS SQL it is an incrementing number; according to this page, in MySQL it is a timestamp.
Therefore, an array of bytes makes the most sense compatibility-wise.
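If you do want a numeric view of the value (for logging, ordering, etc.), nothing stops you from converting the array yourself. A minimal sketch, assuming the 8 bytes arrive most-significant-byte-first, which is how SQL Server returns rowversion values:

```csharp
using System;

static class RowVersionHelper
{
    // Converts an 8-byte rowversion array to a UInt64 for display or comparison.
    // Assumes big-endian byte order from the database (an assumption, not part
    // of the original post).
    public static ulong ToUInt64(byte[] rowVersion)
    {
        var copy = (byte[])rowVersion.Clone();
        if (BitConverter.IsLittleEndian)
            Array.Reverse(copy); // flip to the machine's native order
        return BitConverter.ToUInt64(copy, 0);
    }
}
```

This is purely a convenience at the application layer; EF itself only needs the opaque byte[] to detect that the value has changed.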