The reason I am asking this question is that we are planning to read a large amount of data (several GB) from a SQL Server database into a .NET app for processing. I would like to know how much per-record overhead to budget for when estimating the impact on our network traffic.
For example, suppose a record consists of 5 integers (4 × 5 = 20 bytes of data). How many bytes are physically transferred per record? Is there a precise formula or a rule of thumb?
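As a rough rule of thumb, you can sketch the estimate yourself. The constants below are assumptions for illustration, not exact TDS figures (the protocol sends column metadata once, then a small per-row token; packet and TCP/IP headers add a little more) — verify them against a real capture before relying on the numbers:

```python
# Back-of-envelope estimator for bytes on the wire per result row.
# ASSUMED constants (verify with a network capture):
#   - 1-byte row token per row
#   - 1-byte length prefix per *nullable* column (fixed-width NOT NULL
#     columns are assumed to be sent raw, since type info travels once
#     in the result-set metadata)
#   - 8-byte TDS packet header, default packet size 4096 bytes
#   - ~40 bytes of TCP/IP headers per ~1460-byte segment

def estimate_bytes(rows, data_bytes_per_row, nullable_cols=0,
                   packet_size=4096, mss=1460):
    row_bytes = 1 + data_bytes_per_row + nullable_cols  # token + data + prefixes
    payload = rows * row_bytes
    tds_packets = -(-payload // (packet_size - 8))      # ceiling division
    tds_total = payload + 8 * tds_packets
    frames = -(-tds_total // mss)
    return tds_total + 40 * frames

# 1 million rows of five NOT NULL ints (20 data bytes each):
print(estimate_bytes(1_000_000, 20))
```

Under these assumptions, 20 MB of raw integer data comes out to roughly 21–22 MB on the wire — i.e. on the order of 10% overhead for fixed-width, non-null columns, which is usually close enough for capacity planning.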
SQL Server uses the TDS (Tabular Data Stream) protocol. The wire format is documented in the [MS-TDS] protocol specification on MSDN.
Frankly, I wouldn't worry about it. Unfortunately, GBs of data will take time to transfer no matter how it's done.
I don't have a clue about the actual wire format, but I would suggest an empirical approach: hook up Wireshark and measure the traffic yourself.
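Once you have capture totals, you can cancel out the fixed connection and metadata overhead by measuring two different row counts for the same query shape and taking the slope. A minimal sketch (the measurement numbers below are hypothetical, purely for illustration):

```python
# Derive bytes-per-row from two measured transfer totals.
# The fixed per-connection/per-query overhead cancels out of the slope.
def bytes_per_row(rows_a, bytes_a, rows_b, bytes_b):
    return (bytes_b - bytes_a) / (rows_b - rows_a)

# Hypothetical Wireshark totals: 10k rows -> 312,000 bytes,
# 100k rows -> 2,652,000 bytes:
print(bytes_per_row(10_000, 312_000, 100_000, 2_652_000))
```

With two data points like these, the per-row figure (26 bytes/row in this made-up example) is what you should multiply by your real record count when sizing the transfer.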