I have some sectors on my drive that read poorly. I could measure the read time required by each sector and then compare the times of the good sectors against the bad ones.
I could use a processor timer to take the measurements. How do I write a program in C/Assembly that measures the exact time it takes to read each sector?
So the procedure would be something like this (a C sketch follows the list):
Start the timer
Read the disk sector
Stop the timer
Read the time measured by the timer
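A minimal sketch of those four steps, assuming Linux and a hypothetical device path (/dev/sdX is a placeholder), using the portable clock_gettime() as the timer rather than the TSC discussed in the answer below. O_DIRECT is used so each read bypasses the page cache and actually touches the disk:

```c
/* Sketch only: assumes Linux; /dev/sdX is a hypothetical device path. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define SECTOR_SIZE 512

int main(void)
{
    int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires a sector-aligned buffer. */
    void *buf;
    if (posix_memalign(&buf, SECTOR_SIZE, SECTOR_SIZE) != 0) return 1;

    for (off_t sector = 0; sector < 1000; sector++) {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);   /* start the timer      */
        ssize_t n = pread(fd, buf, SECTOR_SIZE, sector * SECTOR_SIZE);
        clock_gettime(CLOCK_MONOTONIC, &t1);   /* stop the timer       */

        /* Read the time measured: nanoseconds between the two stamps. */
        long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                + (t1.tv_nsec - t0.tv_nsec);
        printf("sector %lld: %zd bytes in %ld ns\n",
               (long long)sector, n, ns);
    }

    free(buf);
    close(fd);
    return 0;
}
```

Compile with something like gcc -O2 sector_time.c and run it with enough privilege to read the raw device.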
The most useful functionality is the rdtsc instruction (ReaD Time Stamp Counter), which increments once per tick of the processor's internal clock; a 3 GHz processor increments it 3 billion times per second. It returns a 64-bit unsigned integer containing the number of clock cycles since the processor was powered on.
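With GCC or Clang you can reach the instruction from C either through the __rdtsc() intrinsic in <x86intrin.h> or with two lines of inline assembly; a sketch of the inline-assembly version:

```c
#include <stdint.h>

/* Read the 64-bit time stamp counter. RDTSC returns the low 32 bits
   in EAX and the high 32 bits in EDX (GCC/Clang inline asm syntax). */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}
```

Wrap two calls around the sector read and subtract: the difference is the cycle count for that one read.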
Obviously, the difference between two read-outs is the number of clock cycles consumed by the code sequence executed in between. On a 3 GHz machine you could use any of the following formulas to convert it to fractions of a second:
(time_difference + 150) / 300 gives a rounded-off elapsed time in 0.1 us (tenths of microseconds)
(time_difference + 1500) / 3000 gives a rounded-off elapsed time in us (microseconds)
(time_difference + 1500000) / 3000000 gives a rounded-off elapsed time in ms (milliseconds)
The 0.1 us formula is the most precise you can use without having to adjust for read-out overhead.
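As a sketch, the three conversions expressed as C helpers, assuming the 3 GHz figure above (adding half the divisor before dividing is what produces the rounding instead of truncation):

```c
#include <stdint.h>
#include <stdio.h>

/* Assumes a 3 GHz TSC, as in the text: 300 ticks = 0.1 us,
   3000 ticks = 1 us, 3000000 ticks = 1 ms. */
static uint64_t tenths_of_us(uint64_t d) { return (d + 150)     / 300; }
static uint64_t microseconds(uint64_t d) { return (d + 1500)    / 3000; }
static uint64_t milliseconds(uint64_t d) { return (d + 1500000) / 3000000; }

int main(void)
{
    uint64_t time_difference = 1234567;   /* example TSC delta */
    printf("%llu x 0.1 us, %llu us, %llu ms\n",
           (unsigned long long)tenths_of_us(time_difference),
           (unsigned long long)microseconds(time_difference),
           (unsigned long long)milliseconds(time_difference));
    return 0;
}
```

On real hardware you would calibrate the actual TSC rate first rather than hard-coding 3 GHz.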