I have a test in a Rails 3 test suite that makes a number of assertions comparing timestamps; it passes on my local machine but fails in our CI pipeline. The test stores a timestamp in a Postgres timestamp column with a precision of 6 and compares the stored value to the original timestamp, very similar to the following example:
tmp_time = Time.zone.now
u = User.find(1)
u.updated_at = tmp_time
u.save!
u.reload
assert_equal u.updated_at.to_i, tmp_time.to_i # passes...
assert_equal u.updated_at, tmp_time # fails...
assert_equal u.updated_at.to_f, tmp_time.to_f # fails...
I believe the problem relates to Ruby's time representation having higher precision than the stored value.
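For example, outside of Rails the precision gap is easy to see (a sketch with made-up values: Ruby's `Time` carries nanoseconds, while a Postgres `timestamp(6)` column keeps only microseconds):

```ruby
# Build a Time with full nanosecond precision (values are illustrative).
t = Time.at(1551755487, 123456789, :nanosecond)

puts t.nsec           # 123456789 -- nanosecond precision in Ruby
puts t.round(6).nsec  # 123457000 -- rounded to microseconds, as timestamp(6) keeps
puts t == t.round(6)  # false -- the sub-microsecond part is lost on round-trip
```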
What is the best way of compensating for the slight difference in values due to precision, outside of being less precise in comparisons? We have considered overriding Time.zone.now, but believe that will lead to downstream problems.
Thanks in advance.
The issue is probably not precision in the database, but rather that a small amount of time passes between when you define tmp_time and when you save. You can see that the .to_f representation of Time changes between successive calls:
irb(main):011:0> 2.times.map { Time.now.to_f }
=> [1551755487.5737898, 1551755487.573792]
This difference is usually not visible when you use .to_i, because it drops the fractional seconds entirely.
You can use Timecop, as another answer mentions, to get around this:
irb(main):013:0> Timecop.freeze { 2.times.map { Time.now.to_f } }
=> [1551755580.12368, 1551755580.12368]
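If freezing the clock isn't desirable, another option (a sketch, not from the original code; it simulates what a microsecond-precision column would return) is to compare with a tolerance via assert_in_delta, or to round both sides to the database's precision before comparing:

```ruby
require "minitest/autorun"

class TimestampPrecisionTest < Minitest::Test
  def test_compare_despite_precision_loss
    # Simulated values: `stored` is what a Postgres timestamp(6) column
    # would hand back after reducing `original` to microseconds.
    original = Time.at(1551755487, 123456789, :nanosecond)
    stored   = Time.at(1551755487, 123457000, :nanosecond)

    # Allow up to one microsecond of drift...
    assert_in_delta original.to_f, stored.to_f, 1.0e-6

    # ...or normalize both sides to microsecond precision first.
    assert_equal original.round(6), stored.round(6)
  end
end
```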