Timing user tasks with seconds precision

I'm building a website where I need to time users' tasks, show them the elapsed time as it runs, and record how long each task took to complete. The timer should be accurate to the second, and an entire task should take about 3-4 hours at most.
I should also prevent the user from forging the completion time (there is no money involved, so it's not really high-risk, but there is some risk).

Currently I store a timestamp of when the user began and, at the same time, initialize a JS-based timer. When the user finishes, I get a notification and calculate the difference between the current time and the starting timestamp. This approach is no good: there is a difference of a few seconds between the user's timer and my calculated duration (i.e. the time I calculated it took the user to complete the task; note that this was only tested in my dev environment, since I don't have any other environment yet).
Two other approaches I considered are:
1. Relying entirely on a client-side timer (i.e. JS), and when the user completes the task, sending the elapsed time encrypted (so the user can't forge a start time). This doesn't seem very practical, since I can't figure out a way to generate a secret key on the client side that would really be "secret".
2. Relying entirely on a server-side timer and sending "ticks" every second. This seems like a lot of server-side work compared to the other two methods (machine work, not human, e.g. accessing the DB on every "tick" to get the start time), and I'm also not sure it would be completely accurate.

EDIT:
Here's what's happening now in algorithm wording:

  1. User starts the task: the server sends the user a task id and records the start time in the DB; the client-side timer is initialized.
  2. User works on the task; their timer is running.
  3. User finishes the task: the timer is stopped, and the user's answer and task id are sent to the server.
  4. The server retrieves the start time (using the received task id) and calculates how long it took the user to complete the task.

Problem: the time as calculated by the server and the time displayed on the client side are different.
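For concreteness, here's a minimal sketch of what that flow might look like on the client (the /task/start and /task/finish endpoints and the "timer" element are placeholders, not the actual code):

```js
// Client side: start the visible timer once the server hands back a task id.
let startedAt, ticker;

async function startTask() {
  const res = await fetch('/task/start', { method: 'POST' });
  const { taskId } = await res.json();
  startedAt = Date.now();
  ticker = setInterval(() => {
    const elapsed = Math.floor((Date.now() - startedAt) / 1000);
    document.getElementById('timer').textContent = elapsed + 's';
  }, 1000);
  return taskId;
}

// Client side: stop the timer and submit the answer; the server then computes
// its own duration as (now - stored start time), which is where the drift shows up.
async function finishTask(taskId, answer) {
  clearInterval(ticker);
  await fetch('/task/finish', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ taskId, answer })
  });
}
```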

Any insight will be much appreciated.

asked Jul 18 '11 by Oren A


2 Answers

If I've understood correctly, the problem is that the server and client times are slightly different, which they always will be.

So I'd slightly tweak your original sequence as follows:

  1. User starts the task: the server sends the user a task id and records the start time in the DB; the client-side timer is initialized.
  2. The client notifies the server of the client start time, which is recorded in the DB alongside the server start time.
  3. User works on the task; their timer is running.
  4. User finishes the task: the timer is stopped, and the user's elapsed time, answer and task id are sent to the server.
  5. Upon receipt, the server notes the incoming request time, retrieves the start times, and calculates how long the task took according to both the server times (start/finish) and the client times.
  6. The server checks that the client value is within an acceptable range of the server-verified time and, if so, uses the client time. If the client time is not within the acceptable range (e.g. 30 seconds), the server figure is used instead.

There will always be slight differences in time due to latency, server load, etc., so using the client values will be more accurate from the user's point of view and just as secure, because those values are sanity-checked against the server's.
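A rough sketch of that check on the server (an Express-style handler is assumed here; findTask, saveResult and the request field names are placeholders, not part of the original setup):

```js
const TOLERANCE_MS = 30 * 1000; // acceptable gap between client and server figures

app.post('/task/finish', async (req, res) => {
  const { taskId, clientElapsedMs, answer } = req.body;
  const task = await findTask(taskId);                 // assumed DB helper
  const serverElapsedMs = Date.now() - task.serverStartTime;

  // Use the client's figure when it is plausible, otherwise fall back to the server's.
  const plausible = Math.abs(serverElapsedMs - clientElapsedMs) <= TOLERANCE_MS;
  const elapsedMs = plausible ? clientElapsedMs : serverElapsedMs;

  await saveResult(taskId, answer, elapsedMs);         // assumed DB helper
  res.json({ elapsedSeconds: Math.round(elapsedMs / 1000) });
});
```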


To answer the comment:

You can only be accurate in one sense: either accurate in terms of what the client/user sees, or accurate in terms of what the server knows. Anything coming from the client side could be tainted, so there has to be a compromise somewhere. You can minimise the gap with measurement and offsets, so that the difference at the end is within the same range as the difference at the start, using the server time, but it will never be 100% tamper-proof. If it's really that much of an issue, then store times with less precision.

If you really must have both accuracy and reliability, then the only way is to use the server time, periodically fetch it via AJAX for display, and use a local timer to fill in the gaps, with a sliding adjustment between the actual and reported times.
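A sketch of that idea, assuming a hypothetical /time endpoint that returns the server's epoch milliseconds:

```js
let offsetMs = 0; // serverNow - clientNow, refreshed periodically

async function syncWithServer() {
  const before = Date.now();
  const { serverNow } = await (await fetch('/time')).json();
  const after = Date.now();
  // Assume the response arrived roughly halfway through the round trip.
  offsetMs = serverNow - (before + (after - before) / 2);
}

// The displayed clock is always derived from the server's time plus the local offset.
function serverTimeNow() {
  return Date.now() + offsetMs;
}

syncWithServer();
setInterval(syncWithServer, 60 * 1000); // re-sync every minute; the local timer fills the gaps
```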

answered by Richard Harrison


I think this will work. Seems like you've got a synchronization issue and also a cryptography issue. My suggestion is to work around the synchronization issue in a way invisible to the user, while still preserving security.

Idea: Compute and display the ticks client side, but use cryptographic techniques to prevent the user from sending a forged time. As long as the user's reported time is close to the server's measured time, just use the user's time. Otherwise, claim forgery.

  1. Client asks server for a task.

  2. Server gets the current timestamp, and encrypts it with its own public key. This is sent back to the client along with the task (which can be plain text).

  3. The client works until they are finished. Ticks are recorded locally in JS.

  4. The client finishes and sends the server back its answer, the number of ticks it recorded, and the encrypted timestamp the server first sent it.

  5. The server decrypts the timestamp, and compares it with the current local time to get a number of ticks.

  6. If the server's computed number of ticks is within some tolerance (say, 10 seconds, to be safe), the server accepts the user's reported time. Otherwise, it knows the time was forged.

Because the user's time is accepted (so long as it is within reason), the user never knows that the server time could be out of sync with their reported time. Since the time periods you're tracking are long, losing a few seconds of accuracy doesn't seem like it will be an issue. The method requires only the encryption of a single timestamp, so it should be fast.
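For what it's worth, since the server both issues and verifies the value, a keyed HMAC over the start timestamp achieves the same goal as public-key encryption and is cheaper. A sketch in Node.js (the secret name, token shape and 10-second tolerance are assumptions, not part of the answer above):

```js
const crypto = require('crypto');
const SECRET = process.env.TASK_SECRET; // server-only key (placeholder name)

// Issue a task: hand the client the start timestamp plus a signature it cannot forge.
function issueStartToken() {
  const startedAt = Date.now().toString();
  const sig = crypto.createHmac('sha256', SECRET).update(startedAt).digest('hex');
  return { startedAt, sig };
}

// Verify on completion: recompute the signature, then compare elapsed times.
function verifyCompletion(startedAt, sig, clientTicks, toleranceMs = 10 * 1000) {
  const expected = crypto.createHmac('sha256', SECRET).update(startedAt).digest('hex');
  if (!crypto.timingSafeEqual(Buffer.from(sig, 'hex'), Buffer.from(expected, 'hex'))) {
    return { ok: false, reason: 'tampered start token' };
  }
  const serverElapsedMs = Date.now() - Number(startedAt);
  const clientElapsedMs = clientTicks * 1000;
  if (Math.abs(serverElapsedMs - clientElapsedMs) > toleranceMs) {
    return { ok: false, reason: 'reported time out of tolerance' };
  }
  return { ok: true, elapsedMs: clientElapsedMs };
}
```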

answered by John Doucette