I am programming a game using OpenGL GLUT, and I am applying a game development technique that consists in measuring the time consumed by each iteration of the game's main loop, so I can use it to update the game scene proportionally to the time elapsed since the last update. To achieve this, I have this at the start of the loop:
void logicLoop () {
    float finalTime = (float) clock() / CLOCKS_PER_SEC;
    float deltaTime = finalTime - initialTime;
    initialTime = finalTime;
    ...
    // Here I move things using the deltaTime value
    ...
}
The problem came when I added a bullet to the game. If the bullet does not hit any target within two seconds, it must be destroyed. So what I did was keep a reference to the moment the bullet was created, like this:
class Bullet: public GameObject {
    float birthday;
public:
    Bullet () {
        ...
        // Some initialization stuff
        ...
        birthday = (float) clock() / CLOCKS_PER_SEC;
    }
    float getBirthday () { return birthday; }
};
And then I added this to the logic, just after the finalTime and deltaTime measurement:
if (bullet != NULL) {
    if (finalTime - bullet->getBirthday() > 2) {
        world.remove(bullet);
        bullet = NULL;
    }
}
It looked fine, but when I ran the code, the bullet stayed alive much longer than that. Looking for the problem, I printed the value of (finalTime - bullet->getBirthday()) and saw that it increases really, really slowly, as if it were not a time measured in seconds.
Where is the problem? I thought the result would be in seconds, so the bullet would be removed after two seconds.
This is a common mistake. clock() does not measure the passage of actual time; it measures how much time has elapsed while the CPU was running this particular process.
Other processes also take CPU time, so the two clocks are not the same. Time during which your operating system is executing some other process's code, including while this one is "sleeping", does not count toward clock(). And if your program is multithreaded on a system with more than one CPU, clock() may "double count" time!
Humans have no knowledge or perception of OS time slices: we just perceive the actual passage of actual time (known as "wall time"). Ultimately, then, you will see clock()'s timebase being different from wall time.
Do not use clock() to measure wall time!
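To see the difference, here is a minimal sketch (standard C++11, not part of the original answer) that sleeps for one second: clock() barely advances, while a wall clock reports roughly a second.
#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main()
{
    std::clock_t cpuStart = std::clock();
    auto wallStart = std::chrono::steady_clock::now();

    // Sleeping uses almost no CPU, so it should not advance clock().
    std::this_thread::sleep_for(std::chrono::seconds(1));

    double cpuSeconds  = double(std::clock() - cpuStart) / CLOCKS_PER_SEC;
    double wallSeconds = std::chrono::duration<double>(std::chrono::steady_clock::now() - wallStart).count();

    // Expect something like: clock() saw ~0 s, wall clock saw ~1 s.
    std::cout << "clock() saw " << cpuSeconds << " s, wall clock saw " << wallSeconds << " s\n";
    return 0;
}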
You want something like gettimeofday() or clock_gettime() instead. To allay the effects of people changing the system time, on Linux I personally recommend clock_gettime() with the system's "monotonic clock": a clock that advances in step with wall time but has an arbitrary epoch, unaffected by people playing around with the computer's time settings. (Obviously switch to a portable alternative if need be.)
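As a rough sketch of how that could replace the measurement in the question (my adaptation, not the answerer's code; the nowSeconds helper is made up for illustration, and initialTime is widened to double for precision):
#include <time.h>

static double initialTime = 0.0;   // mirrors the question's global initialTime

static double nowSeconds () {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);   // monotonic: immune to system time changes
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void logicLoop () {
    double finalTime = nowSeconds();
    double deltaTime = finalTime - initialTime;   // now measured in wall-clock seconds
    initialTime = finalTime;
    // ... move things using deltaTime, check bullet lifetimes, etc.
}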
This is actually discussed on the cppreference.com page for clock():

std::clock time may advance faster or slower than the wall clock, depending on the execution resources given to the program by the operating system. For example, if the CPU is shared by other processes, std::clock time may advance slower than wall clock. On the other hand, if the current process is multithreaded and more than one execution core is available, std::clock time may advance faster than wall clock.
Please get into the habit of reading documentation for all the functions you use, when you are not sure what is going on.
Edit: It turns out GLUT itself has a function you can use for this, which is mighty convenient. glutGet(GLUT_ELAPSED_TIME) gives you the number of wall-clock milliseconds elapsed since your call to glutInit(). So I guess that's what you need here. It may also be slightly more performant, particularly if GLUT (or some other part of OpenGL) is already requesting wall time periodically and this function merely queries that already-obtained time, thus saving you from an unnecessary extra system call (which costs).
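So the question's loop could look roughly like this (my sketch, a fragment of logicLoop rather than a complete program; it assumes the Bullet constructor records birthday with the same glutGet call):
// At the start of logicLoop: milliseconds since glutInit(), converted to seconds.
float finalTime = glutGet(GLUT_ELAPSED_TIME) / 1000.0f;
float deltaTime = finalTime - initialTime;
initialTime = finalTime;

// Bullet lifetime check, unchanged except that birthday now comes from
// glutGet(GLUT_ELAPSED_TIME) / 1000.0f in the Bullet constructor.
if (bullet != NULL && finalTime - bullet->getBirthday() > 2.0f) {
    world.remove(bullet);
    bullet = NULL;
}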
If you are on Windows you can use QueryPerformanceFrequency / QueryPerformanceCounter, which give pretty accurate time measurements. Here's an example.
#include <Windows.h>
#include <iostream>

using namespace std;

int main()
{
    // Ticks per second of the high-resolution performance counter.
    LARGE_INTEGER freq = {0, 0};
    QueryPerformanceFrequency(&freq);

    LARGE_INTEGER startTime = {0, 0};
    QueryPerformanceCounter(&startTime);

    // Stuff to measure.
    for (size_t i = 0; i < 100; ++i) {
        cout << i << endl;
    }

    LARGE_INTEGER stopTime = {0, 0};
    QueryPerformanceCounter(&stopTime);

    // Elapsed wall time in seconds: tick difference divided by ticks per second.
    const double elapsed = ((double)stopTime.QuadPart - (double)startTime.QuadPart) / freq.QuadPart;
    cout << "Elapsed: " << elapsed << endl;
    return 0;
}