When using libfaketime to alter the speed of time for a process, a timeout set by setTimeout expires according to the altered time when running under Linux, but according to the original system time when running under Mac OS.
In Mac OS:
DYLD_INSERT_LIBRARIES=src/libfaketime.1.dylib DYLD_FORCE_FLAT_NAMESPACE=y FAKETIME="@2020-12-24 00:00:00 x3600" node
> setTimeout(() => {console.log('hello');}, 3600 * 1000); // Takes an hour
In Linux:
LD_PRELOAD=src/libfaketime.1.so FAKETIME="@2020-12-24 00:00:00 x3600" node
> setTimeout(() => {console.log('hello');}, 3600 * 1000); // Takes a second
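To make the comparison measurable rather than eyeballed, the same one-liner can be saved as a script (demo.js is a hypothetical file name) and wrapped in time:

// demo.js: the same timer as in the REPL sessions above
setTimeout(() => { console.log('hello'); }, 3600 * 1000);

Running time env LD_PRELOAD=src/libfaketime.1.so FAKETIME="@2020-12-24 00:00:00 x3600" node demo.js reports roughly one second of real time on Linux; the DYLD variant above reports roughly an hour on Mac OS.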
While investigating this issue I noticed that libc's clock_gettime function is polled by node.js (libuv?) under Linux, but that this function is not called when running under Mac OS. (I added some printfs to the libfaketime functions.)
What is the difference in the implementation of node.js (libuv?) that causes this disparity in behavior between Mac OS and Linux, and why does this difference exist?
Another observation I made: when time is frozen using libfaketime, the behavior of setImmediate and setTimeout(cb, 0) differs under Linux, in that the callback is run when using setImmediate but not when using setTimeout(cb, 0).
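A minimal sketch of that observation (assuming, per the libfaketime documentation, that an absolute timestamp without a leading @ freezes the clock; frozen.js is a hypothetical file name):

// frozen.js: run with e.g.
//   LD_PRELOAD=src/libfaketime.1.so FAKETIME="2020-12-24 00:00:00" node frozen.js
setImmediate(() => console.log('setImmediate fired')); // prints
setTimeout(() => console.log('setTimeout(0) fired'), 0); // never prints; the process hangs on this timer

This is consistent with setImmediate callbacks being drained in the event loop's check phase regardless of any clock reading, while a timer only fires once the loop's cached monotonic time passes its due time; with a frozen clock, that due time (now + 1 ms, since Node clamps setTimeout(cb, 0) to 1 ms) is never reached.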
It's definitely a difference in libuv. Darwin does not support CLOCK_MONOTONIC*, so mach_absolute_time() must be called to get the current time. That call bypasses libfaketime, causing the client code to run in real time on OS X.
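One way to observe this from JavaScript is to compare the wall clock against the monotonic clock inside the faked process. This is a sketch under two assumptions: that Date.now() is served by gettimeofday()/clock_gettime(CLOCK_REALTIME), which libfaketime intercepts on both platforms, and that process.hrtime() is served by uv_hrtime(); clocks.js is a hypothetical file name:

// clocks.js: run with the FAKETIME="@2020-12-24 00:00:00 x3600" setup from the question
const t0 = Date.now(); // wall clock (faked on both platforms)
const h0 = process.hrtime.bigint(); // monotonic clock (uv_hrtime)

setTimeout(() => {
  const wall = (Date.now() - t0) / 1000; // elapsed faked wall-clock seconds
  const mono = Number(process.hrtime.bigint() - h0) / 1e9; // elapsed monotonic seconds
  // Linux: clock_gettime(CLOCK_MONOTONIC) is intercepted, so wall/mono ≈ 1.
  // OS X: mach_absolute_time() bypasses libfaketime, so wall/mono ≈ 3600.
  console.log({ wall, mono, ratio: wall / mono });
}, 1000);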