
Why is Time.utc slower in a forked process in Ruby on OS X (and not in Python)?

I saw the question Why does Process.fork make stuff slower in Ruby on OS X? and was able to determine that Process.fork does not, in general, make tasks slower.

However, it does seem to make Time.utc, in particular, much slower.

require 'benchmark'

# Construct the same UTC time 50,000 times so the per-call cost dominates.
def do_stuff
  50000.times { Time.utc(2016) }
end

puts "main: #{Benchmark.measure { do_stuff }}"

Process.fork do
  puts "fork: #{Benchmark.measure { do_stuff }}"
end

Process.wait # keep the parent alive until the child has printed

Here are some results:

main:   0.100000   0.000000   0.100000 (  0.103762)
fork:   0.530000   3.210000   3.740000 (  3.765203)

main:   0.100000   0.000000   0.100000 (  0.104218)
fork:   0.540000   3.280000   3.820000 (  3.858817)

main:   0.100000   0.000000   0.100000 (  0.102956)
fork:   0.520000   3.280000   3.800000 (  3.831084)

One clue might be that the above takes place on OS X, whereas on Ubuntu, there doesn't seem to be a difference:

main:   0.100000   0.070000   0.170000 (  0.166505)
fork:   0.090000   0.070000   0.160000 (  0.169578)

main:   0.090000   0.080000   0.170000 (  0.167889)
fork:   0.100000   0.060000   0.160000 (  0.169160)

main:   0.100000   0.070000   0.170000 (  0.170839)
fork:   0.100000   0.070000   0.170000 (  0.176146)

Can anyone explain this oddity?

Further investigation:

@tadman suggested that it might be a bug in the macOS / OS X time code, so I wrote a similar test in Python:

from timeit import timeit
from os import fork

# timeit runs the statement 1,000,000 times by default and returns total seconds.
print(timeit("datetime.datetime.utcnow()", setup="import datetime"))

if fork() == 0:
    # Child process: repeat the same benchmark after the fork.
    print(timeit("datetime.datetime.utcnow()", setup="import datetime"))

Again, on Ubuntu, the benchmarks are the same for the forked/main processes. On OS X, however, the forked process is now slightly faster than the main process, which is the opposite of the behavior in Ruby.

This leads me to believe that the source of the "fork penalty" is in the Ruby implementation and not in the OS X time implementation.

asked Dec 27 '16 by user513951

1 Answer

As it turns out, the slowdown is due, in approximately equal measure, to two function calls in Ruby's time.c, inside the function gmtime_with_leapsecond: tzset and localtime_r.
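Those two calls can also be timed directly from Ruby, bypassing Time entirely; a minimal sketch, assuming tzset can be resolved through Fiddle's default handle:

require 'fiddle'
require 'benchmark'

# Resolve libc's tzset in the current process; it takes no arguments and
# returns void.
tzset = Fiddle::Function.new(Fiddle::Handle::DEFAULT['tzset'], [], Fiddle::TYPE_VOID)

bench = -> { Benchmark.measure { 10000.times { tzset.call } } }

puts "main: #{bench.call}"
Process.fork { puts "fork: #{bench.call}" }
Process.wait

If the diagnosis above is correct, the fork line should be dramatically slower than the main line on OS X and indistinguishable from it on Ubuntu.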

That discovery led me to the question Why is tzset() a lot slower after forking on Mac OS X?, of which this question might reasonably be considered a duplicate.

There are two answers there, neither accepted, which point to root causes involving either

  • the "async-signal-safe"-ness of tzset and localtime/localtime_r, or
  • Apple's use of a passive notification registry that is invalidated when the process forks.

The slowdown occurs only in years with known leap seconds (as discovered by user that other guy), which makes sense: Ruby doesn't call gmtime_with_leapsecond at all when it knows that a year has no leap seconds.
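That pattern can be checked from the benchmark itself; a minimal sketch, with illustrative year choices (2016 lies within the era of known leap seconds, 2100 lies beyond any leap second known to Ruby):

require 'benchmark'

# If the explanation above is right, only the 2016 loop should pay the
# post-fork penalty on OS X; 2100 should stay fast in both processes.
Process.fork do
  [2016, 2100].each do |year|
    t = Benchmark.measure { 50000.times { Time.utc(year) } }
    puts "fork Time.utc(#{year}): #{t}"
  end
end
Process.wait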

I'm not sure why there is no such slowdown in Python. One possible explanation is that my test script using fork and utcnow simply never calls tzset or localtime/localtime_r in the child process.

answered by user513951