My API allows users to buy certain unique items, where each item can only be sold to one user. So when multiple users try to buy the same item, one user should get the response ok and the other should get the response too_late.
Now, there seems to be a bug in my code: a race condition. If two users try to buy the same item at the same time, they both get the answer ok. The issue is clearly reproducible in production. I have written a simple test that tries to reproduce it via RSpec:
context "when I try to provoke a race condition" do
# ...
before do
@concurrent_requests = 2.times.map do
Thread.new do
Thread.current[:answer] = post "/api/v1/item/buy.json", :id => item.id
end
end
@answers = @concurrent_requests.map do |th|
th.join
th[:answer].body
end
end
it "should only sell the item to one user" do
@answers.sort.should == ["ok", "too_late"].sort
end
end
It seems like the test does not execute the requests at the same time. To check this, I put the following code into my controller action:
puts "Is it concurrent?"
sleep 0.2
puts "Oh Noez."
If the requests were concurrent, the expected output would be:
Is it concurrent?
Is it concurrent?
Oh Noez.
Oh Noez.
However, I get the following output:
Is it concurrent?
Oh Noez.
Is it concurrent?
Oh Noez.
This tells me that the Capybara requests are not run at the same time, but one after another. How do I make my Capybara requests concurrent?
You can't make concurrent Capybara requests. However, you can create multiple Capybara sessions and use them within the same test to simulate concurrent users.
user_1 = Capybara::Session.new(:webkit) # or whatever driver
user_2 = Capybara::Session.new(:webkit)
user_1.visit 'some/page'
user_2.visit 'some/page'
# ... more tests ...
user_1.click_on 'Buy'
user_2.click_on 'Buy'
Multithreading and Capybara do not work together, because Capybara uses a separate server thread which handles connections sequentially. But if you fork, it works.
I am using exit codes as an inter-process communication mechanism. If you need to pass back more complex data, you may want to use a pipe or socket instead (see the sketch after the example below).
This is my quick and dirty hack:
before do
  @concurrent_requests = 2.times.map do
    fork do
      # ActiveRecord explodes when you do not re-establish the sockets
      ActiveRecord::Base.connection.reconnect!

      answer = post "/api/v1/item/buy.json", :id => item.id

      # Calling exit! instead of exit so we do not invoke any of RSpec's
      # `at_exit` handlers, which clean up, measure code coverage and
      # make things explode.
      case JSON.parse(answer.body)["status"]
      when "ok"
        exit! 128
      when "too_late"
        exit! 129
      end
    end
  end

  # Wait for the two requests to finish and get the exit codes.
  @exitcodes = @concurrent_requests.map do |pid|
    Process.waitpid(pid)
    $?.exitstatus
  end

  # Also reconnect in the main process, just in case things go wrong...
  ActiveRecord::Base.connection.reconnect!

  # And reload the item that has been modified by the separate process,
  # for use in later `it` blocks.
  item.reload
end

it "should only accept one of two concurrent requests" do
  @exitcodes.sort.should == [128, 129]
end
I use rather exotic exit codes like 128 and 129 because processes exit with code 0 if the case block is not reached and with code 1 if an exception occurs. Neither should happen, so by using higher codes I notice when things go wrong.
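If you need more than a pass/fail signal from each child process, a pipe is a simple step up from exit codes. The following is a minimal sketch of the same test using IO.pipe, assuming the same /api/v1/item/buy.json endpoint and the same item and post helpers as above:

before do
  @concurrent_requests = 2.times.map do
    reader, writer = IO.pipe

    pid = fork do
      # Child: close the unused read end and re-establish the DB connection.
      reader.close
      ActiveRecord::Base.connection.reconnect!

      answer = post "/api/v1/item/buy.json", :id => item.id

      # Send the raw response body back to the parent through the pipe.
      writer.write(answer.body)
      writer.close

      # Again, exit! so we do not trigger RSpec's at_exit handlers.
      exit! 0
    end

    # Parent: close the unused write end and keep the pid and read end.
    writer.close
    [pid, reader]
  end

  @bodies = @concurrent_requests.map do |pid, reader|
    Process.waitpid(pid)
    body = reader.read
    reader.close
    body
  end

  ActiveRecord::Base.connection.reconnect!
  item.reload
end

it "should only accept one of two concurrent requests" do
  statuses = @bodies.map { |body| JSON.parse(body)["status"] }
  statuses.sort.should == ["ok", "too_late"]
end

This way the assertion runs against the actual response bodies instead of a numeric code, which makes failures easier to read.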