 

Stop a request programmatically

At work we every now and then have requests that take so long to return that by the time they finish, the frontend (nginx) has already killed the connection, so the user never sees the output (whether it is good or bad).

Worse, the balancer (haproxy) also kills the connection and then assumes that the server is free to handle another request, which means that while the server is still handling the old request a new one comes in and fights with it for resources.

Ideally servers should handle only one request at a time, to reuse the connection thread to the ZEO database as much as possible, so having two requests running at the same time makes the server even slower, and then one of our monitoring systems rightfully restarts Plone altogether because the dummy probe it sends times out.

So, given some logic (maybe reusing Products.LongRequestLogger, which we already use), is there a way to tell a thread that is processing a request to stop doing so?
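For reference, the only low-level mechanism I am aware of for this in CPython is asynchronously raising an exception in another thread via the C API. A minimal sketch, and best-effort only: the exception is delivered between Python bytecodes and is ignored while the thread is blocked in C code, e.g. waiting on ZEO I/O:

```python
import ctypes

def async_raise(thread, exc_type):
    """Ask the interpreter to raise exc_type inside the given thread."""
    tid = ctypes.c_long(thread.ident)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        tid, ctypes.py_object(exc_type))
    if res == 0:
        raise ValueError("invalid thread id")
    if res > 1:
        # More than one thread state was affected: undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
        raise SystemError(
            "PyThreadState_SetAsyncExc affected %d threads" % res)

# e.g. async_raise(worker_thread, KeyboardInterrupt) from a watchdog
# that noticed a request exceeding the nginx/haproxy timeout.
```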

gforcada asked Nov 14 '16 at 10:11


1 Answer

IMHO it is a bad idea to abort a request manually. You would somehow interfere with the conflict resolution, which is not really good behavior.

I'm running some large Plone sites with 200-400 authors publishing/modifying 1,000-3,000 objects a day. Usually the load is spread across the day, so even longer requests get processed in a reasonable amount of time.

In the evening, for example, even long requests (30s-60s) do fine; there is no reason to abort them.

In Plone we have some classic long requests, like renaming/moving a large tree, changing permissions, or copying a lot of objects. Then usually a conflict happens somewhere in the catalog and aborts the transaction after 3 retries.
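To illustrate what those 3 retries cost, here is a sketch with plain ZODB (in Zope the publisher runs this retry loop for you; rename_large_tree is a made-up stand-in for the long action):

```python
import transaction
from ZODB.POSException import ConflictError

for attempt in range(3):
    try:
        rename_large_tree()   # hypothetical long-running action
        transaction.commit()  # conflicts are detected at commit time
        break
    except ConflictError:
        transaction.abort()   # throw away *all* the work and start over
else:
    raise ConflictError("giving up after 3 retries")
```

Each retry repeats the whole request from scratch, so a long request that keeps conflicting can occupy its thread for several times its nominal duration.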

By aborting long requests you simply remove features from Plone. You may instead consider adding a condition to the rename/move/copy actions, so that they are no longer offered if you have, for example, 1000 objects in a container (see the sketch below).
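A minimal sketch of such a guard, assuming a folderish context; MAX_OBJECTS and can_move are made-up names for illustration:

```python
MAX_OBJECTS = 1000

def can_move(folder):
    """Offer the move/rename/copy actions only for reasonably small folders."""
    try:
        return len(folder.objectIds()) <= MAX_OBJECTS
    except AttributeError:
        # Not a folderish object: nothing large to worry about.
        return True
```

Wired into the action's condition expression, this keeps the known pathological cases out of users' reach.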

What I tried/did so far:

  • Make long requests shorter (haha, easily said but hard to achieve :-)) --> For example, check out this package's copy/move patch: it no longer does a full uncatalog/catalog for renaming and moving; instead it updates just the necessary indexes. We achieved a lot with this one. (See the first sketch after this list.)

  • Queues: I used, for example, redis to queue and handle known long actions. Of course you need to know in advance which requests are potentially long, but I guess you already know this in your environment. You may notify the user by email, or with some kind of flash message, once the request is done. (See the second sketch after this list.)

  • Keep the catalog as small as possible and delegate everything else to Solr/Elasticsearch (removing SearchableText alone gives you a lot...). (See the third sketch after this list.)

  • Hardware: I know it sounds silly, but this is often a quick win. Try to fit at least all catalog objects into RAM, and invest a few $ in a fast CPU/SSD (general I/O). This is not the way I like, but it happens, and in the year 2016 it can buy you some time to solve the long-requests problem.
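First sketch, for the "update just the necessary indexes" idea: a rough illustration using the standard CMFCatalogAware reindex API; exactly which indexes a move touches depends on your catalog, so treat the index names here as an assumption:

```python
def lightweight_move_reindex(obj):
    # Instead of uncatalog_object()/catalog_object(), which rebuilds
    # everything including expensive indexes like SearchableText,
    # reindex only what a rename/move really changes.
    obj.reindexObject(idxs=["path", "id", "getId"])
```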
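Second sketch, for the queueing idea, assuming the redis Python package and a separate worker process; the queue name and job format are made up:

```python
import json
import redis

r = redis.Redis()

def enqueue_long_action(action, path, email):
    # Instead of running a known-long action inside the request thread,
    # push it onto a queue and return to the user immediately.
    r.rpush("plone:long-actions", json.dumps(
        {"action": action, "path": path, "notify": email}))

def worker():
    # Runs in its own process, outside any request/balancer timeout.
    while True:
        _key, raw = r.blpop("plone:long-actions")
        job = json.loads(raw)
        # ... perform job["action"] on job["path"],
        # then e-mail job["notify"] that it is done.
```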
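Third sketch, for slimming down the catalog, assuming something like collective.solr or elasticsearch already answers your full-text queries:

```python
from Products.CMFCore.utils import getToolByName

catalog = getToolByName(portal, "portal_catalog")  # `portal` is your site root
if "SearchableText" in catalog.indexes():
    catalog.delIndex("SearchableText")  # the single most expensive index
```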

The Future:

  • You probably saw Jim Fulton's "The ZODB" talk at PloneConf 2016. If conflict resolution could be handled on the ZEO client, where you have the real object instead of only its state, it may become possible to implement better conflict resolution. (See the sketch below.)
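For context, this is roughly what ZODB's conflict-resolution hook looks like today: it runs server-side with only the raw object states as dicts, not live objects. The Counter class is a made-up example:

```python
import persistent

class Counter(persistent.Persistent):
    def __init__(self):
        self.value = 0

    def _p_resolveConflict(self, old_state, saved_state, new_state):
        # Called with __getstate__() dicts: merge two concurrent
        # increments instead of raising a ConflictError.
        resolved = dict(new_state)
        resolved["value"] = (saved_state["value"] + new_state["value"]
                             - old_state["value"])
        return resolved
```

Running this on the client with full objects would allow far richer merges, which is the point of the talk.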

Ehhh... at first I only meant to write a comment, but I exceeded the character limit ;-)

Mathias answered Nov 19 '22 at 02:11