PHP on pageload, MySQL, or cronjob - which to use for checking if time has passed?

I'm making a checklist where items can be clicked again after a period of time. Each user (potentially up to 1 million, but more likely 10,000 - 100,000) will have up to 200 items on their checklist (probably split into chunks of fewer than 20 across different ajax tabs), all resetting after different intervals of time - some at 2 min 30 sec, some at 1 hour, some at 20 hours - plus the tricky ones that reset at a specific time rather than after an interval (I'm thinking cronjob for the specific-time items).

My database rows will look something like:

---------------------------------------------
| UserID |  D1  |  D2  |  D3  |  D4  | D150 |
---------------------------------------------
| 345    | time | time | time | time | time |
| 7294   | time | time | time | time | time |
| 2385   | time | time | time | time | time |
---------------------------------------------

I plan on saving the reset times with something like:

mysql_query ("INSERT INTO checklists (D1) 
VALUES ((SYSDATE() + INTERVAL 20 HOUR))") 
or die (mysql_error()); 

I thought using SYSDATE() would be better than NOW(), since I read that NOW() returns the time the statement started executing rather than the time the function is actually evaluated, so if the statement is delayed by locked rows or something, NOW() wouldn't be accurate enough for this (unless I have it backwards?). Info on this here: https://coderwall.com/p/eslwuw/why-not-to-use-now-in-mysql. Accuracy to the millisecond doesn't matter here, but accuracy to the second does.
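For example, these two statements show the difference, using SLEEP() to stretch out the execution:

SELECT NOW(), SLEEP(2), NOW();          -- both NOW() values match the statement's start time
SELECT SYSDATE(), SLEEP(2), SYSDATE();  -- the two SYSDATE() values differ by about 2 seconds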

So, after I have the reset time saved into the database with the above code, what is the most efficient way to show an accurate up-to-date checklist on the page?

Do I use SELECT * FROM checklists WHERE D1 < NOW() on pageload (with the UserID added to limit the search), some kind of PHP script on pageload, or a cronjob running several times a minute (I doubt this last one is an appropriate method, but figured I should include it anyway)?

Which method of checking is better for having fast page loading times? Which will put more stress on the server?

Would it be better to have 100 different tables with the lists split into chunks to match tab contents like:

-----------------    -----------------    -----------------         
| UserID |  D1  |    | UserID |  D2  |    | UserID |  D10 |        
-----------------    -----------------    -----------------         
| 345    | time |    | 345    | time |    | 345    | time |
| 7294   | time |    | 7294   | time |    | 7294   | time |
| 2385   | time |    | 2385   | time |    | 2385   | time |
-----------------    -----------------    -----------------

More info:
The user's page will have tabs with 10-20 of the checklist items on each tab.
The user will click a button to show they finished a task, which is when the reset time will be added to the database.
When they reload the tab it will show if any of the checklist items are ready to be clicked again.

asked Feb 11 '23 by Kat Cox
1 Answer

"When they reload the tab it will show if any of the checklist items are ready to be clicked again." -- Let's start by improving on that. Let's get rid of the reload tab. Each checklist item's "remaining time" can be loaded onto the page upon page load. A fairly simple JavaScript function can wake up every second, run through the items (even if it is 200 items), check which ones have "timed out", and change the item from Red to Green (or however you want to indicate that "It's time now!"). Meanwhile, each item could even have a countdown timer displayed on it. Note, also, that pushing the burden onto the users' browsers takes a big load off your server.

Once the user clicks the item, then you go back to the server, which goes back into MySQL to reset that one timer.

So, back to the database design.

Plan A: One row per user; 200 columns, one per item. UPDATE tbl SET item123 = ... WHERE user_id = 9876; However, you would have to 'construct' the SQL because of needing to fabricate the column name: item123.

Plan B: One row per user per item. UPDATE tbl SET next = ... WHERE user_id = 9876 AND item_num = 123.

Either plan would be 'efficient'; it should be easy to handle more than 5K updates per minute. Plan B would take more disk space.

But, there is another query to worry about: Loading the page. That, as I understand it, involves: Given a user_id, fetch the 200 (or just 20?) timers for that user.

SELECT * FROM tbl WHERE user_id = 9876;

Plan A (as defined above): That SELECT would fetch one wide row.

Plan B: That SELECT would fetch 200 (or 20) rows.

Still, both are "efficient", with one proviso:

Plan A's table needs PRIMARY KEY(user_id)

Plan B's table needs PRIMARY KEY(user_id, item_num)
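For concreteness, the two layouts might look something like this (the column names and types are assumptions, not a prescription):

-- Plan A: one wide row per user
CREATE TABLE checklists (
    user_id INT UNSIGNED NOT NULL,
    D1 DATETIME NULL,
    D2 DATETIME NULL,
    -- ... one DATETIME column per item, up to D200 ...
    PRIMARY KEY (user_id)
) ENGINE=InnoDB;

-- Plan B: one narrow row per user per item
CREATE TABLE checklist_items (
    user_id INT UNSIGNED NOT NULL,
    item_num SMALLINT UNSIGNED NOT NULL,
    next DATETIME NOT NULL,             -- when the item becomes clickable again
    PRIMARY KEY (user_id, item_num)
) ENGINE=InnoDB;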

Keep in mind that a cronjob cannot reach out to a web page that is already sitting in a user's browser. Hence, this design is "turned around": the browser handles the countdowns, and the server is only involved when the user clicks or reloads.

Now for some numbers. If you have 1000 users "online" at any given time, and they click an "item" on average once a minute, that's 1K UPDATEs plus 1K SELECTs per minute to rebuild the reloaded pages. 2K/minute is within the 5K I mentioned, but close enough to it that spikes in traffic could matter. So, there may need to be some extra care in how things are implemented. We can get to that if these numbers make sense.

Edits

Since not all users will have all items, let's discuss the space taken (or not taken) by unneeded items.

  • Plan A: There is a small overhead for each NULL column.
  • Plan B: You don't even need to store the unused rows; a user has at most 200 rows, and only for the items they have actually used. Hence, no 'wasted' space. (See the upsert sketch below.)
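A sketch of the per-click write under Plan B, against the assumed checklist_items layout above; the row is created the first time the user clicks the item, so untouched items never occupy space:

INSERT INTO checklist_items (user_id, item_num, next)
VALUES (9876, 123, NOW() + INTERVAL 20 HOUR)
ON DUPLICATE KEY UPDATE next = VALUES(next);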

What about increasing the "200"?

  • Plan A: Downtime to do ALTER TABLE. This is a big drawback to Plan A.
  • Plan B: No schema change needed.

RAM size and Dataset size?

If everything can be cached in RAM, then the only I/O is for writing the transaction log (InnoDB) and for eventually persisting the data to disk. Even if only the active users' rows fit in RAM (cached from login until logout), that comment about I/O is still approximately correct.

If you have more active users than can effectively be cached, then the process becomes I/O-bound, and you won't be able to keep up. This could be solved by (a) more RAM (and increasing innodb_buffer_pool_size) or (b) "sharding" -- spreading users across multiple machines.

1GB VM means innodb_buffer_pool_size should be only 100M, but that might be big enough to handle your projected active load. (As you say, the numbers are fuzzy.)
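That would be something like this in the server configuration (the value is illustrative for a 1GB VM):

# /etc/my.cnf
[mysqld]
innodb_buffer_pool_size = 100M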

Multiple databases?

One table, not PARTITIONed, in one database. There is no advantage (as far as I can see) in splitting up by database, table, or PARTITION. Sharding (mentioned above) may be useful if you grow a lot. Still, I would beef up the one server first, with two hardware improvements: more RAM, and RAID striping with a write cache. (Neither is needed yet; it would be better to get metrics on active users, click rate, etc., before deciding when/if to beef up the hardware.)

200 connections limit?

Is that max_connections = 200? Or max_user_connections? Are you using a cloud service that won't let you increase that?

Suggest you disconnect promptly rather than hanging onto the connection, which the user won't hit again for another minute anyway. You can enforce that by setting wait_timeout (and interactive_timeout, which applies to interactive clients) to, say, 10 seconds.
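For instance (these are global settings that only affect connections opened after the change):

SET GLOBAL wait_timeout = 10;           -- drop idle connections after 10 seconds
SET GLOBAL interactive_timeout = 10;    -- same, for clients connected in interactive mode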

Let's try to fix the problem at this level, before resorting to adding "instances".

answered Feb 13 '23 by Rick James