 

Critical section in PHP between all requests

I am making a website in PHP in which I want to sell some charge-numbers. For each request, I have to perform these operations atomically:

  • Ask the database for an available charge-number
  • Mark the charge-number as sold
  • Increase a counter (in a file or database) that tracks how many clients have received a charge-number

I know how to do these operations individually, and if it helps, I am using MySQL. But my problem is how to make these operations atomic across all requests. I mean, how can I force the web server (Apache) and the PHP interpreter to run this part one request at a time, rather than in parallel?

P.S.: Please make your answers a PHP solution, not a database-related solution.

asked Dec 22 '22 by Saeed

2 Answers

There's a downside to using a PHP-oriented solution: you can only guarantee it on a single machine. You can certainly lock the critical region down to a single process, but only on a single machine. If you had two frontend Apache/PHP servers and one backend MySQL server, this solution would fail. A MySQL transaction is by far the better solution.

Still, assuming only one machine runs this code, it's possible with either the solution Jon posted (using a file as a lock) or, on a Linux/Unix server, with IPC: create a System V semaphore with a maximum acquire count of 1 (a mutex).

# in your script's setup/init phase:
define('MUTEX_KEY', 123456); # the key identifying your unique semaphore
# max_acquire = 1 makes this a mutex; auto_release = 1 releases it
# automatically if the owning process dies
$mutex = sem_get(MUTEX_KEY, 1, 0666, 1);
# later on, you reach the critical section:
# sem_acquire will block until the mutex has become available
sem_acquire($mutex);
# queries here ...
sem_release($mutex);
# now that sem_release has been called, the next process that was blocked
# on the sem_acquire call may enter the critical region

Although the file-based solution is more portable (it works on Windows servers), the mutex/sem_* solution is faster and safer: thanks to the auto-release, if the application crashes inside the critical region it won't block all further requests.

Cheers

answered Dec 24 '22 by smassey


All of these operations are really database operations, so you don't actually need to make PHP run the code in a critical section; it's enough to serialize the operations at the database.

The simplest way to do that would be to LOCK TABLES ... WRITE while you perform these operations; this guarantees that only one script talks to the database at a time.

Another approach would be to SET TRANSACTION ISOLATION LEVEL SERIALIZABLE and run all operations in a transaction with autocommit turned off (you should use a transaction to ensure data integrity anyway, even if you decide to go with the table lock).
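To make the transactional approach concrete, here is a minimal sketch using PDO with SELECT ... FOR UPDATE to row-lock an available charge-number. The table names (`charge_numbers`, `stats`) and their columns are assumptions for illustration; adapt them to your actual schema.

```php
<?php
// Sketch only: assumes a `charge_numbers` table with `number` and `sold`
// columns, and a `stats` table with a `sold_count` column.
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // FOR UPDATE row-locks the selected charge-number, so a concurrent
    // request cannot grab the same one until this transaction ends.
    $stmt = $pdo->query(
        "SELECT number FROM charge_numbers WHERE sold = 0 LIMIT 1 FOR UPDATE"
    );
    $number = $stmt->fetchColumn();
    if ($number === false) {
        throw new RuntimeException('No charge-numbers left');
    }

    $upd = $pdo->prepare("UPDATE charge_numbers SET sold = 1 WHERE number = ?");
    $upd->execute([$number]);

    $pdo->exec("UPDATE stats SET sold_count = sold_count + 1");

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack(); // undo everything on any failure
    throw $e;
}
```

All three steps commit or roll back together, so the operations are atomic even with multiple PHP frontends talking to one MySQL server.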

Update: If you absolutely must do this in PHP, then you can achieve the goal using flock:

$fp = fopen('sync_file', 'r+'); // any file all requests can open works as the guard
if (flock($fp, LOCK_EX)) {      // block until we hold the exclusive lock
    // Perform database ops here
    flock($fp, LOCK_UN);        // release the lock
}
else {
    die("Couldn't get the lock!");
}
fclose($fp);

flock in exclusive mode prevents any other process from locking the guard file, and thus lets your scripts execute strictly serially, but please read the giant red warnings on the man page!

answered Dec 24 '22 by Jon