We are using OCCI to access Oracle 12 from a C++ process. One of the operations has to ensure that the client picks the latest data in the database and operates according to the latest value. The statement is
std::string sqlStmt = "SELECT REF(a) FROM O_RECORD a WHERE G_ID= :1 AND P_STATUS IN (:2, :3) FOR UPDATE OF PL_STATUS";
(we are using TYPES). For some reason this command did not go through and the database table is locked. All other operations are waiting for the first thread to finish; however, that thread was killed and we have reached a dead end.
What is the optimal solution to avoid this catastrophic scenario? Can I set a timeout on the statement so that I can be 100% sure a thread operates on the "select for update" for a maximum of, say, 10 seconds? In other words, the thread of execution may lock the database table/row, but for no longer than a predefined time.
Is this possible?
There is a session parameter ddl_lock_timeout, but there is no dml_lock_timeout, so you cannot go that way. Either you have to use
SELECT REF(a)
FROM O_RECORD a
WHERE G_ID= :1 AND P_STATUS IN (:2, :3)
FOR UPDATE OF PL_STATUS SKIP LOCKED
and modify the application logic accordingly. Or you can implement your own interruption mechanism: fire a parallel thread and, after some time, call OCIBreak. This is a documented and supported solution, and calling OCIBreak is thread safe. The blocked SELECT ... FOR UPDATE statement will be interrupted and you will get the error ORA-01013: user requested cancel of current operation, which you will have to handle at the OCCI level.
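A minimal sketch of such a watchdog, assuming error checking is trimmed and that env/conn are the OCCI Environment and Connection the blocked statement runs on; it reaches the raw OCI handles through the documented accessors Environment::getOCIEnvironment() and Connection::getOCIServiceContext():

#include <occi.h>
#include <oci.h>
#include <chrono>
#include <thread>

// Watchdog: after 'timeout' has elapsed, interrupt whatever OCI call is
// currently executing on 'conn'. The thread blocked in that call then
// receives ORA-01013 as an oracle::occi::SQLException.
void breakAfter(oracle::occi::Environment *env,
                oracle::occi::Connection *conn,
                std::chrono::seconds timeout)
{
    std::this_thread::sleep_for(timeout);

    OCIError *errhp = nullptr;
    OCIHandleAlloc(env->getOCIEnvironment(),
                   reinterpret_cast<void **>(&errhp),
                   OCI_HTYPE_ERROR, 0, nullptr);
    OCIBreak(conn->getOCIServiceContext(), errhp);   // thread safe, documented
    OCIHandleFree(errhp, OCI_HTYPE_ERROR);
}

// In the worker thread:
//   std::thread watchdog(breakAfter, env, conn, std::chrono::seconds(10));
//   watchdog.detach();
//   try {
//       rs = stmt->executeQuery();        // the SELECT ... FOR UPDATE
//   } catch (oracle::occi::SQLException &e) {
//       if (e.getErrorCode() == 1013)     // ORA-01013: user requested cancel
//           ;                             // treat as a lock timeout
//       else
//           throw;
//   }

In production code you would also cancel the watchdog once the statement completes normally, so that a late OCIBreak cannot interrupt an unrelated call on the same connection.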
Edit: added the Resource Manager, which can impose an even more precise limitation, focused just on those sessions that are blocking others...
The Resource Manager allows the definition of more complex policies than those available through profiles and, in your case, is more suitable than the latter.
You have to define a plan and the groups of users associated with the plan, specify the policies associated with the plan/groups, and finally attach the users to the groups. To get an idea of how to do this, you can reuse this example @support.oracle.com (it is a bit too long to be posted here), replacing MAX_IDLE_TIME with MAX_IDLE_BLOCKER_TIME.
The core line would be
dbms_resource_manager.create_plan_directive(
  plan                  => 'TEST_PLAN',
  group_or_subplan      => 'my_limited_throttled_group',
  comment               => 'Limit blocking idle time to 300 seconds',
  max_idle_blocker_time => 300);
You can limit the inactivity period of those sessions by specifying an IDLE_TIME.
CREATE PROFILE:
If a user exceeds the CONNECT_TIME or IDLE_TIME session resource limit, then the database rolls back the current transaction and ends the session. When the user process next issues a call, the database returns an error.
To do so, specify a profile with a maximum idle time and apply it to just the relevant users (so you won't affect all users or applications):
CREATE PROFILE o_record_consumer
  LIMIT IDLE_TIME 2; -- 2 minutes timeout

ALTER USER the_record_consumer PROFILE o_record_consumer;
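On the OCCI side, the symptom of a session ended by this profile is simply that the next round trip fails; a hedged sketch of detecting that (2396 is the error code usually reported for an exceeded IDLE_TIME, and how to reconnect and retry is left to the caller):

#include <occi.h>

// Hedged sketch: a call on a session that was ended for exceeding IDLE_TIME
// fails on the next round trip, usually with ORA-02396 ("exceeded maximum
// idle time, please connect again"). Returns true if the session is gone and
// the caller should open a fresh Connection and redo its unit of work.
bool sessionExpiredOnExecute(oracle::occi::Statement *stmt)
{
    try {
        stmt->executeUpdate();
        return false;
    } catch (oracle::occi::SQLException &e) {
        if (e.getErrorCode() == 2396)
            return true;        // session was ended by the profile
        throw;                  // any other error is a real failure
    }
}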
The drawback is that this setting is session-wide, so if the same session needs to be able to stay idle during other operations, this policy will be enforced anyway.
You may already know that the other sessions can coordinate their access to the same record in several ways:

- FOR UPDATE WAIT x: if you append the WAIT x clause to your select for update statement, the waiting session will give up after "x" seconds have elapsed. (The integer "x" must be hardcoded there, for instance the value "3"; a variable won't do, at least in Oracle 11gR2.) See the OCCI sketch at the end of this answer.
- SKIP LOCKED: if you append the SKIP LOCKED clause to your select for update statement, the select won't return the records that are locked (as ibre5041 already pointed out).

Finally, it may be that you are attempting to implement a homemade queue infrastructure. In this case, bear in mind that Oracle already has its own queuing mechanism, called Advanced Queuing, and you may get a lot with very little by simply using it; see ORACLE-BASE - Oracle Advanced Queuing.