I have a Postgres (9.x) database that I am writing a trigger for. I want the trigger to set the modification time and user id for a record. In Firebird you have a CONNECTIONID that you can use in a trigger, so you can add a value to a table when you connect to the database (this is a desktop application, so connections are persistent for the lifetime of the app), something like this:
UserId | ConnectionId
-------+-------------
   544 |         3775
and then, in the trigger, look up that connection id 3775 belongs to user id 544 and use 544 as the user that modified the record.
Is there anything similar I can use in Postgres?
The process ID of an active backend can be found from the pid column of the pg_stat_activity view, or by listing the postgres processes on the server (using ps on Unix or the Task Manager on Windows). The role of an active backend can be found from the usename column of the pg_stat_activity view.
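For instance, to see the current backends and the roles that own them (note that the pid column is named procpid in 9.1 and earlier):

SELECT pid, usename, backend_start FROM pg_stat_activity;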
While it may work for your desktop use case, note that process ID numbers do roll over (32768 is a common upper limit), so using one as a unique key to identify a user can run into problems. If you ever end up with leftover data from a previous session in the table that's tracking the user-to-process mapping, it can collide with a newer connection that gets assigned the same process id after a rollover. It may be sufficient for your app to just aggressively clean out old mapping entries, perhaps at startup time, given how you've described its operation.
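Such a cleanup might look like this (a sketch; the connection_users mapping table and its columns are hypothetical stand-ins for the table described in the question):

DELETE FROM connection_users cu
WHERE NOT EXISTS (
    SELECT 1 FROM pg_stat_activity a
    WHERE a.procpid = cu.connectionid  -- procpid is renamed to pid in 9.2
);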
To avoid this problem in general, you need to make a connection key that includes an additional bit of information, such as when the session started:
SELECT procpid,backend_start FROM pg_stat_activity WHERE procpid=pg_backend_pid();
That query has to iterate over all of the connections active at the time, so it does add a bit of overhead. Starting in PostgreSQL 8.4 it's possible to execute it a bit more efficiently:
SELECT procpid,backend_start FROM pg_stat_get_activity(pg_backend_pid());
But that only really matters if you have a large number of connections active at once.
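With that composite key, the mapping table from the question could be registered once per connection, along these lines (a sketch; the connection_users table is hypothetical, and procpid is the pre-9.2 name of the pid column):

CREATE TABLE connection_users (
    pid integer,
    backend_start timestamptz,
    user_id integer,
    PRIMARY KEY (pid, backend_start)
);

-- run once when the application connects:
INSERT INTO connection_users (pid, backend_start, user_id)
SELECT procpid, backend_start, 544
FROM pg_stat_activity
WHERE procpid = pg_backend_pid();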
You could use the process id. It can be retrieved with:
pg_backend_pid()
With this pid you can also query the pg_stat_activity view to get more information about the current backend, although you should already know everything about it, since it is your own backend.
Or better: just create a sequence, and retrieve one value from it for each connection:
CREATE SEQUENCE connectionids;
And then:
SELECT nextval('connectionids');
in each connection, to retrieve an id that is unique to that connection.
One way is to use the custom_variable_classes configuration option. It appears to be designed to allow the configuration of add-on modules, but it can also be used to store arbitrary values in the current database session. (Note that custom_variable_classes was removed in PostgreSQL 9.2; from 9.2 on, any two-part setting name such as local.userid can be used without declaring it first.)
Something along the lines of the following needs to be added to postgresql.conf:

custom_variable_classes = 'local'
When you first connect to the database you can store whatever information you require in the custom class, like so:
SET local.userid = 'foobar';
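Equivalently, you can call the set_config() function, which is easier to use with a bound parameter from application code; the final false argument makes the setting last for the rest of the session rather than only the current transaction:

SELECT set_config('local.userid', 'foobar', false);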
And later on you can retrieve this value with the current_setting() function:
SELECT current_setting('local.userid');
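Be aware that current_setting() raises an error if the setting has never been set in the current session; PostgreSQL 9.6 added a two-argument form that returns NULL instead:

SELECT current_setting('local.userid', true);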
Adding an entry to a log table might look something like this:
INSERT INTO audit_log VALUES (now(), current_setting('local.userid'), ...)
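Putting it together, the trigger the question asks about might look like the following (a minimal sketch; the target table and the modified_at/modified_by column names are hypothetical):

CREATE OR REPLACE FUNCTION stamp_modification() RETURNS trigger AS $$
BEGIN
    -- stamp the row with the time and the user id stored for this session
    NEW.modified_at := now();
    NEW.modified_by := current_setting('local.userid');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_modification_info
    BEFORE INSERT OR UPDATE ON some_table
    FOR EACH ROW
    EXECUTE PROCEDURE stamp_modification();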