I need to set a timestamp column as the primary key (in each case it must be unique; some statements could insert k records at a time) with default value current_timestamp. It is a kind of log file. To do this I should improve the timestamp resolution to microseconds (I do not think it is possible for PostgreSQL to write a million records in a second). This kind of precision is possible in PostgreSQL. See here. It is something like this:
CREATE TABLE upsistema
(
t timestamp(6) without time zone NOT NULL DEFAULT current_timestamp(6),
blabla bigint,
foo bigint,
CONSTRAINT "XIDS3w" PRIMARY KEY (t)
)
But it does not work. When I check with pgAdmin3, it always writes with millisecond precision. And of course, it is possible to write more records in the same millisecond. So, do I need to set some mysterious variable or something else in order to save with microsecond precision?
By default, PostgreSQL stores timestamps at the maximum possible precision.
If I were in your shoes, I might use a different default for the timestamp. The value of current_timestamp is the time the current transaction started. It doesn't change during execution of a transaction, no matter how many rows you insert, and no matter how long it takes. That means INSERT statements are liable to fail because of duplicate primary key values.
create table upsistema (
t timestamp without time zone primary key
default current_timestamp,
foo bigint
);
insert into upsistema (foo) values (42), (43), (44);
ERROR: duplicate key value violates unique constraint "upsistema_pkey"
Try clock_timestamp() as the default instead. This doesn't guarantee success, but it does make it more likely.
create table upsistema (
t timestamp without time zone primary key
default clock_timestamp(),
foo bigint
);
insert into upsistema (foo) values (42), (43), (44);
select * from upsistema;
             t              | foo
----------------------------+-----
 2014-11-19 19:17:23.369432 |  42
 2014-11-19 19:17:23.36958  |  43
 2014-11-19 19:17:23.369587 |  44
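You can see the difference between the two functions directly by calling both in one statement: current_timestamp stays fixed for the whole transaction, while clock_timestamp() advances with the wall clock, so repeated calls return distinct microsecond values.

select current_timestamp as txn_time,
       clock_timestamp() as wall_time_1,
       clock_timestamp() as wall_time_2;
-- txn_time is identical for every call in the transaction;
-- the two clock_timestamp() values typically differ by a few microseconds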
To absolutely guarantee success, either change the primary key constraint, or drop it altogether. Whether this makes sense is application-dependent, but I would not be surprised to find multiple clients logging data at the same microsecond.
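One way to change the primary key constraint while keeping the timestamp in it is a composite key with a sequence-backed tiebreaker. This is only a sketch; the extra column name id is an illustration, not anything from the original schema:

create table upsistema (
    t timestamp without time zone default clock_timestamp(),
    id bigserial,            -- tiebreaker: unique even within the same microsecond
    foo bigint,
    primary key (t, id)      -- rows can share t as long as id differs
);

With this layout, two rows logged in the same microsecond no longer violate the constraint, and the table is still ordered by time first.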
Your table uses timestamp without time zone, but current_timestamp and clock_timestamp() return timestamp with time zone. It might be a good idea to change the time zone to UTC for your session before you run SQL statements.
set time zone 'UTC';
select ... ;
and
set time zone 'UTC';
insert ... ;
If it makes sense to set the server to use UTC by default, you can set the timezone parameter in postgresql.conf to 'UTC'. Then you don't have to set the time zone in each session.
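If you would rather not edit postgresql.conf by hand, the same default can be set from SQL with ALTER SYSTEM (available since PostgreSQL 9.4), which writes the setting to postgresql.auto.conf:

ALTER SYSTEM SET timezone = 'UTC';   -- persists the setting for the whole server
SELECT pg_reload_conf();             -- reload so new sessions pick it up
SHOW timezone;                       -- verify the active setting

Either way, existing sessions keep their old time zone until they reconnect or run SET TIME ZONE themselves.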
As you saw in the documentation here, the resolution can be up to microseconds using the timestamp type. Here is an example that shows you that the microseconds are there:
CREATE TABLE table_test
(
column1 timestamp(6) without time zone,
column2 timestamp(6) without time zone
);
insert into table_test (column1,column2) values (current_timestamp, current_timestamp + interval '100 MICROSECONDS');
select extract(microseconds from column1 - column2) from table_test;
Result:
date_part
-----------
-100
(1 row)