I have a Postgres table with a column C of type T. People will be using COPY to insert data into this table. Sometimes they try to insert a value for C that isn't of type T, but I have a Postgres function which can convert the value to T.
I'm trying to write a BEFORE INSERT trigger on the table which will call this function on the incoming data, so that I get no insert type errors. However, it doesn't appear to work: I'm still getting errors when trying to insert the data, even with the trigger in place.
Before I spend too much time investigating, I want to find out if this is possible. Can I use triggers in this way to change the type of incoming data?
I want this to run on PostgreSQL 9.3, but I have seen the same error and non-functioning trigger on PostgreSQL 9.5.
As Patrick stated, you have to specify a permissive target type so that Postgres validation doesn't reject the data before you get a chance to manipulate it.
Another way, without a second table, is to create a view on your base table that casts everything to varchar, and then have an INSTEAD OF trigger that populates the base table whenever an insert is attempted on the view.
For example, the table tab1 below has an integer column. The view v_tab1 exposes it as varchar instead, so any insert will work against the view. The INSTEAD OF trigger then checks whether the entered value is numeric and, if not, uses 0 instead.
create table tab1 (i1 int, v1 varchar);
create view v_tab1 as select cast(i1 as varchar) i1, v1 from tab1;
create or replace function v_tab1_insert_trgfun() returns trigger as
$$
declare
  safe_i1 int;
begin
  -- Only cast when the value is all digits; otherwise fall back to 0.
  if new.i1 ~ '^[0-9]+$' then
    safe_i1 := new.i1::int;
  else
    safe_i1 := 0;
  end if;
  insert into tab1 (i1, v1) values (safe_i1, new.v1);
  return new;
end;
$$
language plpgsql;
create trigger v_tab1_insert_trigger instead of insert on v_tab1
  for each row execute procedure v_tab1_insert_trgfun();
Now the inserts will work regardless of the value:
insert into v_tab1 values ('12','hello');
insert into v_tab1 values ('banana','world');
select * from tab1;
Giving:
|i1 |v1   |
+---+-----+
|12 |hello|
|0  |world|
Fiddle at: http://sqlfiddle.com/#!15/9af5ab/1
No, you cannot use this approach. The reason is that the backend populates a record with the values that are to be inserted into the table before any trigger fires; that record is the NEW parameter available in the trigger. So the type error is thrown even before the trigger fires.
The same applies to rules, incidentally, so Kevin's suggestion in his comment won't work.
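To see this concretely, here is a minimal sketch (the table, column, and function names are invented for illustration) showing that the trigger never gets a chance to run: the input value has to be coerced to the column's type to build NEW, and that is where the error occurs.

```sql
-- Hypothetical names; demonstrates why the BEFORE INSERT approach fails.
create table target (n int);

create or replace function fix_n() returns trigger as
$$
begin
  -- By the time this runs, new.n is already an int;
  -- a non-numeric input never gets this far.
  return new;
end;
$$ language plpgsql;

create trigger target_fix before insert on target
  for each row execute procedure fix_n();

insert into target values ('banana');
-- ERROR: invalid input syntax for integer: "banana"
-- The error comes from parsing the input value, before the trigger fires.
```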
Probably your best solution is to create a staging table with "permissive" column data types (such as text) and then put a BEFORE INSERT trigger on that table that casts all column values to their correct type before inserting them into the final table. If that second insertion is successful you can even RETURN NULL from the trigger function so the row doesn't go into the staging table (not sure, though, what COPY thinks about that...). The records that do end up in the staging table are the ones whose data wouldn't convert, and you can then deal with those rows manually.
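A minimal sketch of that staging-table approach, assuming a single text column and a plain ::int cast standing in for your conversion function (all names here are invented; the exception handler that keeps failed rows in staging is one possible way to realize the "deal with those rows manually" part):

```sql
-- Hypothetical names; the real version would call your conversion function.
create table staging (c text);    -- permissive type, COPY target
create table final_tab (c int);   -- strict target type

create or replace function staging_insert_trg() returns trigger as
$$
begin
  begin
    -- or: values (your_convert_fn(new.c))
    insert into final_tab (c) values (new.c::int);
  exception when others then
    return new;   -- conversion failed: keep the raw row in staging
  end;
  return null;    -- conversion succeeded: discard the staging row
end;
$$ language plpgsql;

create trigger staging_before_insert before insert on staging
  for each row execute procedure staging_insert_trg();

-- COPY then targets the staging table:
-- COPY staging FROM '/path/to/data.csv';
```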