I have a fairly standard SQL Query as follows:
TRUNCATE TABLE TABLE_NAME;
INSERT INTO TABLE_NAME
(
UPRN,
SAO_START_NUMBER,
SAO_START_SUFFIX,
SAO_END_NUMBER,
SAO_END_SUFFIX,
SAO_TEXT,
PAO_START_NUMBER,
PAO_START_SUFFIX,
PAO_END_NUMBER,
PAO_END_SUFFIX,
PAO_TEXT,
STREET_DESCRIPTOR,
TOWN_NAME,
POSTCODE,
XY_COORD,
EASTING,
NORTHING,
ADDRESS
)
SELECT
BASIC_LAND_AND_PROPERTY_UNIT.UPRN,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_NUMBER AS SAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_START_SUFFIX AS SAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_NUMBER AS SAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.SAO_END_SUFFIX AS SAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.SAO_TEXT AS SAO_TEXT,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_NUMBER AS PAO_START_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_START_SUFFIX AS PAO_START_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_NUMBER AS PAO_END_NUMBER,
LAND_AND_PROPERTY_IDENTIFIER.PAO_END_SUFFIX AS PAO_END_SUFFIX,
LAND_AND_PROPERTY_IDENTIFIER.PAO_TEXT AS PAO_TEXT,
STREET_DESCRIPTOR.STREET_DESCRIPTOR AS STREET_DESCRIPTOR,
STREET_DESCRIPTOR.TOWN_NAME AS TOWN_NAME,
LAND_AND_PROPERTY_IDENTIFIER.POSTCODE AS POSTCODE,
BASIC_LAND_AND_PROPERTY_UNIT.GEOMETRY AS XY_COORD,
BASIC_LAND_AND_PROPERTY_UNIT.X_COORDINATE AS EASTING,
BASIC_LAND_AND_PROPERTY_UNIT.Y_COORDINATE AS NORTHING,
decode(SAO_START_NUMBER,null,null,SAO_START_NUMBER||SAO_START_SUFFIX||' ')
||decode(SAO_END_NUMBER,null,null,SAO_END_NUMBER||SAO_END_SUFFIX||' ')
||decode(SAO_TEXT,null,null,SAO_TEXT||' ')
||decode(PAO_START_NUMBER,null,null,PAO_START_NUMBER||PAO_START_SUFFIX||' ')
||decode(PAO_END_NUMBER,null,null,PAO_END_NUMBER||PAO_END_SUFFIX||' ')
||decode(PAO_TEXT,null,null,'STREET RECORD',null,PAO_TEXT||' ')
||decode(STREET_DESCRIPTOR,null,null,STREET_DESCRIPTOR||' ')
||decode(POST_TOWN,null,null,POST_TOWN||' ')
||Decode(Postcode,Null,Null,Postcode) As Address
From (Land_And_Property_Identifier
Inner Join Basic_Land_And_Property_Unit
On Land_And_Property_Identifier.Uprn = Basic_Land_And_Property_Unit.Uprn)
Inner Join Street_Descriptor
On Land_And_Property_Identifier.Usrn = Street_Descriptor.Usrn
Where Land_And_Property_Identifier.Postally_Addressable='Y';
If I run this query in SQL Developer, it runs fine, with 1.8 million features inserted (select count(*) from TABLE_NAME within the session confirms this).
But when I run the commit, the data disappears! select count(*) from TABLE_NAME now returns 0 results.
We've done a number of things to try and see what's going on:
- During the truncate, tablespace is freed up, and during the insert it's filled again. There is no change during the commit. This implies the data is in the database (see the space-check sketch just after this list).
- If I do the exact same query but with and rownum < 100 appended to the end, the commit works. Same with 1000.
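For reference, the space check was a query along these lines; this is a sketch rather than the exact statement we ran, and the segment name is simply the table name in upper case:
select segment_name,
       round(bytes / 1024 / 1024) as size_mb
from   user_segments
where  segment_name = 'TABLE_NAME';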
I found this question: oracle commit kills, and had our DBA try the "SQL Trace". This produced a >4GB trace file which, when parsed with TKPROF, gave a 120-page report, but we don't know how to read it and there's nothing obviously wrong in there.
Our error logs have nothing in them. And obviously no error during the commit itself.
There's a trigger/sequence which does increment by 1.8 million during the process.
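To be clear about what I mean by trigger/sequence, it's the usual row-level pattern sketched below. The object names and the column it populates are hypothetical placeholders (the real trigger isn't reproduced in this post), but the point is that each inserted row takes one value from the sequence, which is why a jump of 1.8 million matches the 1.8 million inserts.
create or replace trigger some_trigger   -- hypothetical name
before insert on some_table              -- placeholder table, shown only to illustrate the pattern
for each row
begin
  -- one sequence value per inserted row
  select some_seq.nextval into :new.some_id from dual;
end;
/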
I've repeated this about 4 times now, but the result is always the same.
So my question is simple - what's happening to the data during the commit? How can we find out? Thanks.
Note: This has run fine in the past, so I don't believe there's anything wrong with the SQL per se.
Edit: Issue resolved by recreating the table from scratch. Now when I insert, it only takes 500 seconds compared to the previous 2000. And committing is instantaneous; when it was broken, the commit took 4000 seconds! I still have no idea why it happened, though.
For those asking, the Create Table syntax:
CREATE TABLE TABLE_NAME
(
ADDRESS VARCHAR2(4000),
UPRN NUMBER(12),
SAO_START_NUMBER NUMBER(4),
SAO_START_SUFFIX VARCHAR2(1),
SAO_END_NUMBER NUMBER(4),
SAO_END_SUFFIX VARCHAR2(1),
SAO_TEXT VARCHAR2(90),
PAO_START_NUMBER NUMBER(4),
PAO_START_SUFFIX VARCHAR2(1),
PAO_END_NUMBER NUMBER(4),
PAO_END_SUFFIX VARCHAR2(1),
PAO_TEXT VARCHAR2(90),
STREET_DESCRIPTOR VARCHAR2(100),
TOWN_NAME VARCHAR2(30),
POSTCODE VARCHAR2(8),
XY_COORD MDSYS.SDO_GEOMETRY,
EASTING NUMBER(7),
NORTHING NUMBER(7)
);
CREATE INDEX TABLE_NAME_ADD_IDX ON TABLE_NAME (ADDRESS);
Do you still lose the data if you wrap the transaction in an anonymous block?
My guess is that you are opening two SQL windows in SQL Developer, which means two separate sessions, i.e. running SQL code in window 1 and doing commit; in window 2 will not commit changes done in window 1.
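A quick way to confirm this (a sketch; SYS_CONTEXT('USERENV', 'SID') returns the current session's identifier): run the query below in each window and compare the results. Different values mean different sessions, and a commit in one session will never publish uncommitted work from another.
select sys_context('userenv', 'sid') as my_sid from dual;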
Truncate table does an implicit commit. So the table will be empty until insert + commit finishes.
begin
execute immediate 'truncate table table_name reuse storage'; --use "reuse" if you know the data will be of similar size
-- implicit commit has occurred and the table is empty for all sessions
insert into table_name (lots)
select lots from table2;
commit;
end;
You should use truncate with reuse storage, so that the database doesn't go and free all the blocks just to acquire the same number of blocks again during the insert.
If you want/need to have the data available at all times, a better (but slower, since a delete has to generate undo for every row while a truncate does not) method is:
begin
savepoint letsgo;
delete from table_name;   -- slower than truncate, but other sessions keep seeing the old rows until the commit
insert into table_name (lots)
select lots from table2;
commit;
exception
when others then
rollback to letsgo;   -- restores the old rows
raise;                -- re-raise so the failure is not silently swallowed
end;