 

Fastest way to insert a million rows in Oracle

How can I insert more than a million rows in Oracle in an optimal way using the following procedure? It hangs if I increase the FOR loop to a million rows.

create or replace procedure inst_prc1 as
  xssn    number;
  xcount  number;
  l_start number;
  l_end   number;
  -- current maximum SSN in the table
  cursor c1 is select max(ssn) s1 from dtr_debtors1;
begin
  l_start := dbms_utility.get_time;
  for i in 1..10000 loop
    -- re-opens the cursor and inserts a single row on every iteration
    for c1_rec in c1 loop
      insert into dtr_debtors1(ssn) values (c1_rec.s1 + 1);
    end loop;
  end loop;
  commit;
  l_end := dbms_utility.get_time;
  dbms_output.put_line('The procedure start time is ' || l_start);
  dbms_output.put_line('The procedure end time is ' || l_end);
end inst_prc1;
asked Aug 24 '13 by user1016594

People also ask

How can I speed up bulk insert in Oracle?

To optimize insert speed, combine many small operations into a single large operation. Ideally, you make a single connection, send the data for many new rows at once, and delay all index updates and consistency checking until the very end.
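
A minimal PL/SQL sketch of that idea, using a FORALL bulk bind so the staged rows reach the SQL engine in a single operation (the table name dtr_debtors1 is taken from the question; the batch size is an arbitrary choice):

declare
  type t_ssn_tab is table of dtr_debtors1.ssn%type;
  l_ssns t_ssn_tab := t_ssn_tab();
begin
  for i in 1 .. 100000 loop
    l_ssns.extend;
    l_ssns(l_ssns.count) := i;          -- stage the rows in memory first
  end loop;
  forall i in 1 .. l_ssns.count         -- one round trip to the SQL engine
    insert into dtr_debtors1 (ssn) values (l_ssns(i));
  commit;                               -- commit once, at the very end
end;
/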

How do I add a million records in SQL?

First, write a procedure that inserts a single row. Then generate some random data, with the NVARCHAR(MAX) column set to a string of 1000 characters, and use a loop that calls the procedure once per row, as sketched below.
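
A hedged PL/SQL rendering of that row-by-row test harness (the procedure name ins_one_row, the table test_rows, and its columns are hypothetical; the original describes SQL Server's NVARCHAR(MAX), approximated here with a padded VARCHAR2):

create or replace procedure ins_one_row(p_id in number) as
begin
  -- one insert per call: this is the slow row-by-row pattern being timed
  insert into test_rows (id, payload)            -- hypothetical table
  values (p_id, rpad('x', 1000, 'x'));           -- 1000-character filler string
end ins_one_row;
/

begin
  for i in 1 .. 1000000 loop
    ins_one_row(i);   -- one call, one statement, one context switch per row
  end loop;
  commit;
end;
/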

How can I insert a large volume of data in Oracle?

An INSERT INTO ... SELECT is the best way to bulk-load. The fastest approach is to disable the indexes (mark them unusable) and do the load in a SINGLE insert: insert /*+ append */ into TARGET select COLS from SOURCE; commit; then rebuild the indexes UNRECOVERABLE (and maybe even in parallel), as in the sketch below.
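
Spelled out as a sketch (TARGET, SOURCE, and the index name are placeholders; NOLOGGING is the current spelling of the old UNRECOVERABLE keyword):

alter index target_ix unusable;                  -- skip index maintenance during the load
alter session set skip_unusable_indexes = true;

insert /*+ append */ into target                 -- direct-path insert
select cols from source;
commit;

alter index target_ix rebuild nologging;         -- rebuild once, at the end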


1 Answer

Your approach will lead to memory issues. The fastest way is this [query edited after David's comment to take care of the null scenario]:

insert into dtr_debtors1(SSN)
select a.S1 + level
  from dual, (select nvl(max(ssn), 0) S1 from dtr_debtors1) a
connect by level <= 10000;

An insert ... select is the fastest approach because everything stays in RAM. The query can become slow if it spills into the global temp area, but then that needs DB tuning. I don't think anything can be faster than this.
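
The same generator scaled up to a million rows; note that the /*+ append */ direct-path hint is an addition here, not part of the answer above:

insert /*+ append */ into dtr_debtors1(SSN)
select a.S1 + level
  from dual, (select nvl(max(ssn), 0) S1 from dtr_debtors1) a
connect by level <= 1000000;
commit;   -- a direct-path insert must be committed before the table is queried again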

A few more details on memory use by the query:

Each query has its own PGA [Program Global Area], which is basically the RAM available to that query. If this area is not sufficient to return the query results, the SQL engine starts using the global temp tablespace, which behaves like hard disk, and the query starts becoming slow. If the data needed by the query is so huge that even the temp area is not sufficient, you will get a temp tablespace error.

So always design the query so that it stays in the PGA; otherwise it's a red flag.
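
One way to watch PGA pressure while the query runs (this assumes access to the v$ views; the two statistic names are standard v$pgastat entries):

select name, value
  from v$pgastat
 where name in ('total PGA allocated', 'total PGA inuse');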

answered Nov 07 '22 by Lokesh