 

SQL*Loader stuck after loading 4.2 billion records

Tags:

sql-loader

We are stuck with a problem in SQL*Loader. We are trying to load a data file with around 4.6 billion rows (nearly 340 GB) into two Oracle tables, routing rows between them with WHEN conditions. But after loading 4.2 billion records, the SQL*Loader process completes without throwing any errors, even though the rest of the records are still to be loaded.

There are no discarded or bad records either. Is there a limit to the number of records SQL*Loader can load? I could not find any such limit documented anywhere. Please let me know if anyone has a clue about this issue.
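For reference, a simplified sketch of the kind of control file we are using (file, table, and column names here are placeholders, not the real ones):

    LOAD DATA
    INFILE 'bigfile.dat'
    APPEND
    INTO TABLE table_a
      WHEN (01:01) = 'A'
      FIELDS TERMINATED BY ',' TRAILING NULLCOLS
      (rec_type, col1, col2)
    INTO TABLE table_b
      WHEN (01:01) = 'B'
      FIELDS TERMINATED BY ',' TRAILING NULLCOLS
      -- POSITION(1) restarts field scanning at the beginning of the
      -- record for the second INTO TABLE clause
      (rec_type POSITION(1), col1, col2)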

Thanks!!

asked Aug 18 '11 by SRD


People also ask

Can one improve the performance of SQL*Loader?

Yes. For conventional data loads in particular, disabling indexes and constraints can greatly enhance the performance of SQL*Loader. A sketch of this pattern follows.
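For example, one common pattern around a large conventional-path load looks like this (table, constraint, and index names are hypothetical):

    -- Before the load: switch off per-row maintenance work
    ALTER TABLE big_table DISABLE CONSTRAINT big_table_fk;
    ALTER INDEX big_table_ix UNUSABLE;

    -- ... run sqlldr here (SKIP_UNUSABLE_INDEXES defaults to TRUE
    -- on recent Oracle versions, so the load can proceed) ...

    -- After the load: restore integrity checking and the index
    ALTER TABLE big_table ENABLE CONSTRAINT big_table_fk;
    ALTER INDEX big_table_ix REBUILD;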

Which are the two methods of loading data using SQL*Loader?

SQL*Loader provides the following methods to load data: conventional path loads, direct path loads, and external table loads.
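As an illustration, the first two methods differ only in a command-line option (credentials and file names below are placeholders):

    # Conventional path load
    sqlldr scott/tiger control=load.ctl log=load.log

    # Direct path load
    sqlldr scott/tiger control=load.ctl log=load.log direct=true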

How do I load multiple CSV files in SQL*Loader?

You can specify multiple input files in your control file. Provided they have the same record format, they can all go into the same table: list each file explicitly with its own INFILE clause, as in the sketch below.
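A minimal sketch of such a control file, assuming hypothetical file and table names:

    LOAD DATA
    INFILE 'file1.csv'
    INFILE 'file2.csv'
    INFILE 'file3.csv'
    APPEND
    INTO TABLE target_table
    FIELDS TERMINATED BY ','
    (col1, col2, col3)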

What is TRAILING NULLCOLS in SQL*Loader?

The TRAILING NULLCOLS clause tells SQL*Loader to treat any relatively positioned columns that are not present in the record as null columns. See Handling Short Records with Missing Data. The remainder of the control file contains the field list, which provides information about column formats in the table being loaded.
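A minimal sketch with hypothetical names: given the field list below, a short record such as "1,Alice" still loads, with email set to NULL instead of the record being rejected.

    LOAD DATA
    INFILE 'people.csv'
    INTO TABLE people
    FIELDS TERMINATED BY ',' TRAILING NULLCOLS
    (id, name, email)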


2 Answers

The value of 2³² − 1 is 4294967295, which is close to 4.2 billion.

So, from the description of your problem, my best guess is that sqlldr's default value for LOAD is actually 2³² − 1, i.e. that the row counter is stored as an unsigned 32-bit integer.

answered Sep 28 '22 by Benoit


I've tested Benoit's hypothesis that the internal LOAD counter is too small for more than 4.2 billion rows, and it holds true: when I cut my source file up into files of fewer than 4.2 billion rows each and loaded them with APPEND, the insert worked perfectly. A sketch of that workaround is below.
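Roughly, the workaround looked like this (paths, credentials, and the exact chunk size are placeholders; the control file must use APPEND so each chunk adds to what is already loaded):

    # Split the source file into chunks safely below the 2^32 - 1 row mark
    split -l 4000000000 bigfile.dat chunk_

    # Load each chunk; data= overrides the INFILE in the control file
    for f in chunk_*; do
      sqlldr scott/tiger control=load.ctl data="$f" log="$f".log
    done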

So, it's not a database limitation, as suggested here: https://forums.oracle.com/message/11201322 (also posted my conclusions there)

There's also BluShadow's suggestion on the Oracle forum to try external tables instead. I haven't tested that yet.
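For anyone who wants to try that route, an external-table load would look roughly like this (directory path, table, and column names are hypothetical):

    CREATE DIRECTORY load_dir AS '/data/loads';

    CREATE TABLE bigfile_ext (
      col1 NUMBER,
      col2 VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY load_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('bigfile.dat')
    )
    REJECT LIMIT UNLIMITED;

    -- Then load with a direct-path insert from the external table
    INSERT /*+ APPEND */ INTO target_table
    SELECT col1, col2 FROM bigfile_ext;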

answered Sep 28 '22 by Wouter