Importing CSV data in Oracle (trying APEX/SQL Developer)

I initially asked this on Superuser, but somebody advised me to repost here.

I am using recent versions of APEX (4.1.1) and Oracle (11.2.0.3).

I am uploading CSV data to a set of tables. I have been trying it out and am encountering some problems which I haven’t seen before.

As an example, I tried to import data to this table:

CREATE TABLE SW_ENGINEER (
   ENGINEER_NO    VARCHAR2(10),
   ENGINEER_NAME  VARCHAR2(50),
   CONSTRAINT SW_ENG_PK PRIMARY KEY (ENGINEER_NO)
);

It fails on this simplified data subset:

call log no,contract_no,call date,agreed date,agreed time,actual arrive,engineer no,engineer name,equipment code,cust desc,eng desc
a,b,c,22-Mar-06,1,23/03/2006 15:00,654,Flynn Hobbs,d,e,f
a,b,c,22-Mar-06,2,23/03/2006 15:00,654,Flynn Hobbs,d,e,f
a,b,c,19-Mar-06,3,19/03/2006 09:15,351,Rory Juarez,d,e,f

(It fails the same way on a larger data set.) I can't load it with either APEX or SQL Developer, as follows:

  • APEX: In the APEX Data Workshop, I choose import text, load existing table, comma-separated, select the table and the file, and tick the 'header row' checkbox; the column-mapping form then shows the right data. I set all columns except engineer_no and engineer_name to 'No' and click Load Data.

This appears to work and displays a summary row in the Text Data Load Repository, but on inspection it has loaded 0 rows, with 79 failed. Clicking the 79 shows that they all failed with "ORA-01008: not all variables bound". The selected columns look OK to me, so I'm wondering whether it is still including some of the others in spite of the 'No' settings?

The problem doesn't occur if I edit the csv to remove irrelevant columns before uploading, but it would be a lot easier if I could use column mapping.
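That manual column-stripping can be scripted rather than done by hand; here is a minimal sketch using Python's csv module, with the sample rows from above inlined (in practice they would be read from the uploaded file, and the output file name is a placeholder):

```python
import csv

# Sample rows from the question; in practice these would come from the
# uploaded CSV file rather than being inlined.
header = ["call log no", "contract_no", "call date", "agreed date",
          "agreed time", "actual arrive", "engineer no", "engineer name",
          "equipment code", "cust desc", "eng desc"]
rows = [
    ["a", "b", "c", "22-Mar-06", "1", "23/03/2006 15:00",
     "654", "Flynn Hobbs", "d", "e", "f"],
    ["a", "b", "c", "19-Mar-06", "3", "19/03/2006 09:15",
     "351", "Rory Juarez", "d", "e", "f"],
]

# Keep only the two columns SW_ENGINEER needs.
wanted = ["engineer no", "engineer name"]
idx = [header.index(c) for c in wanted]
filtered = [wanted] + [[r[i] for i in idx] for r in rows]

# Write a stripped-down CSV that can be loaded without column mapping.
with open("engineers.csv", "w", newline="") as dst:
    csv.writer(dst).writerows(filtered)
```

The stripped file then loads through the same APEX wizard with every remaining column mapped.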

  • SQL Developer: Using the import data wizard in SQL Developer, I find that it won't progress to column mapping unless Header Row is checked (the button is enabled but does nothing - why not?), but if it is checked (correctly, in this case), then the Verify step fails because "Table Column Engineer_no is not big enough ...".

I found that this can be overcome by removing the header row from the CSV file. So it appears that ticking the Header Row checkbox does not actually make it ignore the header row at all, and it is the header value 'engineer no' (11 characters, which exceeds VARCHAR2(10)) that is causing the error. That doesn't seem right to me. I've used both approaches without a problem in the past, and I wonder whether the recent upgrade has anything to do with this.
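For what it's worth, SQL*Loader sidesteps both issues, because the control file states explicitly which row to skip and which fields to discard. A sketch of such a control file (the input file name and the FILLER field names are assumptions based on the CSV header above):

```
OPTIONS (SKIP=1)                 -- skip the header row
LOAD DATA
INFILE 'calls.csv'
INTO TABLE sw_engineer
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  call_log_no    FILLER,         -- FILLER fields are read but not loaded
  contract_no    FILLER,
  call_date      FILLER,
  agreed_date    FILLER,
  agreed_time    FILLER,
  actual_arrive  FILLER,
  engineer_no,
  engineer_name,
  equipment_code FILLER,
  cust_desc      FILLER,
  eng_desc       FILLER
)
```

This requires database-server or client access rather than the APEX UI, so it is a workaround, not an answer to why the wizards misbehave.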

Any ideas? Or am I missing something obvious?

asked Mar 30 '12 by boisvert

1 Answer

A colleague of mine found this (not entirely satisfying) workaround:

If the columns you require are both at the beginning of the file, it works, like this:

engineer no,engineer name,equipment code,cust desc,eng desc,call log no,contract_no,call date,agreed date,agreed time,actual arrive
654,Flynn Hobbs,d,e,f,a,b,c,22-Mar-06,1,23/03/2006 15:00
654,Flynn Hobbs,d,e,f,a,b,c,22-Mar-06,2,23/03/2006 15:00
351,Rory Juarez,d,e,f,a,b,c,19-Mar-06,3,19/03/2006 09:15

So it looks like a bug in the Yes/No column-include functionality of the APEX text import?
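The reordering itself can be scripted; a minimal sketch in Python, using the column names from the question (here the remaining columns keep their original relative order, which differs slightly from the hand-edited example above but loads the same way):

```python
# Move the columns to be loaded to the front of each row, keeping
# the rest. Shown on one in-memory row; real use would loop over
# the CSV file.
header = ["call log no", "contract_no", "call date", "agreed date",
          "agreed time", "actual arrive", "engineer no", "engineer name",
          "equipment code", "cust desc", "eng desc"]
row = ["a", "b", "c", "22-Mar-06", "1", "23/03/2006 15:00",
       "654", "Flynn Hobbs", "d", "e", "f"]

first = ["engineer no", "engineer name"]
# Indices of the wanted columns, then indices of everything else.
order = ([header.index(c) for c in first]
         + [i for i, c in enumerate(header) if c not in first])

new_header = [header[i] for i in order]
new_row = [row[i] for i in order]
```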

answered Oct 15 '22 by boisvert