I have a comma delimited text file with the structure:
field1,field2,field3,field4
1,2,3,4
I wrote the following script to bulk insert the text file, but I wanted to leave out column 3
create table test (field1 varchar(50), field2 varchar(50), field4 varchar(50))
go
bulk insert test
from 'c:\myFilePath'
with
(
    fieldterminator = ',',
    rowterminator = '\n'
)
The insert worked fine, but the results made field4 look like "field3,field4"; field3 was simply concatenated onto field4, since the third table column reads everything up to the row terminator. The flat files I'm working with are several gigs and can't easily be modified. Is there a way to use BULK INSERT but have it ignore the columns that aren't declared in the CREATE TABLE statement?
Use BULK INSERT with a view. If you have to skip any column other than the last column, you must create a view of the target table that contains only the columns contained in the data file. Then you can bulk import data from that file into the view, as in the sketch below.
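As a minimal sketch of that technique (all object names here are hypothetical, not from the question): the table carries an extra column that the data file lacks, and the load goes through a view that exposes only the file's columns.
create table staging (
    field1 varchar(50),
    field2 varchar(50),
    field4 varchar(50),
    load_date datetime null -- hypothetical column that is NOT in the data file
)
go
-- The view exposes only the columns the file contains
create view vwStagingLoad as
select field1, field2, field4 from staging
go
-- Import through the view; load_date is left NULL for every row
bulk insert vwStagingLoad
from 'c:\myFilePath'
with (fieldterminator = ',', rowterminator = '\n')
Note that this skips a table column the file does not contain; to skip a field that is present in the file, as in the question, you still need a format file (see below).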
BCP is faster in most cases than BULK INSERT.
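For example, a rough command-line equivalent of the question's load (the server and database names are placeholders; -c, -t, and -r select character mode with the same terminators):
bcp myDatabase.dbo.test in "c:\myFilePath" -S myServer -T -c -t, -r \n
The same column-count caveat applies, though: to skip a field with bcp you would pass a format file via -f, as described in the next answer.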
You need to use a 'format file' with BULK INSERT. Essentially, you will teach the bulk insert that there are potentially different delimiters for each field, and that a field can be skipped entirely. Create a text file called "level_2.fmt" and save it.
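For the question's four-field file, the format file could look like the sketch below (non-XML format; the version number on the first line, 12.0 here, is an assumption to adjust to your SQL Server version, and the collation shown is just a common default). Setting the server column order of the third field to 0 tells BULK INSERT to read that field but discard it:
12.0
4
1   SQLCHAR   0   50   ","    1   field1   SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   50   ","    2   field2   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   50   ","    0   field3   SQL_Latin1_General_CP1_CI_AS
4   SQLCHAR   0   50   "\n"   3   field4   SQL_Latin1_General_CP1_CI_AS
Then point the original statement at it:
bulk insert test
from 'c:\myFilePath'
with (formatfile = 'c:\level_2.fmt')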
Try using the LASTROW option of the BULK INSERT command to specify the last row number to read. So if you have 100 rows, you can specify LASTROW = 99.
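Applied to the question's table and path, that looks like:
bulk insert test
from 'c:\myFilePath'
with (fieldterminator = ',', rowterminator = '\n', lastrow = 99)
Note that LASTROW limits which rows are read; it does not change how fields map to columns.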
The easiest way is to create a view that has just the columns you require.
Then bulk insert into that view.
Example:
create table people (name varchar(20) not null, dob date null, sex char(1) null)
go
--If you are importing only name from a list of names in names.txt
create view vwNames as
select name from people
go
bulk insert vwNames
from 'names.txt'