I use Talend to load data into a SQL Server database.
It appears that the weakest point of my job is not the data processing but the actual load into my database, which runs no faster than 17 rows/sec.
The funny thing is that I can launch 5 jobs at the same time, and they will all load at 17 rows/sec.
What could explain this slowness and how could I improve the speed?
Thanks
New information:
The transfer speed between my desktop and the server is about 1 MByte/s.
My job commits every 10,000 rows.
I use SQL Server 2008 R2.
And the schema I use for my jobs is like this:
Talend connects natively to MSSQL. When you are creating the database connection to Microsoft SQL, you can specify additional connection properties through the Additional parameters field in the Database Settings area. You can also create Generic ODBC and Microsoft SQL Server (ODBC) connections with Java 7.
Database INSERT OR UPDATE methods are incredibly costly, as the database cannot batch all of the commits and do them at once; it must process them line by line (ACID transactions force this, because if an insert were attempted and failed, all of the other records in the commit would also have to fail).
Instead, for large bulk operations it is always best to determine whether a record should be inserted or updated before passing the commit to the database, and then send 2 separate transactions to the database.
A typical job that needs this functionality assembles the data that is to be INSERTed or UPDATEd and then queries the database table for the existing primary keys. If the primary key already exists then you can send the record as an UPDATE, otherwise as an INSERT. The logic for this can easily be done in a tMap component.
In this job we have some data that we wish to INSERT OR UPDATE into a database table that contains some pre-existing data:
And we wish to add the following data to it:
The job works by throwing the new data into a tHashOutput component so it can be used multiple times in the same job (it simply puts it in memory or, in large instances, can cache it to the disk).
Following on from this, one lot of data is read out of a tHashInput component and directly into a tMap. Another tHashInput component is utilised to run a parameterised query against the table:
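(The screenshot of this query is not reproduced here, but as a rough sketch, a parameterised lookup query in Talend typically looks something like the line below. The table name destination, the key column id and the globalMap key "row2.id" are placeholders for this example; the exact names depend on how your own lookup is wired up in the tMap.)

"SELECT id FROM destination WHERE id = " + ((Integer) globalMap.get("row2.id"))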
You may find this guide to Talend and parameterised queries useful. From here the returned records (so only the ones already inside the database) are used as a lookup to the tMap.
This is then configured as an INNER JOIN to find the records that need to be UPDATEd, with the rejects from the INNER JOIN to be INSERTed:
These outputs then just flow to separate tMySQLOutput components to UPDATE or INSERT as necessary. And finally, when the main subjob is complete, we commit the changes.
I think that @ydaetskcoR's answer is perfect from a theoretical point of view (divide the rows that need an INSERT from those that need an UPDATE) and gives you a working ETL solution, useful for small datasets (some thousands of rows).
Performing the lookup to decide whether a row has to be updated or not is costly in ETL, as all the data has to go back and forth between the Talend machine and the DB server.
When you get to some hundreds of thousands or even millions of records, you have to move from ETL to ELT: you just load your data into some temp (staging) table, as suggested by @Balazs Gunics, and then use SQL to manipulate it.
In this case, after loading your data (INSERT only = fast, even faster using BULK LOAD components), you issue a LEFT OUTER JOIN between the temp table and the destination one to divide the rows that are already there (and need an UPDATE) from the others.
This query will give you the rows you need to insert:
SELECT staging.* FROM staging
LEFT OUTER JOIN destination ON (destination.PK = staging.PK)
WHERE destination.PK IS NULL
This other one gives you the rows you need to update:
SELECT staging.* FROM staging
LEFT OUTER JOIN destination ON (destination.PK = staging.PK)
WHERE destination.PK IS NOT NULL
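Once the rows are split this way, applying them is plain set-based SQL. The statements below are only a sketch: PK, col1 and col2 are placeholder column names you would replace with the real columns of your destination table.

INSERT INTO destination (PK, col1, col2)
SELECT staging.PK, staging.col1, staging.col2
FROM staging
LEFT OUTER JOIN destination ON (destination.PK = staging.PK)
WHERE destination.PK IS NULL

UPDATE destination
SET col1 = staging.col1,
    col2 = staging.col2
FROM destination
INNER JOIN staging ON (staging.PK = destination.PK)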
This will be orders of magnitude faster than ETL, BUT you will need SQL to operate on your data, while in ETL you can use Java, as ALL the data is brought to the Talend server. So it is common to have a first step on the local machine to pre-process the data in Java (to clean and validate it) and then push it to the DB, where you use joins to load it in the right way.
Here are the ELT job screenshots.
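As a side note, since the question mentions SQL Server 2008 R2: once the data is in the staging table, the same insert/update split can also be expressed as a single MERGE statement. Again this is only a sketch, with the same placeholder column names as above:

MERGE destination AS d
USING staging AS s
    ON d.PK = s.PK
WHEN MATCHED THEN
    UPDATE SET d.col1 = s.col1, d.col2 = s.col2
WHEN NOT MATCHED THEN
    INSERT (PK, col1, col2) VALUES (s.PK, s.col1, s.col2);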