I've noticed the following behavior.
I have a file of about 3 MB containing several thousand rows. I split each row and create prepared statements from it (about 250,000 statements in total).
What I do is:
prepareStatement
addBatch (per row)
every 200 rows {
    executeBatch
    clearBatch()
}
at the end:
commit()
The memory usage climbs to around 70 MB, though without an out-of-memory error. Is it possible to get the memory usage down while keeping the transactional behavior (if one insert fails, everything fails)?
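In JDBC terms, the flow described above looks roughly like the sketch below. The connection URL, table and column names, and the row format are made up for illustration; only the batch/commit pattern matters.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        // Connection URL and table are placeholders so the sketch is self-contained.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE target (col1 VARCHAR, col2 VARCHAR)"); // demo table
            }
            conn.setAutoCommit(false);                             // keep everything in one transaction

            String sql = "INSERT INTO target (col1, col2) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int count = 0;
                for (String line : new String[] {"a;b", "c;d"}) {  // stands in for the 3 MB file
                    String[] parts = line.split(";");              // split each row into columns
                    ps.setString(1, parts[0]);
                    ps.setString(2, parts[1]);
                    ps.addBatch();
                    if (++count % 200 == 0) {                      // flush every 200 rows
                        ps.executeBatch();
                        ps.clearBatch();
                    }
                }
                ps.executeBatch();                                 // flush the remainder
                conn.commit();                                     // single commit at the end
            } catch (Exception e) {
                conn.rollback();                                   // all-or-nothing
                throw e;
            }
        }
    }
}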
I was able to lower the memory usage by committing together with each executeBatch and clearBatch, but that can leave a partial insert of the total set if a later batch fails.
You could insert all the rows into a temp table with the same structure and, if everything is fine, let the database insert them into the target table using:

insert into target (select * from temp)

In case the import into the temp table fails, you haven't changed anything in your target table.
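A minimal JDBC sketch of that idea; the connection URL, table and column names, and input data are made up (a real import would read the actual file and use the existing target table).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class TempTableLoad {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "")) {
            try (Statement st = conn.createStatement()) {
                // Demo tables so the sketch runs on its own; in practice target already exists.
                st.execute("CREATE TABLE target (col1 VARCHAR, col2 VARCHAR)");
                st.execute("CREATE TABLE temp_target (col1 VARCHAR, col2 VARCHAR)");
            }

            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO temp_target (col1, col2) VALUES (?, ?)")) {
                int count = 0;
                for (String line : new String[] {"a;b", "c;d"}) {  // stands in for the input file
                    String[] parts = line.split(";");
                    ps.setString(1, parts[0]);
                    ps.setString(2, parts[1]);
                    ps.addBatch();
                    if (++count % 200 == 0) {
                        ps.executeBatch();
                        ps.clearBatch();
                        conn.commit();     // safe here: nothing has touched the real target yet
                    }
                }
                ps.executeBatch();
                conn.commit();

                try (Statement st = conn.createStatement()) {
                    // Single set-based copy; the database makes this step atomic.
                    st.executeUpdate("INSERT INTO target SELECT * FROM temp_target");
                    st.executeUpdate("DELETE FROM temp_target");
                }
                conn.commit();
            } catch (Exception e) {
                conn.rollback();           // target stays untouched; at worst temp_target holds
                throw e;                   // partial rows that can simply be deleted
            }
        }
    }
}

Because nothing depends on the temp table's contents, it can be committed in small batches to keep memory low; the only step that has to be atomic is the single INSERT ... SELECT into the target table.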
EDIT: fixed syntax