I have several services which dump data to a database (Oracle) after processing different input file formats (XML, flat files, etc.). I was wondering if I could have them generate SQL statements instead, log them to a file system, and have a single SQL processor (something like Java Hibernate) process these SQL files and upload them to the DB. What's the fastest way to execute a huge set of SQL statements (spread over a file system and written by multiple writers) against an Oracle DB? I was considering partitioning the DB and batch updates, but I want to know the best practice here. This seems like a common problem and somebody must have faced and resolved it already. Thanks, Atanu
atanu, the worst thing to do is to generate huge lists of INSERT statements. If you want speed and you know the layout of your data, use external tables to load it into your Oracle database. This works much like SQL*Loader, except that you can access the data as a table: in the table definition your data fields are mapped to column names and data types. This is by far the fastest way to do bulk loads into your database.
See Managing External Tables for some documentation.
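For illustration, here is a minimal sketch of what an external table load might look like, assuming a directory object data_dir, a comma-separated source file orders.csv, and a target table orders (all names are placeholders for your own feed):

    -- Directory object pointing at the folder where the feed files land (assumed path)
    CREATE OR REPLACE DIRECTORY data_dir AS '/data/feeds';

    -- External table: the flat file is exposed as a queryable table
    CREATE TABLE orders_ext (
      order_id    NUMBER,
      customer_id NUMBER,
      amount      NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('orders.csv')
    )
    REJECT LIMIT UNLIMITED;

    -- Load the real table with one set-based, direct-path insert
    INSERT /*+ APPEND */ INTO orders SELECT * FROM orders_ext;
    COMMIT;

Because the whole load is a single INSERT ... SELECT, the database can use direct-path writes rather than executing one statement per row.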
What counts as best practice rather depends on your criteria for determining "best". In many places the approach taken is to use an ETL tool, perhaps Oracle Warehouse Builder, perhaps a third-party product. This need not be an expensive product: Pentaho offers Kettle in a free, "self-supported" community edition.
When it comes to rolling your own, I don't think Hibernate is the way to go, especially if your main concern is performance. I also think changing your feeds to generate SQL statements is an overly complicated solution. What is wrong with PL/SQL modules that read the files and execute the SQL natively?
Certainly when I have done things like this before it has been with PL/SQL. The trick is to separate the input reading layer from the data writing layer. This is because the files are likely to require a lot of bespoke coding whereas the writing stuff is often fairly generic (this obviously depends on the precise details of your application).
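As a rough, hedged sketch of that separation (the directory object FEED_DIR, the orders table, and the comma-separated layout are all assumptions for illustration):

    CREATE OR REPLACE PACKAGE feed_loader AS
      PROCEDURE load_orders_file (p_filename IN VARCHAR2);
    END feed_loader;
    /

    CREATE OR REPLACE PACKAGE BODY feed_loader AS

      -- Generic writing layer: takes parsed values and performs the insert.
      PROCEDURE write_order (
        p_order_id    IN NUMBER,
        p_customer_id IN NUMBER,
        p_amount      IN NUMBER
      ) IS
      BEGIN
        INSERT INTO orders (order_id, customer_id, amount)
        VALUES (p_order_id, p_customer_id, p_amount);
      END write_order;

      -- Feed-specific reading layer: parses one file format and delegates the writes.
      PROCEDURE load_orders_file (p_filename IN VARCHAR2) IS
        l_file UTL_FILE.FILE_TYPE;
        l_line VARCHAR2(4000);
      BEGIN
        l_file := UTL_FILE.FOPEN('FEED_DIR', p_filename, 'r');
        LOOP
          BEGIN
            UTL_FILE.GET_LINE(l_file, l_line);
          EXCEPTION
            WHEN NO_DATA_FOUND THEN EXIT;  -- end of file
          END;
          -- Assumed comma-separated layout: order_id,customer_id,amount
          write_order(
            p_order_id    => TO_NUMBER(REGEXP_SUBSTR(l_line, '[^,]+', 1, 1)),
            p_customer_id => TO_NUMBER(REGEXP_SUBSTR(l_line, '[^,]+', 1, 2)),
            p_amount      => TO_NUMBER(REGEXP_SUBSTR(l_line, '[^,]+', 1, 3))
          );
        END LOOP;
        UTL_FILE.FCLOSE(l_file);
        COMMIT;
      END load_orders_file;

    END feed_loader;
    /

A new feed format means a new reading procedure; the writing layer stays the same.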
A dynamic metadata-driven architecture is an attractive concept, particularly if your input structures are subject to a lot of variability. However such an approach can be difficult to debug and to tune. Code generation is an alternative technique.
When it comes to performance, look to use bulk processing as much as possible. This is the main reason to prefer PL/SQL over files of individual SQL statements. Find out more.
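In PL/SQL, bulk processing typically means BULK COLLECT to fetch rows in batches and FORALL to write them in batches. A hedged sketch, reusing the illustrative orders_ext and orders tables from above:

    DECLARE
      TYPE t_orders IS TABLE OF orders_ext%ROWTYPE;
      l_rows t_orders;
      CURSOR c_src IS SELECT * FROM orders_ext;
    BEGIN
      OPEN c_src;
      LOOP
        -- Fetch in batches of 1000 rows instead of row by row
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
        EXIT WHEN l_rows.COUNT = 0;
        -- One context switch per batch rather than one per row
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO orders VALUES l_rows(i);
        COMMIT;
      END LOOP;
      CLOSE c_src;
    END;
    /

Compared with running thousands of individual INSERT statements from a file, this keeps the SQL-to-PL/SQL round trips down to one per batch.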