As detailed here and confirmed here, the default number of rows Oracle returns at a time when querying for data over JDBC is 10. I am working on an app that has to read and compare lots of data from our database. I thought that if we just increased defaultRowPrefetch to something like 1000, then surely our app would perform faster. As it turned out, it performed slower, by about 20%.
We then decided to slowly increase the number from 10 and see how it performed. We've seen about a 10% improvement by setting it somewhere between 100 and 200. I would never have guessed, however, that setting it higher would make our app perform more slowly. Any ideas why this might happen?
Thanks!
EDIT:
Just for clarification, I'm using Oracle 11g R2 and Java 6.
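For completeness, this is roughly how the prefetch can be set, both connection-wide and per statement. A minimal sketch; the credentials, URL, and table below are placeholders, not our real ones:

```java
import java.util.Properties;

public class FetchSizeConfig {
    // Build connection properties carrying the Oracle-specific
    // defaultRowPrefetch setting (the driver reads it at connect time;
    // 10 is the documented default).
    static Properties withPrefetch(int rows) {
        Properties props = new Properties();
        props.setProperty("user", "scott");       // placeholder credentials
        props.setProperty("password", "tiger");
        props.setProperty("defaultRowPrefetch", Integer.toString(rows));
        return props;
    }

    public static void main(String[] args) {
        Properties props = withPrefetch(100);
        System.out.println(props.getProperty("defaultRowPrefetch"));
        // A connection would then be opened with something like:
        //   Connection conn = DriverManager.getConnection(
        //           "jdbc:oracle:thin:@//dbhost:1521/service", props);
        // and an individual statement can still override the default:
        //   stmt.setFetchSize(500);
    }
}
```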
EDIT 2:
Okay, I want to restate my question to be clear, because judging from the answers below, I am not expressing myself properly:
How is it possible that setting a higher fetch size makes my app perform slower? To me, that sounds like saying, "We're giving you a faster internet connection, i.e. a fatter pipe, but your web browsing will be slower."
All other things being equal, as they have been in our tests, we're super curious about how our app could perform worse with only this one change.
Possible explanations:
- Java is doing nothing while Oracle computes the first 1000 rows instead of the first 10.
- Oracle is doing nothing while Java processes the last 1000 rows instead of the last 10.
- Communication protocols (e.g. TCP/IP) wait longer and then must handle more data at once, but the peak transfer rate is capped by hardware limits. This is countered by the protocol's per-round-trip overhead, so there should be an optimal fetch size, and anything less or more would be slower.
It would get worse if fetching is synchronous with the rest of the Java code, so that Java asks for more rows only after processing the previous batch, and Oracle does nothing in the meantime.
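To make that trade-off concrete, here is a toy cost model, not real JDBC. Each fetch pays a fixed round-trip latency, and each fetch also pays a per-batch buffer cost that I am assuming grows superlinearly with batch size (standing in for memory allocation and GC pressure); all the numbers are illustrative:

```java
public class FetchModel {
    // Total time to pull totalRows rows in batches of fetchSize:
    //   trips * (round-trip latency + batch buffer cost) + per-row processing.
    // The Math.pow(fetchSize, 1.5) term is an assumed superlinear
    // buffer/GC cost, not a measured one.
    static double totalMillis(int totalRows, int fetchSize,
                              double perTripLatencyMs, double perRowMs,
                              double allocCost) {
        int trips = (totalRows + fetchSize - 1) / fetchSize;
        double perTripBufferMs = allocCost * Math.pow(fetchSize, 1.5);
        return trips * (perTripLatencyMs + perTripBufferMs)
                + totalRows * perRowMs;
    }

    public static void main(String[] args) {
        // 10,000 rows, 1 ms per round trip, 0.01 ms per row, small alloc cost.
        int[] sizes = {10, 100, 1000};
        for (int i = 0; i < sizes.length; i++) {
            System.out.printf("fetchSize=%d -> %.1f ms%n",
                    sizes[i], totalMillis(10000, sizes[i], 1.0, 0.01, 0.001));
        }
    }
}
```

Under these made-up constants the model bottoms out around a fetch size of 100: below that, round-trip latency dominates; above it, the assumed per-batch cost dominates. That shape matches the observation that 100-200 beat both 10 and 1000.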
Imagine there are 3 people:
- the 1st folds A4 paper in half
- the 2nd carries stacks of folded paper from one room to another
- the 3rd cuts a shape out of the folded paper.
How big should the stacks be, if the 1st has to wait until the 2nd returns, and the 2nd has to wait until the 3rd finishes their job?
Stacks of 1000 will not be better than stacks of 10, I guess ;))
As with everything, there is no FAST=TRUE setting. While the JDBC default fetch size of 10 is not ideal for your situation, it is OK for a "typical" OLTP application, and it really isn't that bad in your case either, it seems. Apparently a large fetch size is not ideal for your situation either, but again, fetching 1000 at a time isn't that bad.
The other factor which you haven't mentioned is how WIDE the rows being pulled are. Consider that the chunk of data you pull from the database server across the network to the app server is sum(WIDTH * ROWS). If your rows are 5000 bytes wide and you're pulling 1000 at a time, then each fetch brings in 5 MB of data. In another case, perhaps your rows are "skinny" at only 100 bytes wide; then fetching 1000 of those shuttles only about 100 KB around.
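The arithmetic above is just width times batch size; a one-liner makes it explicit (the row widths are the example figures from the text, not measurements):

```java
public class FetchPayload {
    // Bytes moved per fetch = average row width in bytes * fetch size.
    static long bytesPerFetch(int rowWidthBytes, int fetchSize) {
        return (long) rowWidthBytes * fetchSize;
    }

    public static void main(String[] args) {
        System.out.println(bytesPerFetch(5000, 1000)); // "fat" rows: 5,000,000 bytes, about 5 MB per fetch
        System.out.println(bytesPerFetch(100, 1000));  // "skinny" rows: 100,000 bytes, about 100 KB per fetch
    }
}
```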
Because only YOU can know what the data coming back will look like, the recommendation is to set the fetch size system-wide for the "general" case, then adjust the oddball queries individually as needed.
In general, I too have found 100 to be a better setting for large data processes. That's not a recommendation, just an observation.