 

How does Hibernate's batch-fetching algorithm work?

I found this description of the batch-fetching algorithm in "Manning - Java Persistence with Hibernate":

What is the real batch-fetching algorithm? (...) Imagine a batch size of 20 and a total number of 119 uninitialized proxies that have to be loaded in batches. At startup time, Hibernate reads the mapping metadata and creates 11 batch loaders internally. Each loader knows how many proxies it can initialize: 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1. The goal is to minimize the memory consumption for loader creation and to create enough loaders that every possible batch fetch can be produced. Another goal is to minimize the number of SQL SELECTs, obviously. To initialize 119 proxies Hibernate executes seven batches (you probably expected six, because 6 x 20 > 119). The batch loaders that are applied are five times 20, one time 10, and one time 9, automatically selected by Hibernate.

but I still don't understand how it works.
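For context, batch fetching is enabled with a mapping along these lines (a simplified, hypothetical example, not the book's code):

    // Illustrative mapping only: @BatchSize caps how many uninitialized
    // Item proxies Hibernate may initialize with a single SELECT.
    import org.hibernate.annotations.BatchSize;

    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;

    @Entity
    @BatchSize(size = 20) // the maximum batch size from the example above
    class Item {
        @Id
        Long id;
    }

    @Entity
    class Bid {
        @Id
        Long id;

        @ManyToOne(fetch = FetchType.LAZY)
        Item item; // loaded as a lazy proxy, initialized in batches of up to 20
    }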

  1. Why 11 batch loaders?
  2. Why can batch loaders initialize 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 proxies?

If anybody could present a step-by-step algorithm ... :)

asked Aug 12 '10 by Maciek Kreft



2 Answers

Limiting the set of batch sizes helps avoid creating a large number of different prepared statements.

Each query (prepared statement) needs to be parsed and its execution plan needs to be calculated and cached by the database. This process may be much more expensive than the actual execution of the query for which the statement has already been cached.

A large number of different statements may also push other cached statements out of the cache, degrading overall application performance.

Also, since a hard parse is generally very expensive, it is usually faster to execute multiple cached prepared statements (even with multiple database round trips) than to parse and execute a new one. So, besides the obvious benefit of reducing the number of different statements, it may actually be faster to retrieve all 119 entities by executing seven cached statements than to create and execute a single new statement containing all 119 ids.
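For example, with a batch size of 20 the generated statement always has exactly 20 placeholders in its IN list, regardless of which ids are being fetched, so its text never changes and it can stay cached. A simplified illustration (hypothetical Item table, not the exact SQL Hibernate emits):

    import java.util.Collections;

    public class BatchedSelect {
        public static void main(String[] args) {
            int batchSize = 20;
            // The statement text depends only on the batch size, not on the id values,
            // so the same prepared statement (and its cached plan) can be reused.
            String sql = "select i.* from Item i where i.id in ("
                    + String.join(", ", Collections.nCopies(batchSize, "?"))
                    + ")";
            System.out.println(sql);
        }
    }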

As already mentioned in the comments, Hibernate invokes the ArrayHelper.getBatchSizes method to determine the batch sizes for a given maximum batch size.
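The following is only a sketch of that computation, reconstructed from the behaviour described above (halve the size while it is large, then step down 10, 9, ..., 1), not Hibernate's actual source:

    import java.util.ArrayList;
    import java.util.List;

    public class BatchSizes {

        // Next smaller batch size: halve large sizes, fall back to 10 once halving
        // would drop below it, then count down 10, 9, ..., 1.
        static int nextBatchSize(int batchSize) {
            if (batchSize <= 10) {
                return batchSize - 1;
            } else if (batchSize / 2 < 10) {
                return 10;
            } else {
                return batchSize / 2;
            }
        }

        static List<Integer> getBatchSizes(int maxBatchSize) {
            List<Integer> sizes = new ArrayList<>();
            for (int size = maxBatchSize; size >= 1; size = nextBatchSize(size)) {
                sizes.add(size);
            }
            return sizes;
        }

        public static void main(String[] args) {
            // A maximum batch size of 20 yields the 11 loaders from the book's example.
            System.out.println(getBatchSizes(20)); // [20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
            System.out.println(getBatchSizes(40)); // [40, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
        }
    }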

answered Oct 16 '22 by Dragan Bozanovic


I couldn't find any information on the web about how Hibernate handles batch loading, but judging from the description you quoted, one could guess the following:

Why 11 batch loaders?

With a batch size of 20, if you want to minimize the number of loaders required for any combination of proxies, there are basically two options:

  • create a loader for 1, 2, 3, 4, 5, 6, 7, ..., 20, 21, 22, 23, ..., N uninitialized proxies (stupid!) OR
  • create a loader for every N between 1 and 10, and then create further loaders by repeatedly halving batch_size (batch_size, batch_size/2, batch_size/4, ..., down to just above 10)

Example: for a batch size of 40, you would end up with loaders for 40, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 proxies.

  1. If you have 33 uninitialized proxies, you can use the following loaders: 20, 10, 3
  2. If you have 119 uninitialized proxies, you can use the following loaders: 40 (x2), 20, 10, 9
  3. ...

Why can batch loaders initialize 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 proxies?

I think the Hibernate team chose this as a balance between the number of loaders required for loading a "common" number N of uninitialized proxies and memory consumption. They could have created a loader for every N between 1 and batch_size, but I suspect that the loaders have a considerable memory footprint, so this is a trade-off. The algorithm could be something like this (educated guess):

  1. n = batch_size; while (n > 10)

    1.1. loader(n); n = n / 2

  2. for n = 10 down to 1, create loader(n)
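Putting the two ideas together, Hibernate can then cover any number of uninitialized proxies by greedily picking the largest available batch size that still fits. A small sketch of that selection (my own illustration, not Hibernate code), which reproduces the book's "5 x 20, 1 x 10, 1 x 9" result for 119 proxies:

    import java.util.ArrayList;
    import java.util.List;

    public class BatchSelection {

        // Greedily cover 'count' uninitialized proxies with the available batch sizes,
        // always taking the largest size that fits the remaining count.
        static List<Integer> selectBatches(int count, int[] sizesDescending) {
            List<Integer> batches = new ArrayList<>();
            int remaining = count;
            while (remaining > 0) {
                for (int size : sizesDescending) {
                    if (size <= remaining) {
                        batches.add(size);
                        remaining -= size;
                        break;
                    }
                }
            }
            return batches;
        }

        public static void main(String[] args) {
            int[] sizes = {20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1}; // loaders for batch size 20
            System.out.println(selectBatches(119, sizes)); // [20, 20, 20, 20, 20, 10, 9] -> 7 batches
            System.out.println(selectBatches(33, sizes));  // [20, 10, 3]
        }
    }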

answered Oct 16 '22 by Miguel Ping