
Moving 5,000,000 rows to another PostgreSQL DB with Clojure & JDBC

I am trying to move 5,000,000 rows from one PostgreSQL DB to another. Both connections go through a HikariCP connection pool.

I went through a lot of documentation and posts and ended up with the code below, but it is not really usable:

(jdbc/with-db-connection [tx {:datasource source-db}]
  (jdbc/query tx
              [(jdbc/prepare-statement (jdbc/get-connection tx)
                                       answer-sql
                                       {:fetch-size 100000})]
              {:result-set-fn (fn [result-set]
                                (jdbc/insert-multi!
                                 {:datasource target-db}
                                 :migrated_answers
                                 result-set))}))

I've tried a lot of slightly different forms of this; jdbc/with-db-transaction, and anything else I could think of, didn't help much.

  1. A lot of tutorials and posts only describe how to process the result set as a whole. That is absolutely fine with small tables that fit in RAM, and it seems fast, but that is not my case.

  2. When I properly use :fetch-size and my RAM doesn't explode (hocus pocus), the transfer IS very slow, with both connections switching between 'active' and 'idle in transaction' states on the DB side. I've never waited long enough to see any of the data actually transferred!

    When I build the same simple batch in Talend Open Studio (an ETL tool that generates Java code), it transfers all the data in 5 minutes, and its cursor size is "also" set to 100000 there. I would expect clean Clojure code to be at least as fast.

  3. The fastest result I've gotten was with the code below, I think because of the :as-arrays? parameter. If I don't use the :max-size parameter, memory explodes because the result is not processed lazily, so I can't use this for the whole transfer. Why? I don't understand the rules here.

    (jdbc/with-db-transaction [tx {:datasource source-db}]
      (jdbc/query tx
                  [(jdbc/prepare-statement (:connection tx)
                                           answer-sql
                                           {:result-type :forward-only
                                            :concurrency :read-only
                                            :fetch-size 2000
                                            :max-size 250000})]
                  {:as-arrays? true
                   :result-set-fn (fn [result-set]
                                    (let [keys (first result-set)
                                          values (rest result-set)]
                                      (jdbc/insert-multi!
                                       {:datasource dct-db}
                                       :dim_answers
                                       keys
                                       values)))}))
    

I would appreciate any help or info about what I am clearly missing.

Akiz asked Mar 07 '23


1 Answer

I think the key observation here is that while your query is lazily streaming results from one DB, your insert is just one giant write to the other DB. With regard to memory usage, I don't think it makes much difference whether you've streamed the query results lazily or not if you're collecting all those results (in-memory) for a single write operation at the end.

One way to balance memory usage with throughput is to batch the writes:

(jdbc/with-db-transaction [tx {:datasource source-db}]
  (jdbc/query tx
    [(jdbc/prepare-statement (:connection tx)
                             answer-sql
                             {:result-type :forward-only
                              :concurrency :read-only
                              :fetch-size 2000})]
    {:as-arrays? true
     :result-set-fn (fn [result-set]
                      (let [keys (first result-set)
                            values (rest result-set)]
                        (doseq [batch (partition-all 2000 values)]
                          (jdbc/insert-multi! {:datasource dct-db}
                                              :dim_answers
                                              keys
                                              batch))))}))

The difference is that this uses partition-all to insert the values in batches (the same size as :fetch-size, but I'm sure that could be tuned). You can compare the performance/memory usage of this approach with the other by setting the JVM max heap size to something like -Xmx1g; I couldn't get the non-batched version to complete with that heap size.
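
As a quick illustration (generic Clojure, nothing JDBC-specific), partition-all chunks a possibly lazy sequence and keeps the final shorter chunk, so no trailing rows are dropped:

(partition-all 3 [1 2 3 4 5 6 7])
;; => ((1 2 3) (4 5 6) (7))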

I was able to migrate 6 million small-ish rows between local PostgreSQL DBs on my laptop in ~1 minute, with the java process using <400MB of memory. I also used HikariCP.

If you do insert in batches, you may want to consider wrapping all the inserts in a single transaction if that suits your use case. I left the additional transaction out of the code above for brevity; a minimal sketch of that variant follows.
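
This is only a sketch under the same bindings as above; the helper name insert-all-in-one-tx is hypothetical. It would be passed as the :result-set-fn in the query, and it opens one transaction on the target datasource so every batch commits together or rolls back as a whole:

(defn- insert-all-in-one-tx
  ;; Sketch only: insert all rows from an :as-arrays? result set into the
  ;; target DB in batches, inside a single transaction on the target.
  [result-set]
  (let [keys   (first result-set)
        values (rest result-set)]
    (jdbc/with-db-transaction [target-tx {:datasource dct-db}]
      (doseq [batch (partition-all 2000 values)]
        (jdbc/insert-multi! target-tx :dim_answers keys batch)))))

Everything else in the query stays the same; only {:result-set-fn insert-all-in-one-tx} changes.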

If I don't use the :max-size parameter, memory explodes

I can't find any reference (besides a spec) to this option in the latest clojure.java.jdbc, and it didn't affect my testing. I do see a :max-rows but surely you don't want that.

I think it is because of the :as-arrays? parameter.

I would expect this to be beneficial to memory usage; the row vectors should be more space-efficient than row maps.
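
For intuition, here is a hypothetical two-column result in both shapes (illustrative data only):

;; Default: one map per row, the column keys repeated in every row
[{:id 1 :text "yes"} {:id 2 :text "no"}]

;; With :as-arrays? true: a header row of column keys, then plain row vectors
[[:id :text] [1 "yes"] [2 "no"]]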

Taylor Wood answered Mar 23 '23