I recently started working with AWS Database Migration Service (DMS) and am running into some issues.
I am currently attempting to migrate a 10 GB Oracle DB to AWS RDS PostgreSQL. It works, but with surprisingly high memory requirements; it feels as if DMS loads the entire DB into memory. I started with a dms.r4.large (15.5 GB) instance, but the task fails with "cannot allocate memory" errors at approximately 98%. It runs smoothly on a dms.r4.xlarge (30.5 GB).
As you can see in the screenshot (FreeableMemory, minimum statistic), the instance constantly runs "full" before all memory is released when the task finishes (or crashes).
Is there any setting to change this, and why does it behave like this? It makes the whole task unnecessarily expensive...
As confirmed by AWS, this was indeed a bug in the latest engine version (v3.1.3). AWS provided the following additional insights for estimating the actual memory requirements:
Full LOB mode (using single row insert + update, commit rate)
Memory: (# of LOB columns in a table) x (number of tables loaded in parallel, default 8) x (LOB chunk size) x (commit rate during full load) = 2 x 8 x 64 KB x 10,000 ≈ 9.8 GiB
Note: You may consider reducing the "commit rate during full load" value, because memory is allocated roughly according to the formula above.
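To make the arithmetic concrete, here is a minimal sketch of the Full LOB estimate in Python. The function name and defaults are mine; the values mirror AWS's example above, where the parallel-table count and the commit rate correspond to the MaxFullLoadSubTasks and CommitRate task settings.

```python
# Illustrative helper for the Full LOB mode formula above.
# Names are hypothetical; the default values mirror AWS's example.

def full_lob_memory_bytes(lob_columns: int,
                          parallel_tables: int = 8,   # MaxFullLoadSubTasks default
                          lob_chunk_kb: int = 64,     # LobChunkSize task setting
                          commit_rate: int = 10_000   # CommitRate task setting
                          ) -> int:
    """Rough memory footprint of a full load in Full LOB mode."""
    return lob_columns * parallel_tables * lob_chunk_kb * 1024 * commit_rate

print(f"{full_lob_memory_bytes(lob_columns=2) / 1024**3:.1f} GiB")  # ~9.8 GiB

# Halving CommitRate halves the estimate:
print(f"{full_lob_memory_bytes(2, commit_rate=5_000) / 1024**3:.1f} GiB")  # ~4.9 GiB
```

At roughly 9.8 GiB for the load buffers alone, the 15.5 GB of a dms.r4.large leaves little headroom for the rest of the replication engine, which is consistent with the task dying near the end of the full load.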
Limited LOB mode (using array)
Memory: (# of LOB columns in a table) x (number of tables loaded in parallel, default 8) x maxLobSize x bulkArraySize = 2 x 8 x 4096 KB x 1,000
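The same arithmetic applies to Limited LOB mode, and the knobs in both formulas map to the task-settings JSON, which can be updated via boto3 while the task is stopped. The sketch below works under that assumption; the task ARN is a placeholder and the lowered values are illustrative, not recommendations.

```python
import json
import boto3

def limited_lob_memory_bytes(lob_columns: int,
                             parallel_tables: int = 8,    # MaxFullLoadSubTasks default
                             max_lob_kb: int = 4096,      # LobMaxSize task setting
                             bulk_array_size: int = 1000) -> int:
    """Rough memory footprint of a full load in Limited LOB mode."""
    return lob_columns * parallel_tables * max_lob_kb * 1024 * bulk_array_size

print(f"{limited_lob_memory_bytes(lob_columns=2) / 1024**3:.1f} GiB")  # ~62.5 GiB for AWS's example values

# Applying lower values through the task settings (the task must be stopped first).
# The ARN is a placeholder; the settings shown are illustrative only.
dms = boto3.client("dms")
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:eu-west-1:123456789012:task:EXAMPLE",
    ReplicationTaskSettings=json.dumps({
        "TargetMetadata": {
            "SupportLobs": True,
            "FullLobMode": False,
            "LimitedSizeLobMode": True,
            "LobMaxSize": 512,         # KB; much lower than the 4096 KB in the example
        },
        "FullLoadSettings": {
            "MaxFullLoadSubTasks": 4,  # fewer tables loaded in parallel
            "CommitRate": 5000,        # halves the Full LOB estimate
        },
    }),
)
```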