I'm using Ansible to copy a directory (900 files, 136 MB) from one host to another:
---
- name: copy a directory
  copy: src={{some_directory}} dest={{remote_directory}}
This operation takes an incredible 17 minutes, while a simple scp -r <src> <dest> takes a mere 7 seconds.
I have tried Accelerated Mode, which according to the Ansible docs "can be anywhere from 2-6x faster than SSH with ControlPersist enabled, and 10x faster than paramiko", but to no avail.
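For anyone trying to reproduce this, Accelerated Mode is switched on at the play level; a minimal sketch (the host group is a placeholder, and the feature is deprecated in newer Ansible releases) would look something like:

- hosts: all          # placeholder host group
  accelerate: true    # enables Accelerated Mode for this play
  tasks:
    - copy: src={{some_directory}} dest={{remote_directory}}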
If you just want to copy a single file from the Ansible control master to remote hosts, the copy (scp) module is perfectly fine. By default, the copy module forces the copy and overwrites an existing file at the destination when one is present.
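If that overwrite behaviour is not wanted, the copy module takes a force parameter; a small sketch with placeholder paths:

- name: copy only when the destination does not already exist
  copy:
    src: some_file            # placeholder source path
    dest: /tmp/some_file      # placeholder destination path
    force: no                 # only transfer if the destination file is missing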
TL;DR: use synchronize instead of copy.
Here's the copy command I'm using:
- copy: src=testdata dest=/tmp/testdata/
As a guess, I assume the sync operations are slow. The files module documentation implies this too:
The "copy" module recursively copy facility does not scale to lots (>hundreds) of files. For alternative, see synchronize module, which is a wrapper around rsync.
Digging into the source shows each file is processed with SHA1, implemented using hashlib.sha1. A local test implies that only takes 10 seconds for 900 files (that happen to take 400 MB of space).
So, the next avenue. The copy is handled with module_utils/basic.py's atomic_move method. I'm not sure if accelerated mode helps (it's a mostly-deprecated feature), but I tried pipelining, putting this in a local ansible.cfg:

[ssh_connection]
pipelining=True
It didn't appear to help; my sample took 24 minutes to run. There's obviously a loop that checks a file, uploads it, fixes permissions, then starts on the next file. That's a lot of commands, even if the SSH connection is left open. Reading between the lines it makes a little bit of sense: the "file transfer" can't be done via pipelining, I think.
So, following the hint to use the synchronize command:
- synchronize: src=testdata dest=/tmp/testdata/
That took 18 seconds, even with pipelining=False. Clearly, the synchronize command is the way to go in this case.
Keep in mind synchronize uses rsync, which defaults to mod-time and file size. If you want or need checksumming, add checksum=True to the command. Even with checksumming enabled the time didn't really change: still 15-18 seconds. I verified the checksum option was on by running ansible-playbook with -vvvv; it can be seen here:
ok: [testhost] => {"changed": false, "cmd": "rsync --delay-updates -FF --compress --checksum --archive --rsh 'ssh -o StrictHostKeyChecking=no' --out-format='<<CHANGED>>%i %n%L' \"testdata\" \"user@testhost:/tmp/testdata/\"", "msg": "", "rc": 0, "stdout_lines": []}
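For completeness, a sketch of the task with checksumming turned on, using the same paths as the example above:

- synchronize: src=testdata dest=/tmp/testdata/ checksum=True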
synchronize configuration can be difficult in environments with become_user. For one-time deployments you can archive the source directory and copy it with the unarchive module:
- name: copy a directory
  unarchive:
    src: some_directory.tar.gz
    dest: "{{ remote_directory }}"
    creates: "{{ remote_directory }}/indicator_file"
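The archive itself has to exist before the play runs; a minimal sketch of building it on the control machine (the tar invocation and paths are assumptions to adapt to your layout):

- name: build the archive on the control machine (hypothetical paths)
  local_action: command tar czf some_directory.tar.gz -C /path/to some_directory
  run_once: true    # only build the archive once, even with many target hosts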