 

How to copy folders with 'gsutil'?

I've read the documentation on the gsutil cp command, but I still don't understand how to copy folders while keeping the same permissions. I tried this command:

gsutil cp gs://bucket-name/folder1/folder_to_copy gs://bucket-name/folder1/new_folder

But it resulted in the error:

CommandException: No URLs matched

However, when I tried it with a slash at the end of each name, it didn't show any error:

gsutil cp gs://bucket-name/folder1/folder_to_copy/ gs://bucket-name/folder1/new_folder/

However, there was no new folder in the bucket when I checked with gsutil ls. What am I doing wrong?

Praytic asked Dec 24 '19 09:12

People also ask

Does gsutil cp create folders?

You cannot create an empty folder with gsutil on GCS, because folders don't exist as real objects there. But copying a folder that contains files with gsutil does create objects under the new prefix, which then shows up as a folder.
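For example, a minimal sketch (the bucket and folder names are placeholders): uploading a local directory that contains at least one file is what makes the "folder" appear in the bucket:

```shell
# Create a local directory with a file in it:
mkdir -p my_folder
echo "hello" > my_folder/file.txt

# Recursively uploading it creates objects under the my_folder/ prefix;
# the Cloud Console then renders that prefix as a folder:
gsutil cp -r my_folder gs://bucket-name/folder1/

# The "folder" is now visible because objects share its prefix:
gsutil ls gs://bucket-name/folder1/my_folder/
```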


2 Answers

You should use the -r option to copy a folder and its contents recursively:

gsutil cp -r gs://bucket-name/folder1/folder_to_copy gs://bucket-name/folder1/new_folder

Note that this will only work if folder_to_copy contains files. That's because Cloud Storage doesn't really have "folders" as one would expect from a typical GUI; it instead provides the illusion of a hierarchical file tree on top of a "flat" namespace, as explained here. In other words, the files within a folder are simply objects whose names carry the folder path as a prefix. Therefore, gsutil cp expects actual objects to copy, and an empty directory is something the CLI does not understand.
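To make this concrete (bucket and object names below are placeholders): an empty prefix matches no objects, while a populated "folder" is just a set of object names sharing that prefix:

```shell
# No object carries this prefix, so nothing matches and the command
# fails with "CommandException: No URLs matched":
gsutil ls gs://bucket-name/folder1/empty_folder

# A "folder" with files in it is really just objects whose names
# start with the folder path, e.g. folder1/folder_to_copy/a.txt:
gsutil ls gs://bucket-name/folder1/folder_to_copy/
```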

Another approach is to use rsync instead, which tolerates empty folders and also synchronizes the contents of the source and destination folders:

gsutil rsync -r gs://bucket-name/folder1/folder_to_copy gs://bucket-name/folder1/new_folder

If you also want to preserve the ACLs (permissions) of the objects, add the -p option (note that preserving ACLs requires OWNER access to the objects):

gsutil rsync -p -r gs://bucket-name/folder1/folder_to_copy gs://bucket-name/folder1/new_folder
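To confirm the ACLs actually came across, you can inspect an object's metadata after the copy; the object name below is a placeholder:

```shell
# -L prints full metadata, including the ACL, for each matched object:
gsutil ls -L gs://bucket-name/folder1/new_folder/some_object

# Or print just the ACL as JSON:
gsutil acl get gs://bucket-name/folder1/new_folder/some_object
```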
Maxim answered Oct 11 '22 13:10


To add to @Maxim's answer, you might consider passing the -m argument to gsutil to enable parallel copying.

gsutil -m cp -r gs://bucket-name/folder1/folder_to_copy gs://bucket-name/folder1/new_folder

Note that -m is a top-level gsutil option, so it goes before the cp subcommand.

As advised in the gsutil documentation, the -m arg might not yield better performance on a slow network (e.g., a typical home connection). But for inter-bucket copies (between machines in a data center), performance is likely to "significantly improve", to quote the gsutil manual. See below:

 -m          Causes supported operations (acl ch, acl set, cp, mv, rm, rsync,
              and setmeta) to run in parallel. This can significantly improve
              performance if you are performing operations on a large number of
              files over a reasonably fast network connection.

              gsutil performs the specified operation using a combination of
              multi-threading and multi-processing, using a number of threads
              and processors determined by the parallel_thread_count and
              parallel_process_count values set in the boto configuration
              file. You might want to experiment with these values, as the
              best values can vary based on a number of factors, including
              network speed, number of CPUs, and available memory.

              Using the -m option may make your performance worse if you
              are using a slower network, such as the typical network speeds
              offered by non-business home network plans. It can also make
              your performance worse for cases that perform all operations
              locally (e.g., gsutil rsync, where both source and destination
              URLs are on the local disk), because it can "thrash" your local
              disk.

              If a download or upload operation using parallel transfer fails
              before the entire transfer is complete (e.g. failing after 300 of
              1000 files have been transferred), you will need to restart the
              entire transfer.

              Also, although most commands will normally fail upon encountering
              an error when the -m flag is disabled, all commands will
              continue to try all operations when -m is enabled with multiple
              threads or processes, and the number of failed operations (if any)
              will be reported as an exception at the end of the command's
              execution.
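The parallel_thread_count and parallel_process_count values mentioned above live in the boto configuration file (typically ~/.boto), under its [GSUtil] section. They can also be overridden per invocation with gsutil's top-level -o flag; the counts below are just example values to experiment with:

```shell
# Equivalent to setting these in ~/.boto under [GSUtil]:
#   parallel_thread_count = 10
#   parallel_process_count = 4
gsutil -o "GSUtil:parallel_thread_count=10" \
       -o "GSUtil:parallel_process_count=4" \
       -m cp -r gs://bucket-name/folder1/folder_to_copy gs://bucket-name/folder1/new_folder
```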

Note: at the time of this writing, Python 3.8 seems to lead to problems with the -m flag; use Python 3.7. More info in this GitHub issue.

amiabl answered Oct 11 '22 13:10