Currently the AWS CLI doesn't support UNIX-style wildcards in a command's "path" argument. However, it is easy to replicate this functionality using the --exclude and --include parameters available on several aws s3 commands.
A key such as /sync following the S3 bucket name tells the AWS CLI to upload the files under the /sync folder in S3. If the /sync folder does not exist in S3, it is created automatically.
To download multiple files from an S3 bucket to your current directory, you can use the --recursive, --exclude, and --include flags.
The order of the parameters matters.
Example command:
aws s3 cp s3://data/ . --recursive --exclude "*" --include "2016-08*"
For more info on how to use these filters: http://docs.aws.amazon.com/cli/latest/reference/s3/#use-of-exclude-and-include-filters
The Order of the Parameters Matters
The --exclude and --include filters must be used in a specific order: exclude first, then include. The reverse order will not work.
aws s3 cp s3://data/ . --recursive --include "2016-08*" --exclude "*"
This will fail because the order of the parameters matters here: the --include "2016-08*" is overridden by the later --exclude "*", so everything is excluded.
aws s3 cp s3://data/ . --recursive --exclude "*" --include "2016-08*"
This one will work because we first excluded everything and then included the specific prefix.
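The last-match-wins rule behind this can be sketched locally (this is a simulation of the documented filter semantics, not the real CLI; simulate_filters is a made-up helper):

```shell
# A sketch of the CLI's filter semantics: filters are evaluated left to
# right and the LAST matching filter decides the key's fate.
# simulate_filters KEY FILTER...  where each FILTER is exclude:PATTERN
# or include:PATTERN.
simulate_filters() {
  local key="$1"; shift
  local decision="include"            # with no filters, every key is acted on
  local rule pat
  for rule in "$@"; do                # walk the filters left to right
    pat="${rule#*:}"
    case "$key" in
      $pat) decision="${rule%%:*}" ;; # a later match overrides an earlier one
    esac
  done
  echo "$decision"
}

# Correct order, as in the working command above:
simulate_filters "2016-08-01.log" exclude:'*' include:'2016-08*'   # include
# Reversed order, as in the failing command:
simulate_filters "2016-08-01.log" include:'2016-08*' exclude:'*'   # exclude
```

Note that in a shell case statement an unquoted variable is treated as a glob pattern, which is what makes this one-function simulation possible.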
One caveat: when copying in the other direction, from a local directory up to S3, the command needs to be adjusted as follows:
aws s3 cp . s3://data/ --recursive --exclude "*" --include "2016-08*" --exclude "*/*"
The . needs to be right after the cp. The final --exclude "*/*" is to make sure that nothing is picked up from any subdirectories that --recursive pulls in (learned that one by mistake...).
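Why the trailing --exclude "*/*" matters can be seen with a small local simulation (would_upload is a made-up helper; listing the filters in reverse lets case's first-match behavior reproduce the CLI's last-match-wins rule):

```shell
# would_upload KEY — applies --exclude "*" --include "2016-08*" --exclude "*/*"
# to a relative path. case takes the first matching branch, so checking the
# filters in reverse order mimics last-match-wins.
would_upload() {
  case "$1" in
    */*)      echo no  ;;  # final --exclude "*/*": anything inside a subdirectory
    2016-08*) echo yes ;;  # --include "2016-08*": top-level names for Aug 2016
    *)        echo no  ;;  # leading --exclude "*": everything else
  esac
}

would_upload "2016-08-01.log"          # yes: top-level match
would_upload "2016-08-old/backup.log"  # no: inside a subdirectory, despite the prefix
would_upload "notes.txt"               # no: excluded by "*"
```

Note that in both case patterns and the CLI's filters, * matches across / as well, which is exactly why "2016-08*" alone would drag in a directory named 2016-08-old/ and its contents.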
Hopefully this helps anyone still struggling with this by the time they get here.