I am using s3cmd to upload some files to an S3 bucket. The problem is: how do I feed some config values to it programmatically? I am not using version 1.5, so I don't have the --access_key and --secret_key flags available. I only have --configure, which creates a config file interactively, and -c, which has to be fed a config file. But how do I actually build that config file? The config file built by --configure contains numerous options; I only need to pass the access key and secret key to my s3cmd command.
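For reference, the goal is an invocation along these lines, where the config file is generated non-interactively (the config path and bucket name here are just placeholders):

# hypothetical path and bucket, for illustration only
s3cmd -c /path/to/generated.s3cfg put myfile.txt s3://my-bucket/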
I've been struggling with the same issue, but luckily, since I'm using docker, I could generate the config file during the image build.
Dockerfile:
FROM ubuntu:xenial

ARG ACCESS_KEY
ARG SECRET_KEY

COPY template.s3cfg /tmp/template.s3cfg

# Install envsubst (shipped in gettext-base) and build s3cmd 1.6.0 from source
RUN apt-get -y update; \
    apt-get -y install python-setuptools wget gettext-base; \
    wget http://netix.dl.sourceforge.net/project/s3tools/s3cmd/1.6.0/s3cmd-1.6.0.tar.gz; \
    tar xvfz s3cmd-1.6.0.tar.gz; \
    cd s3cmd-1.6.0; \
    python setup.py install

# Render the config template with the build args substituted in
RUN ACCESS_KEY=$ACCESS_KEY \
    SECRET_KEY=$SECRET_KEY \
    bash -c '/usr/bin/envsubst < "/tmp/template.s3cfg" > "/root/.s3cfg"'

CMD [<whatever you wanna run>]
template.s3cfg:
[default]
access_key = ${ACCESS_KEY}
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = None
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = nyc3.digitaloceanspaces.com
host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = ${SECRET_KEY}
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_http_expect = False
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
Now, when building the image, you simply specify the ACCESS_KEY and SECRET_KEY build arguments and you're good to go.
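As a sketch, the build command would look something like this (the image tag and the two variables holding your credentials are placeholders):

# $MY_ACCESS_KEY, $MY_SECRET_KEY and the my-s3cmd-image tag are placeholders
docker build \
  --build-arg ACCESS_KEY="$MY_ACCESS_KEY" \
  --build-arg SECRET_KEY="$MY_SECRET_KEY" \
  -t my-s3cmd-image .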
Of course, you can substitute even more values that way. You could wrap this in a bash script, or echo the config into the file so you don't lose your existing profiles. You don't have to use docker for it at all; that's just my use case.
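Outside of docker, a minimal sketch might look like this (assuming template.s3cfg sits in the current directory and the two variables are already set in your environment):

#!/usr/bin/env bash
set -eu
# ACCESS_KEY and SECRET_KEY must already be set; set -u makes this fail loudly otherwise
export ACCESS_KEY="${ACCESS_KEY}" SECRET_KEY="${SECRET_KEY}"
# Render the template into s3cmd's default config location
envsubst < template.s3cfg > "$HOME/.s3cfg"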
Long story short: use envsubst.
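If you just want to see what envsubst does, a one-liner with a throwaway value illustrates it:

# prints: access_key = foo
echo 'access_key = ${ACCESS_KEY}' | ACCESS_KEY=foo envsubst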