I'm providing an API built with Rust and Rocket, deployed via Amazon Elastic Container Service. Whenever I put or get objects to/from Amazon S3, it works great locally, but when deployed on Amazon ECS I get this run-time error:
HttpDispatch(HttpDispatchError { message: "The OpenSSL library reported an error" })
This also happens when I run the Docker image on my machine.
I've added comments where the error is happening:
use super::types::SomeCustomType;
use rusoto_core::{DefaultCredentialsProvider, Region, default_tls_client};
use rusoto_s3::{S3, S3Client, GetObjectRequest};

pub fn load_data_from_s3(object_name: String) -> SomeCustomType {
    let credentials = DefaultCredentialsProvider::new().unwrap();
    let client = S3Client::new(default_tls_client().unwrap(), credentials, Region::UsWest2);
    let mut request = GetObjectRequest::default();
    request.bucket = "bucket-name".to_string();
    request.key = object_name;
    match client.get_object(&request) {
        // *** This is going to fail in the Docker container at run-time ***
        Ok(file) => {
            // this part is actually not important for this example,
            // so code has been omitted
            someCustomType
        }
        Err(e) => {
            println!("{:?}", e); // *** errors out here! ***
            SomeCustomType::default()
        }
    }
}
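As an aside, debugging this is easier when the error is propagated to the caller instead of being printed and replaced with a default value, so the Rocket handler can surface the real cause. A minimal, dependency-free sketch of that pattern (SomeCustomType and the S3 call are mocked here; this is not the project's actual code):

```rust
#[derive(Debug, Default)]
struct SomeCustomType;

// Stand-in for client.get_object(&request): fails for an empty key the way
// the real call fails on a TLS error, but returns a plain String error here.
fn fetch(object_name: &str) -> Result<SomeCustomType, String> {
    if object_name.is_empty() {
        Err("HttpDispatchError: The OpenSSL library reported an error".to_string())
    } else {
        Ok(SomeCustomType::default())
    }
}

// Propagate the error instead of swallowing it with SomeCustomType::default().
fn load_data_from_s3(object_name: &str) -> Result<SomeCustomType, String> {
    fetch(object_name)
}

fn main() {
    match load_data_from_s3("some-object") {
        Ok(v) => println!("loaded: {:?}", v),
        Err(e) => eprintln!("failed: {}", e),
    }
}
```

With this shape, the caller decides whether a missing object is fatal, and the full error stays visible in the logs.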
Cargo.toml
[dependencies]
brotli = "1.0.8"
chrono = "0.3.1"
fnv = "1.0.5"
rusted_cypher = "1.1.0"
rocket = { git = "https://github.com/SergioBenitez/Rocket", rev = "614297eb9bc8fa5d9c54f653dc35b8cc3a22891f" }
rocket_codegen = { git = "https://github.com/SergioBenitez/Rocket", rev = "614297eb9bc8fa5d9c54f653dc35b8cc3a22891f" }
rocket_contrib = { git = "https://github.com/SergioBenitez/Rocket", rev = "614297eb9bc8fa5d9c54f653dc35b8cc3a22891f" }
rusoto_core = "0.25.0"
rusoto_s3 = "0.25.0"
serde = "1.0.8"
serde_json = "1.0.2"
serde_derive = "1.0.8"
This is how I build the Docker image on macOS:
cargo clean &&
docker run -v $PWD:/volume -w /volume -t manonthemat/muslrust cargo build --release &&
docker build -t dockerimagename .
The Docker image manonthemat/muslrust is essentially clux/muslrust. I had to build my own image because I needed a more recent nightly build of Rust.
This is the (simplified) Dockerfile which has been working great for me so far:
FROM scratch
ADD target/x86_64-unknown-linux-musl/release/project /
CMD ["/project"]
Some of the things I've tried to resolve the issue:
Added openssl = "0.9.14" to the Cargo.toml.
Changed my Dockerfile to this:
FROM alpine:edge
ADD target/x86_64-unknown-linux-musl/release/project /
RUN apk add --no-cache curl perl openssl-dev ca-certificates linux-headers build-base zsh
CMD ["/project"]
This also didn't change anything, but gave me some more options to look inside.
I changed the cross-compilation step after cargo clean to this:
docker run -v $PWD:/volume -w /volume -e RUST_LOG="rusoto,hyper=debug" -e OPENSSL_STATIC=1 -e OPENSSL_DIR=/usr/local -t manonthemat/muslrust cargo build --release --features "logging"
After the new Docker image was built, I got a shell:
docker run -i -e ROCKET_ENV=prod -e ROCKET_ADDRESS=0.0.0.0 -e RUST_LOG="rusoto,hyper=debug" dockerimagename /bin/zsh
There I executed my project while pointing SSL_CERT_DIR at a path that doesn't exist, with no change in behavior.
In the next run, I set it to point to a different path:
SSL_CERT_DIR=/etc/ssl/certs /project
and I got an interesting result when printing out the error of the client.get_object(&request) call:
Unknown("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>...
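The SSL_CERT_DIR override changed the behavior because OpenSSL consults that environment variable when locating CA certificates, and falls back to a directory chosen at build time otherwise. A hypothetical Rust sketch of that lookup order (the helper name and default path are illustrative, not OpenSSL's actual code):

```rust
// Illustrative sketch only: the real lookup happens inside OpenSSL's C code.
// If SSL_CERT_DIR is set it wins; otherwise a directory baked in at build
// time is used (which may not exist inside a musl/scratch container).
fn effective_cert_dir(env_value: Option<&str>) -> String {
    env_value
        .map(str::to_string)
        .unwrap_or_else(|| "/etc/ssl/certs".to_string())
}

fn main() {
    let from_env = std::env::var("SSL_CERT_DIR").ok();
    println!("using cert dir: {}", effective_cert_dir(from_env.as_deref()));
}
```

So a statically linked binary can have a perfectly working TLS stack and still fail the handshake simply because the default directory it was compiled with is empty or absent in the container.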
I replaced rusoto with the aws-sdk-rust crate, which resulted in this panic:
thread 'main' panicked at 'Error dispatching request: HttpDispatchError { message: "the handshake failed" }', /checkout/src/libcore/result.rs:860 stack backtrace:
0: std::sys::imp::backtrace::tracing::imp::unwind_backtrace at ./checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::_print at ./checkout/src/libstd/sys_common/backtrace.rs:71
2: std::panicking::default_hook::{{closure}} at ./checkout/src/libstd/sys_common/backtrace.rs:60
at ./checkout/src/libstd/panicking.rs:355
3: std::panicking::default_hook
at ./checkout/src/libstd/panicking.rs:371
4: std::panicking::rust_panic_with_hook
at ./checkout/src/libstd/panicking.rs:549
5: std::panicking::begin_panic
at ./checkout/src/libstd/panicking.rs:511
6: std::panicking::begin_panic_fmt
at ./checkout/src/libstd/panicking.rs:495
7: rust_begin_unwind
at ./checkout/src/libstd/panicking.rs:471
8: core::panicking::panic_fmt
at ./checkout/src/libcore/panicking.rs:69
9: core::result::unwrap_failed
10: <aws_sdk_rust::aws::s3::s3client::S3Client<P, D>>::get_object
11: himitsu::ingest::load_data_from_s3
12: himitsu::ingest::load_data
13: himitsu::main
14: __rust_maybe_catch_panic
at ./checkout/src/libpanic_unwind/lib.rs:98
15: std::rt::lang_start
at ./checkout/src/libstd/panicking.rs:433
at ./checkout/src/libstd/panic.rs:361
at ./checkout/src/libstd/rt.rs:59
I installed a Linux distribution via VirtualBox on my Mac, updated libraries, installed the OpenSSL headers and Rust, then imported the project. Now I'm getting the SignatureDoesNotMatch error right away. I verified that I can access a Neo4j server via HTTPS through the VPN of the host machine, so SSL seems to work at least in parts.
Compiling and running the project on the Amazon ECS-Optimized Amazon Linux AMI 2017.03.a works. Building the Docker image works, too. Running the Docker image from within that system does not, as it returns standard_init_linux.go:178: exec user process caused "no such file or directory", even though the file is there, has the right permissions, and I can run other operations on it; it just won't execute. This is also the case when rolling back to a previous state that doesn't have any S3/OpenSSL dependencies, and it holds for both scratch and alpine base images. But if I build the Docker image with ubuntu as the base image, I get the pre-S3/OpenSSL version running. For the version with rusoto, I get an OpenSSL error, even when installing the OpenSSL library and its headers.
Compiled the Docker image on my Mac and pushed it to a private repo on Docker Hub. Pulled that Docker image via an SSH session onto the EC2 instance (same one as in 6). Running it now does not give me the "no such file or directory" error as in 6, but the good ol' HttpDispatch(HttpDispatchError { message: "The OpenSSL library reported an error" }) (now even when passing SSL_CERTS_DIR=/etc/ssl/certs into the container's environment).
Try running update-ca-certificates in the image. Note that this cannot work FROM scratch, since a scratch image contains no shell or update-ca-certificates binary for RUN to execute; use a minimal base that ships the ca-certificates package, e.g.:
FROM alpine:edge
ADD target/x86_64-unknown-linux-musl/release/project /
RUN apk add --no-cache ca-certificates && update-ca-certificates
CMD ["/project"]
These are the steps I've taken to make the deployment work on AWS. I'm sure there are ways to optimize this, and I will edit this post as I learn more about the process.
I built the binary on macOS:
docker run -v $PWD:/volume -w /volume -e RUST_LOG="rusoto,hyper=debug" -e OPENSSL_STATIC=1 -e OPENSSL_DIR=/usr/local -e SSL_CERT_DIR=/etc/ssl/certs -t manonthemat/muslrust cargo build --release --features "logging"
I modified the Dockerfile:
FROM alpine:edge
COPY target/x86_64-unknown-linux-musl/release/project /
RUN apk update && apk add --no-cache pkgconfig openssl-dev ca-certificates linux-headers && update-ca-certificates
CMD [ "/project" ]
I built the Docker image and verified it runs locally:
docker run -e SSL_CERT_DIR=/etc/ssl/certs secretuser/secretrepo:notsosecrettag
I tagged and pushed the docker image to the AWS repository
For a successful run on Amazon Elastic Container Service, I had to modify the task definition. In the containerDefinitions I had to up the memory and add this into the environment array:
{
    "name": "SSL_CERT_DIR",
    "value": "/etc/ssl/certs"
}
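A cheap guard against shipping an image without certificates again is a fail-fast startup check that the directory named in SSL_CERT_DIR actually exists. This is a hypothetical addition sketched here, not something the project contains:

```rust
use std::path::Path;

// Hypothetical startup check: return the cert directory if it exists,
// or an error message suitable for failing fast before serving traffic.
fn check_cert_dir(dir: &str) -> Result<String, String> {
    if Path::new(dir).is_dir() {
        Ok(dir.to_string())
    } else {
        Err(format!("certificate directory {} not found in image", dir))
    }
}

fn main() {
    let dir = std::env::var("SSL_CERT_DIR")
        .unwrap_or_else(|_| "/etc/ssl/certs".to_string());
    match check_cert_dir(&dir) {
        Ok(d) => println!("TLS certs expected in {}", d),
        Err(e) => eprintln!("startup check failed: {}", e),
    }
}
```

Failing at container start with a clear message is much easier to diagnose from ECS logs than an OpenSSL handshake error surfacing on the first S3 request.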
For some unknown and probably unrelated reason I also had to update the agents on the EC2 instances and then restart those.