 

Error saving Docker images to AWS S3 bucket from private Docker Registry

I'm trying to set up a private Docker Registry that stores its images in an AWS S3 bucket. The Registry itself seems to be working fine -- it starts up OK and I can authenticate to it over HTTPS. The problem is that I get an error when the Registry tries to write to S3, so I assume there is a permission problem with the S3 IAM policy.

The docker run command looks like this:

docker run -p 443:5000 \
  --link redis:redis \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_BUCKET=my-docker-registry \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=**** \
  -e REGISTRY_STORAGE_S3_SECRETKEY=**** \
  -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
  -v `pwd`/auth:/auth \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/my.com_chain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/my.com.key \
  -e REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR=redis \
  -e REGISTRY_REDIS_ADDR=redis:6379 \
  registry:2.5
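
For reference, the auth/htpasswd file mounted above can be generated with the htpasswd tool bundled in the registry image (the 2.x images of this era ship it; "myuser"/"mypassword" below are placeholder credentials):

mkdir -p auth
# Write a bcrypt-hashed entry to the file referenced by REGISTRY_AUTH_HTPASSWD_PATH
docker run --rm --entrypoint htpasswd registry:2.5 -Bbn myuser mypassword > auth/htpasswd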

And the S3 IAM policy looks like this:

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListAllMyBuckets"
         ],
         "Resource":"arn:aws:s3:::*"
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":"arn:aws:s3:::my-docker-registry"
      },
      {
         "Effect":"Allow",
         "Action":[
              "s3:PutObject",
              "s3:GetObject",
              "s3:DeleteObject",
              "s3:ListMultipartUploadParts",
              "s3:AbortMultipartUpload"
         ],
         "Resource":"arn:aws:s3:::my-docker-registry/*"
      }
   ]
}

The error log entry is:

level=error msg="error resolving upload: s3aws: AccessDenied: Access Denied\n\tstatus code: 403, request id: 2B224..." auth.user.name=my-user go.version=go1.6.3 http.request.host=my.domain.com http.request.id=13b79c07-... http.request.method=PATCH http.request.remoteaddr="xx.xx.xx.xx:41392" http.request.uri="/v2/my-test/blobs/uploads/467d94ea-2a77...?_state=zQd-..." http.request.useragent="docker/1.12.0 go/go1.6.3 git-commit/8eab123 kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))" instance.id=8a8db6f1-8fe4 vars.name=my-test vars.uuid=467d94ea-2a77 version=v2.5.0

I've used a similar policy for file uploads in other apps, so I'm not sure where the problem is. What do I need to change in the IAM policy to allow the registry to save to the S3 bucket?

-- asked by ldg, Aug 07 '16



2 Answers

I figured it out -- I'm not sure if something changed in how Docker saves image layers, but it seems you now need to add s3:ListBucketMultipartUploads to the bucket-level statement (the middle block below; the full IAM policy is shown for completeness):

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListAllMyBuckets"
         ],
         "Resource":"arn:aws:s3:::*"
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation",
            "s3:ListBucketMultipartUploads"
         ],
         "Resource":"arn:aws:s3:::my-docker-registry"
      },
      {
         "Effect":"Allow",
         "Action":[
              "s3:PutObject",
              "s3:GetObject",
              "s3:DeleteObject",
              "s3:ListMultipartUploadParts",
              "s3:AbortMultipartUpload"
         ],
         "Resource":"arn:aws:s3:::my-docker-registry/*"
      }
   ]
}

Seems to work well now.
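
As a quick sanity check (the domain and repository name below are just the placeholders from the question), a push that previously failed with the 403 should now complete:

docker pull alpine:latest
docker login my.domain.com
docker tag alpine:latest my.domain.com/my-test
docker push my.domain.com/my-test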

The next step is to move the above docker run arguments into a docker-compose file and add the redis container to it, which gives you a complete private registry setup -- roughly the sketch below.
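
A minimal docker-compose.yml sketch, assuming the same placeholder bucket, credentials and certificate paths as the question (compose's default network replaces --link, so the registry still reaches redis by service name):

version: '2'
services:
  redis:
    image: redis
  registry:
    image: registry:2.5
    ports:
      - "443:5000"
    depends_on:
      - redis
    volumes:
      - ./auth:/auth
      - ./certs:/certs
    environment:
      REGISTRY_STORAGE: s3
      REGISTRY_STORAGE_S3_BUCKET: my-docker-registry
      REGISTRY_STORAGE_S3_ACCESSKEY: "****"
      REGISTRY_STORAGE_S3_SECRETKEY: "****"
      REGISTRY_STORAGE_S3_REGION: us-east-1
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: "Registry Realm"
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/my.com_chain.crt
      REGISTRY_HTTP_TLS_KEY: /certs/my.com.key
      REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: redis
      REGISTRY_REDIS_ADDR: "redis:6379"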

-- answered by ldg, Sep 30 '22


Please check whether the S3 IAM policy is actually attached to the IAM user whose access key is being used. You can also attach that policy to an IAM role on the EC2 instance and avoid using access keys altogether -- a sketch of that variant follows.
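
A sketch only, assuming an instance profile (IAM role carrying the policy from the first answer) is attached to the EC2 host: the registry's S3 driver then falls back to the role's credentials, so the access-key variables can simply be dropped from the original command:

# Same flags as the question, minus the static S3 keys;
# credentials come from the EC2 instance profile instead.
docker run -p 443:5000 \
  --link redis:redis \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_BUCKET=my-docker-registry \
  -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
  -v `pwd`/auth:/auth \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/my.com_chain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/my.com.key \
  -e REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR=redis \
  -e REGISTRY_REDIS_ADDR=redis:6379 \
  registry:2.5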

-- answered by Shankar P S, Sep 30 '22