I am new to GitLab CI. I am trying to use https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml to deploy a simple test Django app to the Kubernetes cluster attached to my GitLab project, using a custom chart https://gitlab.com/aidamir/citest/tree/master/chart. Everything goes well, but at the very last moment kubectl prints an error message and the job fails. Here is the output of the pipeline:
Running with gitlab-runner 12.2.0 (a987417a)
on docker-auto-scale 72989761
Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0 ...
Running on runner-72989761-project-13952749-concurrent-0 via runner-72989761-srm-1568200144-ab3eb4d8...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/myporject/kubetest/.git/
Created fresh repository.
From https://gitlab.com/myproject/kubetest
* [new branch] master -> origin/master
Checking out 3efeaf21 as master...
Skipping Git submodules setup
Authenticating with credentials from job payload (GitLab Registry)
$ auto-deploy check_kube_domain
$ auto-deploy download_chart
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
"gitlab" has been added to your repositories
No requirements found in /builds/myproject/kubetest/chart/charts.
No requirements found in chart//charts.
$ auto-deploy ensure_namespace
NAME STATUS AGE
kubetest-13952749-production Active 46h
$ auto-deploy initialize_tiller
Checking Tiller...
Tiller is listening on localhost:44134
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
[debug] SERVER: "localhost:44134"
Kubernetes: &version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
$ auto-deploy create_secret
Create secret...
secret "gitlab-registry" deleted
secret/gitlab-registry replaced
$ auto-deploy deploy
secret "production-secret" deleted
secret/production-secret replaced
Deploying new release...
Release "production" has been upgraded.
LAST DEPLOYED: Wed Sep 11 11:12:21 2019
NAMESPACE: kubetest-13952749-production
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
production-djtest 1/1 1 1 46h
==> v1/Job
NAME COMPLETIONS DURATION AGE
djtest-update-static-auik5 0/1 3s 3s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-storage-pvc Bound nfs 10Gi RWX 3s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
djtest-update-static-auik5-zxd6m 0/1 ContainerCreating 0 3s
production-djtest-5bf5665c4f-n5g78 1/1 Running 0 46h
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
production-djtest ClusterIP 10.0.0.146 <none> 5000/TCP 46h
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace kubetest-13952749-production -l "app.kubernetes.io/name=djtest,app.kubernetes.io/instance=production" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
error: arguments in resource/name form must have a single resource and name
ERROR: Job failed: exit code 1
Please help me find the reason for this error message.
I did look at the auto-deploy script from the image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0. There is a variable that disables the rollout status check:
if [[ -z "$ROLLOUT_STATUS_DISABLED" ]]; then
  kubectl rollout status -n "$KUBE_NAMESPACE" -w "$ROLLOUT_RESOURCE_TYPE/$name"
fi
So setting

variables:
  ROLLOUT_STATUS_DISABLED: "true"

prevents the job from failing. But I still have no answer to why the script does not work with my custom chart. When I run the status-checking command from my laptop, it reports no errors:
kubectl rollout status -n kubetest-13952749-production -w "deployment/production-djtest"
deployment "production-djtest" successfully rolled out
I also found a complaint about a similar issue, https://gitlab.com/gitlab-com/support-forum/issues/4737, but there is no activity on that post.
Here is my .gitlab-ci.yml:
image: alpine:latest

variables:
  POSTGRES_ENABLED: "false"
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: ""  # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501

stages:
  - build
  - test
  - deploy  # dummy stage to follow the template guidelines
  - review
  - dast
  - staging
  - canary
  - production
  - incremental rollout 10%
  - incremental rollout 25%
  - incremental rollout 50%
  - incremental rollout 100%
  - performance
  - cleanup

include:
  - template: Jobs/Deploy.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml

variables:
  CI_APPLICATION_REPOSITORY: eu.gcr.io/myproject/django-test
error: arguments in resource/name form must have a single resource and name
That issue you linked to has "Closed (moved)" in its status because it was moved from issue 66016, which has what I believe is the real answer:
Please try adding the following to your .gitlab-ci.yml:

variables:
  ROLLOUT_RESOURCE_TYPE: deployment
Using just the Jobs/Deploy.gitlab-ci.yml omits the variables: block from Auto-DevOps.gitlab-ci.yml, which correctly sets that variable.
In your case, I think you just need to move that variables: block up to the top, since (afaik) one cannot have two top-level variables: blocks. I'm actually genuinely surprised your .gitlab-ci.yml passed validation.
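For illustration, a rough sketch of a single merged top-level variables: block, reusing the values already in your file (not an exact drop-in, adjust as needed):

variables:
  POSTGRES_ENABLED: "false"
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: ""  # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501
  CI_APPLICATION_REPOSITORY: eu.gcr.io/myproject/django-test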
Separately, if you haven't yet seen it, you can set the TRACE variable to switch auto-deploy into set -x mode, which is super, super helpful in seeing exactly what it is trying to do. I believe your command was trying to run rollout status /whatever-name, and with just a slash it doesn't know what kind of name that is.
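If it helps, a minimal sketch for turning that on (assuming any non-empty value is enough to trigger the script's set -x behaviour):

variables:
  TRACE: "true"

With tracing on, you should see the fully expanded kubectl rollout status command in the job log, so you can confirm whether the resource/name argument comes out as deployment/<name> or just /<name>.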