I'm using Azure DevOps to handle PBIs, repos, PRs, and builds, but all my infrastructure, including Kubernetes, is managed on AWS.
There's no documentation, nor a "right and easy way", for deploying to AWS EKS using Azure DevOps tasks.
I found this solution, and it's a good one, but it would be awesome to know how you resolved it, or whether there are other approaches.
Note that an Amazon EKS cluster set up this way is publicly accessible, which may not suit all architectures. For a private EKS cluster, refer to the Azure DevOps documentation on self-hosted Linux agents. To use the AWS Toolkit for Azure DevOps to access AWS services, you need an AWS account and AWS credentials.
After some research and trial and error, I found another way to do it, without messing around with shell scripts.
You just need to apply the following to Kubernetes. It creates a ServiceAccount and binds it to a custom Role; that Role has permission to create/delete Deployments and Pods (tweak it if you also need permissions on Services).
deploy-robot-conf.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-robot
automountServiceAccountToken: false
---
apiVersion: v1
kind: Secret
metadata:
  name: deploy-robot-secret
  annotations:
    kubernetes.io/service-account.name: deploy-robot
type: kubernetes.io/service-account-token
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deploy-robot-role
  namespace: default
rules: # ## Customize these to meet your requirements ##
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: global-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: deploy-robot
  namespace: default
roleRef:
  kind: Role
  name: deploy-robot-role
  apiGroup: rbac.authorization.k8s.io
This gives the minimum permissions needed for Azure DevOps to be able to deploy to the cluster.
Note: Please tweak the rules in the Role resource to meet your needs, for instance to add permissions on Services resources.
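To load the manifest into the cluster, something like the following should work (assuming the file is saved as deploy-robot-conf.yaml, as above, and that your current kubeconfig context points at the EKS cluster):

```shell
# Apply the ServiceAccount, Secret, Role and RoleBinding to the cluster
kubectl apply -f deploy-robot-conf.yaml

# Optional sanity check: confirm the service account can create Deployments
kubectl auth can-i create deployments -n default \
  --as=system:serviceaccount:default:deploy-robot
```

The `kubectl auth can-i` check is a quick way to verify the RoleBinding took effect before wiring anything into Azure DevOps.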
Then go to your release and create a Kubernetes Service Connection:
Fill in the fields and follow the steps required to get your secret from the service account; remember that it is deploy-robot if you didn't change the YAML file.
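Fetching the secret can be sketched with kubectl, assuming the names from the YAML above:

```shell
# The Kubernetes service connection form accepts the whole Secret as JSON
kubectl get secret deploy-robot-secret -n default -o json

# Or inspect just the bearer token (it is stored base64-encoded in the Secret)
kubectl get secret deploy-robot-secret -n default -o jsonpath='{.data.token}' | base64 --decode
```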
And then just use your Kubernetes service connection in the deployment tasks:
Another option would be to use kubeconfig-based authentication, where the kubeconfig file can be obtained with the following AWS CLI command:
aws eks --region region update-kubeconfig --name cluster_name --kubeconfig ~/.kube/AzureDevOpsConfig
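In a pipeline, that could look roughly like the sketch below. It assumes the AWS Toolkit for Azure DevOps is installed (its AWSShellScript@1 task runs a script with AWS credentials injected); the service connection name, region, and cluster name are placeholders you would replace with your own:

```yaml
steps:
- task: AWSShellScript@1
  displayName: 'Deploy to EKS'
  inputs:
    awsCredentials: 'my-aws-connection'   # hypothetical AWS service connection
    regionName: 'us-east-1'               # placeholder region
    scriptType: inline
    inlineScript: |
      # Generate a kubeconfig for the cluster, then deploy the manifests
      aws eks update-kubeconfig --name my-cluster --kubeconfig $(Pipeline.Workspace)/kubeconfig
      kubectl --kubeconfig $(Pipeline.Workspace)/kubeconfig apply -f k8s/
```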