Dear Kubernetes gurus!
I have spun up a Kubernetes 1.4.1 cluster on manually created AWS hosts using the 'contrib' Ansible playbook (https://github.com/kubernetes/contrib/tree/master/ansible).
My problem is that Kubernetes doesn't attach EBS volumes to the minion hosts. If I define a pod as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka1
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: kafka1
          image: daniilyar/kafka
          ports:
            - containerPort: 9092
              name: clientconnct
              protocol: TCP
          volumeMounts:
            - mountPath: /kafka
              name: storage
      volumes:
        - name: storage
          awsElasticBlockStore:
            volumeID: vol-56676d83
            fsType: ext4
I get the following error in kubelet.log:
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-56676d83 /var/lib/kubelet/pods/db213783-9477-11e6-8aa9-12f3d1cdf81a/volumes/kubernetes.io~aws-ebs/storage [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-56676d83 does not exist
The EBS volume stays in the 'Available' state the whole time, so I am sure that Kubernetes never attaches the volume to the host at all and therefore has nothing to mount. I am 100% sure this is an issue in Kubernetes itself and not a permissions issue, because I can attach the same volume to this minion manually, from the minion itself, just fine:
$ aws ec2 --region us-east-1 attach-volume --volume-id vol-56676d83 --instance-id $(wget -q -O - http://instance-data/latest/meta-data/instance-id) --device /dev/sdc
{
    "AttachTime": "2016-10-18T15:02:41.672Z",
    "InstanceId": "i-603cfb50",
    "VolumeId": "vol-56676d83",
    "State": "attaching",
    "Device": "/dev/sdc"
}
Googling, hacking and trying older Kubernetes versions didn't help me solve this. Could anyone please point me to what else I could do to understand the problem so I can fix it? Any help is greatly appreciated.
Nobody helped me in the Kubernetes Slack channels, so after a day of pulling my hair out I found the solution myself:
To get a cluster installed by the 'contrib' Ansible playbook (https://github.com/kubernetes/contrib/tree/master/ansible) to mount EBS volumes properly, you need, besides the IAM role setup, to add the --cloud-provider=aws flag everywhere in your existing cluster: to all kubelets, to the apiserver, and to the controller manager.
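On hosts installed by that playbook, the flags live in the config files under /etc/kubernetes/ (the Fedora/RHEL-style layout the playbook uses; your paths and variable names may differ, so treat this as a sketch). Append the flag to whatever args are already there rather than replacing them:

# /etc/kubernetes/apiserver (on the master)
KUBE_API_ARGS="<your existing args> --cloud-provider=aws"

# /etc/kubernetes/controller-manager (on the master)
KUBE_CONTROLLER_MANAGER_ARGS="<your existing args> --cloud-provider=aws"

# /etc/kubernetes/kubelet (on every minion)
KUBELET_ARGS="<your existing args> --cloud-provider=aws"

Then restart the services so the flag takes effect (unit names may differ on your distro):

# on the master
sudo systemctl restart kube-apiserver kube-controller-manager
# on each minion
sudo systemctl restart kubelet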
Without the --cloud-provider=aws flag, Kubernetes gives you the unfriendly 'mount: special device xxx does not exist' error instead of reporting the real cause.
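As for the IAM part: with --cloud-provider=aws set, Kubernetes makes the EC2 attach/detach calls itself, so the instance role of whichever host issues them (the controller manager's master host and/or the kubelets, depending on your version and settings) needs EC2 volume permissions. A minimal sketch; the role name kubernetes-master and the policy name below are just placeholders, substitute your actual instance role:

# grant the minimum EC2 volume permissions to the instance role
# (role name is an example -- use the role your hosts actually run under)
aws iam put-role-policy \
  --role-name kubernetes-master \
  --policy-name k8s-ebs-volumes \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }]
  }'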