 

kubernetes + coreos cluster - replacing certificates

I have a coreos kubernetes cluster, which I started by following this article:

kubernetes coreos cluster on AWS

TLDR;

> kube-aws init
> kube-aws render
> kube-aws up

Everything worked well, and I had a Kubernetes CoreOS cluster on AWS. The article includes this warning:

PRODUCTION NOTE: the TLS keys and certificates generated by kube-aws should not be used to deploy a production Kubernetes cluster. Each component certificate is only valid for 90 days, while the CA is valid for 365 days. If deploying a production Kubernetes cluster, consider establishing PKI independently of this tool first.

I therefore wanted to replace the default certificates, so I followed this article:

coreos certificates

TLDR;

  1. Created the self-signed CA certificate and key: ca.pem, ca-key.pem
  2. Created the certificates for the controller: apiserver.pem, apiserver-key.pem
  3. Replaced the certificates on the controller with the ones created above, and rebooted the controller
  4. Created worker certificates, replaced the certificates on the workers, and rebooted them
  5. Configured kubectl to use the new certificates I created, and also configured the context and user
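
Steps 1 and 2 above can be sketched with openssl. This is a minimal sketch, not the full CoreOS procedure: file names and the "/CN=kube-ca" subject follow the CoreOS guide, but the key sizes and validity periods here are illustrative, and a real apiserver certificate additionally needs subjectAltName entries (via an openssl.cnf) for your cluster's DNS names and service IPs:

```shell
# Sketch of steps 1-2, using the file names from the CoreOS guide.
# NOTE: a real apiserver cert needs subjectAltName entries (via an
# openssl.cnf) for your cluster's DNS names and service IPs.
set -e
cd "$(mktemp -d)"

# Step 1: self-signed CA
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 \
  -out ca.pem -subj "/CN=kube-ca"

# Step 2: controller (apiserver) key and CA-signed certificate
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver"
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out apiserver.pem -days 365

# Sanity check: the new cert chains to the new CA
openssl verify -CAfile ca.pem apiserver.pem
```

The final `openssl verify` is the same chain check used further down to confirm which CA signed each certificate.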

Now I'm getting a communication error between kubectl and the cluster, complaining about the certificate:

Unable to connect to the server: x509: certificate signed by unknown authority

I also tried using a signed certificate for kubectl that points to the cluster's DNS name (I set up a DNS record for the cluster).

How do I make kubectl communicate with my cluster?

Thanks in advance

EDIT:

My ~/.kube/config looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/Yariv/Development/workspace/bugeez/bugeez-kubernetes/credentials/ca2.pem
    server: https://kubernetes.bugeez.io
  name: bugeez
contexts:
- context:
    cluster: bugeez
    user: bugeez-admin
  name: bugeez-system
current-context: bugeez-system
kind: Config
preferences: {}
users:
- name: bugeez-admin
  user:
    client-certificate: /Users/Yariv/Development/workspace/bugeez/bugeez-kubernetes/credentials/admin2.pem
    client-key: /Users/Yariv/Development/workspace/bugeez/bugeez-kubernetes/credentials/admin-key2.pem

EDIT:

All my certificates are signed by ca2.pem; I validated this by running:

openssl verify -CAfile ca2.pem <certificate-name>

EDIT:

I think the cause of the error is this: when I replace the keys on the controller and workers, cloud-config seems to overwrite my new keys with the old ones. How do I replace the keys and also update cloud-config to match?

Asked Jul 17 '16 by Yariv Katz

2 Answers

An alternative solution that worked for me was to start a new cluster and use custom certificates from the outset, never relying on the default temporary credentials.

Following the same tutorial that you used, I made the following changes:

> kube-aws init
> kube-aws render

Before running kube-aws up, I created the certificates by following the tutorial. The only issue is that the tutorial is geared toward creating new certificates for an existing cluster, so the following changes are necessary:

  • This line:

        $ openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

    needs to be replaced by:

        $ openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem

    (dropping -subj, so openssl prompts you for the subject fields instead).

  • In the openssl.cnf file, remove the lines that define the IPs of the master host and the load balancer, since we don't know yet what they will be. The final openssl.cnf should look something like this:

openssl.cnf

[req]
...
[req_distinguished_name]
[ v3_req ]
...
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = mydomain.net
IP.1 = ${K8S_SERVICE_IP} # 10.3.0.1
IP.2 = ${MASTER_IP} # 10.0.0.50
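
As a sketch of how that openssl.cnf is then used to issue the apiserver certificate: the guide's example values 10.3.0.1 and 10.0.0.50 are substituted for the variables here, the elided [req] and [ v3_req ] contents are filled in with plausible minimal settings, and -subj is kept so the sketch runs non-interactively (the answer above drops it in the interactive workflow):

```shell
# Sketch: issue the apiserver cert with SANs from an openssl.cnf like the
# one above. IPs are the guide's example values; adjust for your cluster.
set -e
cd "$(mktemp -d)"
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = mydomain.net
IP.1 = 10.3.0.1
IP.2 = 10.0.0.50
EOF

# CA (as in the earlier steps)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 \
  -out ca.pem -subj "/CN=kube-ca"

# Apiserver key, CSR with the SAN extensions, and CA-signed cert
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out apiserver.pem -days 365 \
  -extensions v3_req -extfile openssl.cnf

# Confirm the SANs made it into the issued certificate
openssl x509 -noout -text -in apiserver.pem | grep "DNS:kubernetes"
```

Passing -extensions v3_req to `openssl x509 -req` is what actually copies the subjectAltName entries into the issued certificate; without it, the SANs in the CSR are silently dropped.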

I also used the same worker certificate for all the worker nodes.

After the certificates are in place, run kube-aws up.

I hope this helps you get off the ground.

Answered Oct 08 '22 by ygesher

If the keys are indeed being overwritten by your old ones, you will need to update the CloudFormation template to use new userdata containing the new keys:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html

Answered Oct 08 '22 by Rob