I'm new to Kubernetes and I'm struggling with a few errors. I want to create a Kubernetes cluster on my local system (Mac).
My deployment.yaml --
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sv-premier
spec:
  selector:
    matchLabels:
      app: sv-premier
  template:
    metadata:
      labels:
        app: sv-premier
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: gcp-key
      containers:
      - name: sv-premier
        image: gcr.io/proto/premiercore1:latest
        imagePullPolicy: Always
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600;done"]
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: imagepullsecretkey
I created the deployment with kubectl apply -f deployment.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
sv-premier-5cc8f599f6-9lrtq 1/1 Running 0 11s
kubectl describe pods sv-premier-5cc8f599f6-9lrtq
Name: sv-premier-5cc8f599f6-9lrtq
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Tue, 11 Feb 2020 19:04:21 +0530
Labels: app=sv-premier
pod-template-hash=5cc8f599f6
Annotations: <none>
Status: Running
IP: 10.1.0.54
IPs: <none>
Controlled By: ReplicaSet/sv-premier-5cc8f599f6
Containers:
sv-premier:
Container ID: docker://b8993b4fc43197947649c7409b37e6d381a8d4cbbe56e550bca83931747ddd3e
Image: gcr.io/proto/premiercore1:latest
Image ID: docker-pullable://gcr.io/proto/premiercore1@sha256:664778c72c3f79147c4c5b73914292a124009591f479a5e3acf42c444eb62860
Port: 4343/TCP
Host Port: 0/TCP
Command:
/bin/sh
Args:
-c
while true; do echo Done Deploying sv-premier; sleep 3600;done
State: Running
Started: Tue, 11 Feb 2020 19:04:24 +0530
Ready: True
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/google/key.json
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s4jgd (ro)
/var/secrets/google from google-cloud-key (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
google-cloud-key:
Type: Secret (a volume populated by a Secret)
SecretName: gcp-key
Optional: false
default-token-s4jgd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s4jgd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 67s default-scheduler Successfully assigned default/sv-premier-5cc8f599f6-9lrtq to docker-desktop
Normal Pulling 66s kubelet, docker-desktop Pulling image "gcr.io/proto/premiercore1:latest"
Normal Pulled 64s kubelet, docker-desktop Successfully pulled image "gcr.io/proto/premiercore1:latest"
Normal Created 64s kubelet, docker-desktop Created container sv-premier
Normal Started 64s kubelet, docker-desktop Started container sv-premier
Why am I getting this --
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Could somebody more experienced than me kindly help?
Those entries refer to the built-in node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints. The two taints aren't supposed to exist on the node at the same time: the not-ready taint is added when the node status is NotReady, while unreachable is added when the node's Ready status is Unknown.
You can use kubectl taint to remove taints. You can remove taints by key, key-value, or key-effect.
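For example, the three removal forms look like this (a sketch against the docker-desktop node from the output above; the trailing - means "remove"):

```shell
# Remove every taint with this key, regardless of value or effect
kubectl taint nodes docker-desktop node.kubernetes.io/unreachable-

# Remove only the taint matching this key and effect
kubectl taint nodes docker-desktop node.kubernetes.io/unreachable:NoExecute-

# Remove only the taint matching this key=value pair and effect
kubectl taint nodes docker-desktop dedicated=special-user:NoSchedule-
```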
Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a node to repel a set of pods. Tolerations are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints.
What is node disk pressure? It means, as the name suggests, that the disks attached to the node are under pressure. You're unlikely to encounter node disk pressure, as there are measures built into Kubernetes to avoid it, but it does happen from time to time.
Note: Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready with tolerationSeconds=300 unless the pod configuration provided by the user already has a toleration for node.kubernetes.io/not-ready. Likewise it adds a toleration for node.kubernetes.io/unreachable with tolerationSeconds=300 unless the pod configuration provided by the user already has a toleration for node.kubernetes.io/unreachable.
These automatically-added tolerations ensure that the default pod behavior of remaining bound for 5 minutes after one of these problems is detected is maintained.
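If you want a different grace period, you can declare the toleration yourself in the pod spec and Kubernetes will then skip adding the default one (a sketch; the 60-second value is arbitrary):

```yaml
spec:
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60   # evict after 1 minute instead of the default 5
```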
Complete details are in the Kubernetes documentation on taints and tolerations.
The following taints are built in:
- node.kubernetes.io/not-ready: Node is not ready. This corresponds to the NodeCondition Ready being "False".
- node.kubernetes.io/unreachable: Node is unreachable from the node controller. This corresponds to the NodeCondition Ready being "Unknown".
More as below:
- node.kubernetes.io/out-of-disk: Node becomes out of disk.
- node.kubernetes.io/memory-pressure: Node has memory pressure.
- node.kubernetes.io/disk-pressure: Node has disk pressure.
- node.kubernetes.io/network-unavailable: Node's network is unavailable.
- node.kubernetes.io/unschedulable: Node is unschedulable.
- node.cloudprovider.kubernetes.io/uninitialized: When the kubelet is started with an "external" cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
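You can check which of these taints are currently set on your node (a sketch against the docker-desktop node from the output above):

```shell
# Show the taints currently set on the node, if any
kubectl describe node docker-desktop | grep Taints

# Or print them as structured data
kubectl get node docker-desktop -o jsonpath='{.spec.taints}'
```

On a healthy local cluster the first command typically prints "Taints: <none>", which is consistent with the pod in the question running fine: the tolerations you see are defaults, not a sign of a problem.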