 

Istio routing between two pods

I'm trying to get into Istio on Kubernetes, but it seems I am missing either some fundamentals or I am doing things back to front. I am quite experienced with Kubernetes, but Istio and its VirtualService confuse me a bit.

I created two deployments (helloworld-v1/helloworld-v2). Both use the same image; the only difference is an environment variable, which makes the app report either version: "v1" or version: "v2". I am using a little test container I wrote which basically returns the headers the application received. A Kubernetes service named "helloworld" can reach both.
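
For reference, the v1 deployment looks roughly like this (image name, env variable name and container port are placeholders, not the exact values from my setup; the important parts are the version label, which the DestinationRule subsets below select on, and the app label the service selects on):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
      version: v1
  template:
    metadata:
      labels:
        app: helloworld   # assumed service selector label
        version: v1       # selected by the DestinationRule subset "v1"
    spec:
      containers:
      - name: helloworld
        image: my-header-echo:latest   # placeholder; v1 and v2 use the same image
        env:
        - name: VERSION                # assumed variable name; the app echoes it back
          value: "v1"
        ports:
        - containerPort: 3000          # matches the service targetPort shown in the answer below

The v2 deployment is identical apart from the name and version: v2.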

I created a VirtualService and a DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90
    - destination:
        host: helloworld
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
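
Both objects are applied with plain kubectl; assuming they are saved in a file (name arbitrary), they can be listed like any other resource once the Istio CRDs are installed:

kubectl -n demo apply -f helloworld-routing.yaml
kubectl -n demo get virtualservice,destinationrule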

According to the docs, not mentioning any gateway should make the VirtualService use the internal "mesh" gateway. Sidecar containers are successfully attached:

kubectl -n demo get all
NAME                                 READY     STATUS    RESTARTS   AGE
pod/curl-6657486bc6-w9x7d            2/2       Running   0          3h
pod/helloworld-v1-d4dbb89bd-mjw64    2/2       Running   0          6h
pod/helloworld-v2-6c86dfd5b6-ggkfk   2/2       Running   0          6h

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/helloworld   ClusterIP   10.43.184.153   <none>        80/TCP     6h

NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/curl            1         1         1            1           3h
deployment.apps/helloworld-v1   1         1         1            1           6h
deployment.apps/helloworld-v2   1         1         1            1           6h

NAME                                       DESIRED   CURRENT   READY     AGE
replicaset.apps/curl-6657486bc6            1         1         1         3h
replicaset.apps/helloworld-v1-d4dbb89bd    1         1         1         6h
replicaset.apps/helloworld-v2-6c86dfd5b6   1         1         1         6h
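
The sidecars shown above come from automatic injection; a minimal sketch of how that is usually enabled for the namespace (assuming the default sidecar injector shipped with Istio 1.0):

kubectl label namespace demo istio-injection=enabled
# pods created before the label existed have to be recreated to get the proxy injected
kubectl -n demo delete pod -l app=helloworld   # assumed label; the deployments recreate the pods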

Everything works quite fine when I access the application from "outside" (via the istio-ingressgateway); v2 is called once, v1 nine times:

curl --silent -H 'host: helloworld' http://localhost
{"host":"helloworld","user-agent":"curl/7.47.0","accept":"*/*","x-forwarded-for":"10.42.0.0","x-forwarded-proto":"http","x-envoy-internal":"true","x-request-id":"a6a2d903-360f-91a0-b96e-6458d9b00c28","x-envoy-decorator-operation":"helloworld:80/*","x-b3-traceid":"e36ef1ba2229177e","x-b3-spanid":"e36ef1ba2229177e","x-b3-sampled":"1","x-istio-attributes":"Cj0KF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEiISIGlzdGlvOi8vZGVtby9zZXJ2aWNlcy9oZWxsb3dvcmxkCj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFoZWxsb3dvcmxkLmRlbW8uc3ZjLmNsdXN0ZXIubG9jYWwKJwodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USBhIEZGVtbwooChhkZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWUSDBIKaGVsbG93b3JsZAo6ChNkZXN0aW5hdGlvbi5zZXJ2aWNlEiMSIWhlbGxvd29ybGQuZGVtby5zdmMuY2x1c3Rlci5sb2NhbApPCgpzb3VyY2UudWlkEkESP2t1YmVybmV0ZXM6Ly9pc3Rpby1pbmdyZXNzZ2F0ZXdheS01Y2NiODc3NmRjLXRyeDhsLmlzdGlvLXN5c3RlbQ==","content-length":"0","version":"v1"}
"version": "v1",
"version": "v1",
"version": "v2",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",

But as soon as I do the curl from within a pod (in this case just byrnedo/alpine-curl) against the service, things start to get confusing:

curl --silent -H 'host: helloworld' http://helloworld.demo.svc.cluster.local
{"host":"helloworld","user-agent":"curl/7.61.0","accept":"*/*","version":"v1"}
"version":"v2"
"version":"v2"
"version":"v1"
"version":"v1"
"version":"v2"
"version":"v2"
"version":"v1"
"version":"v2“
"version":"v1"

Not only am I missing all the Istio attributes (which I understand for service-to-service communication, because as I understand it they are set when the request first enters the mesh via the gateway), but the balance looks to me like the default 50:50 round-robin of a plain Kubernetes service.

What do I have to do to achieve the same 90/10 balance for inter-service communication? Do I have to create a second, "internal" gateway to use instead of the service FQDN? Did I miss a definition? Shouldn't calling a service FQDN from within a pod respect the VirtualService routing?

The Istio version used is 1.0.1, the Kubernetes version is v1.11.1.

UPDATE: I deployed the sleep pod as suggested, this time not relying on the auto-injection of the demo namespace but injecting the sidecar manually as described in the sleep sample:

kubectl -n demo get deployment sleep -o wide
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS          IMAGES                                     SELECTOR
sleep     1         1         1            1           2m        sleep,istio-proxy   tutum/curl,docker.io/istio/proxyv2:1.0.1   app=sleep

I also changed the VirtualService to a 0/100 split to see at first glance whether it works. Unfortunately this did not change much:

export SLEEP_POD=$(kubectl get -n demo pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user- agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"
asked Aug 30 '18 by sapien99
1 Answer

Found the solution: one of the prerequisites (which I had forgotten) is that proper routing requires named service ports, see https://istio.io/docs/setup/kubernetes/spec-requirements/.

Wrong:

spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 3000

Right:

spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000

After naming the port http, everything works like a charm.
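
For context, the complete Service then looks something like this (the selector is assumed, based on the deployment labels used in the question):

apiVersion: v1
kind: Service
metadata:
  name: helloworld
  namespace: demo
spec:
  selector:
    app: helloworld   # assumed pod label shared by both deployments
  ports:
  - name: http        # the named port Istio needs to detect the protocol
    port: 80
    protocol: TCP
    targetPort: 3000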

answered by sapien99