What I am trying to achieve: block all traffic to a service, keeping the configuration that handles this in the same namespace as the service.
Why: this is the first step in "locking down" a specific service to specific IPs/CIDRs.
I have a primary ingress GW called istio-ingressgateway, which works for services.
$ kubectl describe gw istio-ingressgateway -n istio-system
Name:         istio-ingressgateway
Namespace:    istio-system
Labels:       operator.istio.io/component=IngressGateways
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.5
              release=istio
Annotations:
API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2020-08-28T15:45:10Z
  Generation:          1
  Resource Version:    95438963
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-ingressgateway
  UID:                 ae5dd2d0-44a3-4c2b-a7ba-4b29c26fa0b9
Spec:
  Selector:
    App:    istio-ingressgateway
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Events:  <none>
I also have another "primary" GW, the K8s ingress GW, to support TLS (I thought I'd include this to be as explicit as possible):
$ kubectl describe gw istio-autogenerated-k8s-ingress -n istio-system
Name:         istio-autogenerated-k8s-ingress
Namespace:    istio-system
Labels:       app=istio-ingressgateway
              istio=ingressgateway
              operator.istio.io/component=IngressGateways
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.5
              release=istio
Annotations:
API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2020-08-28T15:45:56Z
  Generation:          2
  Resource Version:    95439499
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-autogenerated-k8s-ingress
  UID:                 edd46c17-9975-4089-95ff-a2414d40954a
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
    Hosts:
      *
    Port:
      Name:      https-default
      Number:    443
      Protocol:  HTTPS
    Tls:
      Credential Name:     ingress-cert
      Mode:                SIMPLE
      Private Key:         sds
      Server Certificate:  sds
Events:  <none>
I want to be able to create another GW in the namespace x and have an authorization policy attached to that GW.
If I create the authorization policy in the istio-system namespace, it comes back with RBAC: access denied, which is great - but that applies to all services using the primary GW:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: block-all
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        ipBlocks: ["0.0.0.0/0"]
What I currently have does not work; any pointers would be highly appreciated. The following are all created in the x namespace by applying kubectl apply -f files.yaml -n x:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    app: x-ingress
  name: x-gw
  labels:
    app: x-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - x.y.com
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - x.y.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
      credentialName: ingress-cert
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: x
  labels:
    app: x
spec:
  hosts:
  - x.y.com
  gateways:
  - x-gw
  http:
  - route:
    - destination:
        host: x
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: x-ingress-policy
spec:
  selector:
    matchLabels:
      app: x-ingress
  action: DENY
  rules:
  - from:
    - source:
        ipBlocks: ["0.0.0.0/0"]
The above should block all traffic to the GW, as it matches the CIDR range 0.0.0.0/0.
Am I entirely misunderstanding the concept of GWs/AuthorizationPolicies, or have I missed something?
Edit: I ended up creating another GW with the IP restriction applied to it, as classic load balancers on AWS do not preserve the client source IP.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: demo
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    - name: admin-ingressgateway
      enabled: true
      label:
        istio: admin-ingressgateway
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all-admin
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: admin-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["176.252.114.59/32"]
I also patched the ingress gateway Service so that the original client IP is preserved:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
I then used that gateway in my workload that I wanted to lock down.
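For reference, a minimal sketch of what such a per-workload Gateway could look like (admin-gw and admin.y.com are placeholder names, not from the actual setup):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: admin-gw
  namespace: x
spec:
  # select the locked-down, NLB-backed gateway instead of the default one
  selector:
    istio: admin-ingressgateway
  servers:
  - hosts:
    - admin.y.com
    port:
      name: http
      number: 80
      protocol: HTTP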
The client side Envoy and the server side Envoy establish a mutual TLS connection, and Istio forwards the traffic from the client side Envoy to the server side Envoy. The server side Envoy authorizes the request. If authorized, it forwards the traffic to the backend service through local TCP connections.
Istio Authorization Policy enables access control on workloads in the mesh. Authorization policy supports CUSTOM, DENY and ALLOW actions for access control.
ISTIO_MUTUAL: Secure connections to the upstream using mutual TLS by presenting client certificates for authentication. Compared to Mutual mode, this mode uses certificates generated automatically by Istio for mTLS authentication.
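For context, ISTIO_MUTUAL is set in a DestinationRule's trafficPolicy; a minimal sketch, assuming a service named x in namespace x:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: x
  namespace: x
spec:
  host: x
  trafficPolicy:
    tls:
      # the sidecar presents Istio-issued client certificates automatically
      mode: ISTIO_MUTUAL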
As far as I know, you should rather use AuthorizationPolicy in one of three ways. I have tried to make it work on a specific gateway with annotations like you did, but I couldn't get it to work.
For example, the following authorization policy denies all requests to workloads in namespace x:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: x
spec:
  {}
The following authorization policy denies all requests on the ingress gateway:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
The following authorization policy denies all requests on httpbin in the x namespace:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-service-x
  namespace: x
spec:
  selector:
    matchLabels:
      app: httpbin
Let's say you deny all requests in the x namespace and allow only GET requests for the httpbin service.
Then you would use this AuthorizationPolicy to deny all requests:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: x
spec:
  {}
And this AuthorizationPolicy to allow only GET requests:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "x-viewer"
namespace: x
spec:
selector:
matchLabels:
app: httpbin
rules:
- to:
- operation:
methods: ["GET"]
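A quick way to verify the combination, assuming the Istio sleep sample is also deployed in namespace x (the httpbin sample service listens on port 8000):

# GET matches the x-viewer ALLOW rule -> expect 200
kubectl exec deploy/sleep -n x -- curl -s -o /dev/null -w "%{http_code}\n" http://httpbin.x:8000/get
# POST matches no ALLOW rule, so deny-all applies -> expect 403
kubectl exec deploy/sleep -n x -- curl -s -o /dev/null -w "%{http_code}\n" -X POST http://httpbin.x:8000/post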
And there is the main issue, which is ipBlocks. There is a related GitHub issue about that.
As mentioned here by @incfly:
I guess the reason why it stops working when in a non-ingress pod is because the sourceIP attribute will not be the real client IP then.
According to https://github.com/istio/istio/issues/22341 (not done yet), this aims at providing better support without setting the Kubernetes externalTrafficPolicy to Local, and supports CIDR ranges as well.
I have tried this example from the Istio documentation to make it work, but it wasn't working for me, even after I changed externalTrafficPolicy. Then a workaround with an EnvoyFilter came from the Istio discuss thread mentioned above.
Answer provided by @hleal18 there:
Got an example working successfully using EnvoyFilters, specifically with the remote_ip condition applied on httpbin.
Sharing the manifest for reference.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: httpbin
  namespace: foo
spec:
  workloadSelector:
    labels:
      app: httpbin
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.rbac
        config:
          rules:
            action: ALLOW
            policies:
              "ip-permissions":
                permissions:
                - any: true
                principals:
                - remote_ip:
                    address_prefix: xxx.xxx.xx.xx
                    prefix_len: 32
I have tried the above EnvoyFilter on my test cluster and, as far as I can see, it's working.
Take a look at the steps I followed below.
1. I changed the externalTrafficPolicy with:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
2. I created namespace x with istio-injection enabled and deployed httpbin there:
kubectl create namespace x
kubectl label namespace x istio-injection=enabled
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/httpbin/httpbin.yaml -n x
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/httpbin/httpbin-gateway.yaml -n x
3. I created the EnvoyFilter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: httpbin
  namespace: x
spec:
  workloadSelector:
    labels:
      app: httpbin
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.rbac
        config:
          rules:
            action: ALLOW
            policies:
              "ip-permissions":
                permissions:
                - any: true
                principals:
                - remote_ip:
                    address_prefix: xx.xx.xx.xx
                    prefix_len: 32
address_prefix is the CLIENT_IP; these are the commands I used to get it:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
CLIENT_IP=$(curl "$INGRESS_HOST":"$INGRESS_PORT"/ip -s | grep "origin" | cut -d'"' -f 4) && echo "$CLIENT_IP"
4. I tested it with curl and my browser:
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
200
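To double-check the deny path, the same request sent from a machine whose IP does not match the remote_ip principal should be rejected by the RBAC filter:

# run from a client IP that is not in the ALLOW policy; expect 403
curl "$INGRESS_HOST":"$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"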
Let me know if you have any more questions; I might be able to help.