- Automatic mutual TLS
- Before you begin
- Instructions
- Setup
- Start from PERMISSIVE mode
- Working with Sidecar Migration
- Lock down mutual TLS to STRICT
- Disable mutual TLS to plain text
- Destination rule overrides
- Cleanup
- Summary
- See also
Automatic mutual TLS
This task shows a simplified workflow for mutual TLS adoption.
With the Istio auto mutual TLS feature, you can adopt mutual TLS by only configuring authentication policies, without worrying about destination rules.
Istio tracks the server workloads that have migrated to Istio sidecars, and configures client sidecars to send mutual TLS traffic to those workloads automatically, while sending plain text traffic to workloads without sidecars. This allows you to adopt Istio mutual TLS incrementally with minimal manual configuration.
Before you begin
Understand Istio authentication policy and related mutual TLS authentication concepts.
Install Istio with the global.mtls.enabled option set to false and global.mtls.auto set to true. For example, using the demo configuration profile:
$ istioctl manifest apply --set profile=demo \
--set values.global.mtls.auto=true \
--set values.global.mtls.enabled=false
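To double-check what was installed, you can inspect the generated mesh configuration. With this version of Istio, the global.mtls.auto value surfaces as the enableAutoMtls field in the istio configmap; this is a verification sketch only, and the field name and location may vary between releases. You should see something like enableAutoMtls: true:
$ kubectl -n istio-system get configmap istio -o jsonpath='{.data.mesh}' | grep -i autoMtls
enableAutoMtls: true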
Instructions
Setup
Our examples deploy the httpbin service into three namespaces: full, partial, and legacy. Each represents a different phase of the Istio migration.
The full namespace contains server workloads that have finished the Istio migration. All deployments have the sidecar injected.
$ kubectl create ns full
$ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n full
$ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@) -n full
The partial namespace contains server workloads partially migrated to Istio. Only the migrated workload has the sidecar injected and is able to serve mutual TLS traffic.
$ kubectl create ns partial
$ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n partial
$ cat <<EOF | kubectl apply -n partial -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-nosidecar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
        version: nosidecar
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
EOF
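If you want to confirm which httpbin pods in the partial namespace carry a sidecar, one way (a verification sketch, not part of the original task) is to list the containers in each pod; the injected pod shows both httpbin and istio-proxy, while httpbin-nosidecar shows only httpbin:
$ kubectl get pods -n partial -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'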
The legacy namespace contains workloads without any Envoy sidecar.
$ kubectl create ns legacy
$ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n legacy
$ kubectl apply -f @samples/sleep/sleep.yaml@ -n legacy
Lastly, we deploy two sleep workloads, one with a sidecar and one without.
$ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@) -n full
$ kubectl apply -f @samples/sleep/sleep.yaml@ -n legacy
You can confirm the deployments in all namespaces.
$ kubectl get pods -n full
$ kubectl get pods -n partial
$ kubectl get pods -n legacy
NAME READY STATUS RESTARTS AGE
httpbin-dcd949489-5cndk 2/2 Running 0 39s
sleep-58d6644d44-gb55j 2/2 Running 0 38s
NAME READY STATUS RESTARTS AGE
httpbin-6f6fc94fb6-8d62h 1/1 Running 0 10s
httpbin-dcd949489-5fsbs 2/2 Running 0 12s
NAME READY STATUS RESTARTS AGE
httpbin-54f5bb4957-lzxlg 1/1 Running 0 6s
sleep-74564b477b-vb6h4 1/1 Running 0 4s
You should also verify that there is a default mesh authentication policy in the system, which you can do as follows:
$ kubectl get policies.authentication.istio.io --all-namespaces
$ kubectl get meshpolicies -o yaml | grep ' mode'
NAMESPACE NAME AGE
istio-system grafana-ports-mtls-disabled 2h
mode: PERMISSIVE
Last but not least, verify that there are no destination rules that apply to the example services. You can do this by checking the host: values of existing destination rules and making sure they do not match. For example:
$ kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"
host: istio-policy.istio-system.svc.cluster.local
host: istio-telemetry.istio-system.svc.cluster.local
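If you prefer a check scoped to the example services, you can filter the same command for the httpbin hosts; on a clean cluster it should print nothing (a sketch, assuming no other destination rules reference httpbin):
$ kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host: httpbin"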
You can verify the setup by sending an HTTP request with curl from any sleep pod in the full, partial, or legacy namespace to either httpbin.full, httpbin.partial, or httpbin.legacy. All requests should succeed with HTTP code 200.
For example, here is a command to check reachability from sleep.full to httpbin.full:
$ kubectl exec $(kubectl get pod -l app=sleep -n full -o jsonpath={.items..metadata.name}) -c sleep -n full -- curl http://httpbin.full:8000/headers -s -w "response %{http_code}\n" | egrep -o 'URI\=spiffe.*sa/[a-z]*|response.*$'
URI=spiffe://cluster.local/ns/full/sa/sleep
response 200
The SPIFFE URI shows the client identity from the X.509 certificate, which indicates that the traffic is sent over mutual TLS. If the traffic is in plain text, no client certificate is displayed.
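To see where this value comes from, you can drop the egrep filter and inspect the full set of headers that httpbin echoes back; the SPIFFE URI appears in the X-Forwarded-Client-Cert header added by the server-side sidecar (an optional check; the exact header fields depend on the proxy configuration):
$ kubectl exec $(kubectl get pod -l app=sleep -n full -o jsonpath={.items..metadata.name}) -c sleep -n full -- curl -s http://httpbin.full:8000/headers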
Start from PERMISSIVE mode
In this setup, we start with PERMISSIVE mode for all services in the mesh.
- All httpbin.full workloads and the workload with a sidecar for httpbin.partial are able to serve both mutual TLS traffic and plain text traffic.
- The workload without a sidecar for httpbin.partial and the workloads of httpbin.legacy can only serve plain text traffic.
Automatic mutual TLS configures the client, sleep.full, to send mutual TLS to the first type of workloads and plain text to the second type.
You can verify reachability as follows:
$ for from in "full" "legacy"; do for to in "full" "partial" "legacy"; do echo "sleep.${from} to httpbin.${to}";kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.${to}:8000/headers -s -w "response code: %{http_code}\n" | egrep -o 'URI\=spiffe.*sa/[a-z]*|response.*$'; echo -n "\n"; done; done
sleep.full to httpbin.full
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
sleep.full to httpbin.partial
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
sleep.full to httpbin.legacy
response code: 200
sleep.legacy to httpbin.full
response code: 200
sleep.legacy to httpbin.partial
response code: 200
sleep.legacy to httpbin.legacy
response code: 200
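Another way to inspect how the sidecars negotiate TLS for a given host is istioctl authn tls-check, which reports the server and client TLS settings together with the authentication policy and destination rule in effect. Note that the report is derived from policies and destination rules, so it may not fully reflect the automatic mutual TLS behavior; treat it as a supplementary check, sketched here with the sleep.full pod as the client:
$ istioctl authn tls-check $(kubectl get pod -l app=sleep -n full -o jsonpath={.items..metadata.name}).full httpbin.full.svc.cluster.local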
Working with Sidecar Migration
Requests to httpbin.partial can reach server workloads with or without a sidecar. Istio automatically configures the sleep.full client to initiate a mutual TLS connection to the workload with a sidecar.
$ for i in `seq 1 10`; do kubectl exec $(kubectl get pod -l app=sleep -n full -o jsonpath={.items..metadata.name}) -c sleep -n full -- curl http://httpbin.partial:8000/headers -s -w "response code: %{http_code}\n" | egrep -o 'URI\=spiffe.*sa/[a-z]*|response.*$'; echo -n "\n"; done
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
response code: 200
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
response code: 200
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
response code: 200
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
response code: 200
response code: 200
Without the automatic mutual TLS feature, you would have to track when the sidecar migration finishes, and then explicitly configure a destination rule to make the client send mutual TLS traffic to httpbin.full.
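For illustration only, such a manual override would be a destination rule along the lines of the sketch below (the name httpbin-manual-mtls is hypothetical, and you should not apply it as part of this task; the Destination rule overrides section later on this page walks through the same mechanism):
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin-manual-mtls"
spec:
  host: httpbin.full.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL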
Lock down mutual TLS to STRICT
Imagine now you need to lock down the httpbin.full service to only accept mutual TLS traffic. You can configure the authentication policy to STRICT:
$ cat <<EOF | kubectl apply -n full -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "httpbin"
spec:
  targets:
  - name: httpbin
  peers:
  - mtls: {}
EOF
All httpbin.full workloads and the workload with a sidecar for httpbin.partial can only serve mutual TLS traffic.
Now the requests from sleep.legacy to httpbin.full start to fail, since sleep.legacy can't send mutual TLS traffic. The client sleep.full, however, is automatically configured by auto mutual TLS to send mutual TLS requests, which return 200.
$ for from in "full" "legacy"; do for to in "full" "partial" "legacy"; do echo "sleep.${from} to httpbin.${to}";kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.${to}:8000/headers -s -w "response code: %{http_code}\n" | egrep -o 'URI\=spiffe.*sa/[a-z]*|response.*$'; echo -n "\n"; done; done
sleep.full to httpbin.full
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
sleep.full to httpbin.partial
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
sleep.full to httpbin.legacy
response code: 200
sleep.legacy to httpbin.full
response code: 000
command terminated with exit code 56
sleep.legacy to httpbin.partial
response code: 200
sleep.legacy to httpbin.legacy
response code: 200
Disable mutual TLS to plain text
If for some reason you want the service to explicitly be in plain text mode, you can configure the authentication policy for the service to use plain text:
$ cat <<EOF | kubectl apply -n full -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "httpbin"
spec:
  targets:
  - name: httpbin
EOF
In this case, since the service is in plain text mode, Istio automatically configures the client sidecars to send plain text traffic to avoid breakage.
$ for from in "full" "legacy"; do for to in "full" "partial" "legacy"; do echo "sleep.${from} to httpbin.${to}";kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.${to}:8000/headers -s -w "response code: %{http_code}\n" | egrep -o 'URI\=spiffe.*sa/[a-z]*|response.*$'; echo -n "\n"; done; done
sleep.full to httpbin.full
response code: 200
sleep.full to httpbin.partial
response code: 200
sleep.full to httpbin.legacy
response code: 200
sleep.legacy to httpbin.full
response code: 200
sleep.legacy to httpbin.partial
response code: 200
sleep.legacy to httpbin.legacy
response code: 200
All traffic is now in plain text.
Destination rule overrides
For backward compatibility, you can still use a destination rule to override the TLS configuration as before. When a destination rule has an explicit TLS configuration, it overrides the client sidecars' TLS configuration.
For example, you can configure a destination rule for httpbin.full to explicitly enable or disable mutual TLS:
$ cat <<EOF | kubectl apply -n full -f -
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin-full-mtls"
spec:
  host: httpbin.full.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF
Since in the previous step we already configured the authentication policy for httpbin.full to disable mutual TLS, we should see the traffic from sleep.full starting to fail: the destination rule forces the client sidecar to send mutual TLS, while the server now only accepts plain text.
$ for from in "full" "legacy"; do for to in "full" "partial" "legacy"; do echo "sleep.${from} to httpbin.${to}";kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.${to}:8000/headers -s -w "response code: %{http_code}\n" | egrep -o 'URI\=spiffe.*sa/[a-z]*|response.*$'; echo -n "\n"; done; done
sleep.full to httpbin.full
response code: 503
sleep.full to httpbin.partial
URI=spiffe://cluster.local/ns/full/sa/sleep
response code: 200
sleep.full to httpbin.legacy
response code: 200
sleep.legacy to httpbin.full
response code: 200
sleep.legacy to httpbin.partial
response code: 200
sleep.legacy to httpbin.legacy
response code: 200
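To recover from this state before moving on (optional, since the Cleanup step below removes the namespaces anyway), you can delete the destination rule override and the httpbin authentication policy so the clients fall back to the automatic behavior:
$ kubectl delete destinationrules.networking.istio.io httpbin-full-mtls -n full
$ kubectl delete policies.authentication.istio.io httpbin -n full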
Cleanup
$ kubectl delete ns full partial legacy
Summary
Automatic mutual TLS configures the client sidecar to send mutual TLS traffic by default between sidecars. You only need to configure the authentication policy.
As mentioned above, automatic mutual TLS is a mesh-wide Helm installation option. You have to re-deploy Istio to enable or disable the feature. When disabling the feature, if you already rely on it to automatically encrypt traffic, that traffic can fall back to plain text, which can affect your security posture, or the traffic can break if the service is already configured as STRICT to only accept mutual TLS traffic.
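For example, assuming the demo profile used earlier in this task, disabling the feature would mean re-running the installation with the option flipped:
$ istioctl manifest apply --set profile=demo \
--set values.global.mtls.auto=false \
--set values.global.mtls.enabled=false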
Currently, automatic mutual TLS is an Alpha stage feature; please be aware of the risk and the additional CPU cost of TLS encryption.
We're considering enabling this feature by default. Please send your feedback or report any issues you encounter when trying automatic mutual TLS via GitHub.
See also
DNS Certificate Management
Provision and manage DNS certificates in Istio.
Introducing the Istio v1beta1 Authorization Policy
Introduction, motivation and design principles for the Istio v1beta1 Authorization Policy.
Secure Webhook Management
A more secure way to manage Istio webhooks.
Multi-Mesh Deployments for Isolation and Boundary Protection
Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.
App Identity and Access Adapter
Using Istio to secure multi-cloud Kubernetes applications with zero code changes.
Change in Secret Discovery Service in Istio 1.3
Taking advantage of Kubernetes trustworthy JWTs to issue certificates for workload instances more securely.