Istio service mesh
Use your K8sGateway proxy as the ingress gateway to control and secure traffic that enters your service mesh.
About service mesh
A service mesh is a dedicated infrastructure layer that you add your apps to so that they can communicate securely across cloud networks. With a service mesh, you can solve problems such as service identity, mutual TLS communication, consistent L7 network telemetry gathering, service resilience, secure traffic routing between services across clusters, and policy enforcement, such as enforcing quotas or rate limiting requests. To learn more about the benefits of using a service mesh, see What is a service mesh in Solo.io’s Gloo Mesh Enterprise documentation.
About Istio
The open source project Istio is the leading service mesh implementation that offers powerful features to secure, control, connect, and monitor cloud-native, distributed applications. Istio is designed for workloads that run in one or more Kubernetes clusters, but you can also extend your service mesh to include virtual machines and other endpoints that are hosted outside your cluster. The key benefits of Istio include:
- Automatic load balancing for HTTP, gRPC, WebSocket, MongoDB, and TCP traffic
- Secure TLS encryption for service-to-service communication with identity-based authentication and authorization
- Advanced routing and traffic management policies, such as retries, failovers, and fault injection
- Fine-grained access control and quotas
- Automatic logs, metrics, and traces for traffic in the service mesh
About the Istio integration
K8sGateway comes with an Istio integration that allows you to configure your gateway proxy with an Istio sidecar. The Istio sidecar uses mutual TLS (mTLS) to prove its identity and to secure the connection between your gateway and the services in your Istio service mesh. In addition, you can control and secure the traffic that enters the mesh by applying all the advanced routing, traffic management, security, and resiliency capabilities that K8sGateway offers.
Before you begin
- Follow the Get started guide to install K8sGateway, set up a gateway resource, and deploy the httpbin sample app.
- Get the external address of the gateway and save it in an environment variable.
  ```shell
  export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-system gloo-proxy-http -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
  echo $INGRESS_GW_ADDRESS
  ```
  If your cluster does not assign an external address to the gateway service, port-forward the gateway proxy instead and send requests to `localhost:8080`.
  ```shell
  kubectl port-forward deployment/gloo-proxy-http -n gloo-system 8080:8080
  ```
Set up an Istio service mesh
Use Solo.io’s Gloo Mesh Enterprise product to install a managed Istio version by using the built-in Istio lifecycle manager, or manually install and manage your own Istio installation.
Gloo Mesh Enterprise is a service mesh management plane that is based on hardened, open-source projects like Envoy and Istio. With Gloo Mesh, you can unify the configuration, operation, and visibility of service-to-service connectivity across your distributed applications. These apps can run in different virtual machines (VMs) or Kubernetes clusters on premises or in various cloud providers, and even in different service meshes.
- Managed Istio: Follow the Gloo Mesh Enterprise get started guide to quickly install a managed Solo distribution of Istio by using the built-in Istio lifecycle manager.
- Manual installation: Choose between the following options to set up Istio:
  - Manually install a Solo distribution of Istio. The Solo distribution of Istio is a hardened Istio enterprise image that maintains n-4 support for CVEs and other security fixes.
  - Install an open source distribution of Istio by following the Istio documentation.
Enable the Istio integration
Upgrade your K8sGateway installation to enable the Istio integration.
- Get the name of the istiod service. Depending on how you set up Istio, you might see a revisionless service name (`istiod`) or a service name with a revision, such as `istiod-1-21`.
  ```shell
  kubectl get services -n istio-system
  ```
  Example output:
  ```
  NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                 AGE
  istiod-1-21   ClusterIP   10.102.24.31   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP   3h49m
  ```
- Derive the Kubernetes service address for your istiod deployment. The service address uses the format `<service-name>.<namespace>.svc:15012`. For example, if your service name is `istiod-1-21`, the full service address is `istiod-1-21.istio-system.svc:15012`.
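You can assemble the address in a shell variable so that you can reuse it in later steps. This is a minimal sketch; the `ISTIOD_ADDRESS` variable name is illustrative, and you must replace the example service name with the one that you found in the previous step.

```shell
# Sketch: assemble the istiod discovery address from the service name and
# namespace. Replace istiod-1-21 with the service name from your cluster.
SERVICE_NAME="istiod-1-21"
NAMESPACE="istio-system"
export ISTIOD_ADDRESS="${SERVICE_NAME}.${NAMESPACE}.svc:15012"
echo "$ISTIOD_ADDRESS"
# prints istiod-1-21.istio-system.svc:15012
```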
- Get the Helm values for your current K8sGateway installation.
  ```shell
  helm get values gloo-gateway -n gloo-system -o yaml > gloo-gateway.yaml
  open gloo-gateway.yaml
  ```
- Add the following values to the Helm values file. Make sure that you change the `istioProxyContainer` values to the service address and cluster name of your Istio installation.
  ```yaml
  global:
    istioIntegration:
      enableAutoMtls: true
      enabled: true
    istioSDS:
      enabled: true
  kubeGateway:
    enabled: true
    gatewayParameters:
      glooGateway:
        istio:
          istioProxyContainer:
            istioDiscoveryAddress: istiod-1-21.istio-system.svc:15012
            istioMetaClusterId: mycluster
            istioMetaMeshId: mycluster
  ```

  | Setting | Description |
  | --- | --- |
  | `istioDiscoveryAddress` | The address of the istiod service. If omitted, `istiod.istio-system.svc:15012` is used. |
  | `istioMetaClusterId` and `istioMetaMeshId` | The name of the cluster where K8sGateway is installed. |
- Upgrade your K8sGateway installation.
  ```shell
  helm upgrade -n gloo-system gloo-gateway gloo/gloo \
    -f gloo-gateway.yaml \
    --version=1.18.0-beta34
  ```
- Verify that your `gloo-proxy-http` pod is restarted with three containers: `gateway-proxy`, `istio-proxy`, and `sds`.
  ```shell
  kubectl get pods -n gloo-system | grep gloo-proxy-http
  ```
  Example output:
  ```
  gloo-proxy-http-f7cd596b7-tv5z7   3/3     Running   0          3h31m
  ```
- Optional: Review the GatewayParameters resource and verify that the `istioDiscoveryAddress`, `istioMetaClusterId`, and `istioMetaMeshId` are set to the values from your Helm chart.
  ```shell
  kubectl get gatewayparameters gloo-gateway -n gloo-system -o yaml
  ```
  Example output:
  ```yaml
  apiVersion: gateway.gloo.solo.io/v1alpha1
  kind: GatewayParameters
  metadata:
    annotations:
      meta.helm.sh/release-name: gloo-gateway
      meta.helm.sh/release-namespace: gloo-system
    ...
  spec:
    kube:
      deployment:
        replicas: 1
      ...
      istio:
        istioProxyContainer:
          image:
            pullPolicy: IfNotPresent
            registry: docker.io/istio
            repository: proxyv2
            tag: 1.22.0
          istioDiscoveryAddress: istiod-1-21.istio-system.svc:15012
          istioMetaClusterId: mycluster
          istioMetaMeshId: mycluster
          logLevel: warning
      podTemplate:
        extraLabels:
          gloo: kube-gateway
      ...
  ```
- Optional: Review the Settings resource and verify that `appendXForwardedHost`, `enableAutoMtls`, and `enableIntegration` are all set to `true`.
  ```shell
  kubectl get settings default -n gloo-system -o yaml
  ```
  Example output:
  ```yaml
  apiVersion: gloo.solo.io/v1
  kind: Settings
  metadata:
    annotations:
      meta.helm.sh/release-name: gloo-gateway
      meta.helm.sh/release-namespace: gloo-system
  spec:
    consoleOptions:
      apiExplorerEnabled: true
      readOnly: false
    discovery:
      fdsMode: WHITELIST
    discoveryNamespace: gloo-system
    gloo:
      ...
    istioOptions:
      appendXForwardedHost: true
      enableAutoMtls: true
      enableIntegration: true
    ...
  ```
Set up mTLS routing to httpbin
- Label the httpbin namespace for Istio sidecar injection.
  ```shell
  export REVISION=$(kubectl get pod -l app=istiod -n istio-system -o jsonpath='{.items[0].metadata.labels.istio\.io/rev}')
  echo $REVISION
  kubectl label ns httpbin istio.io/rev=$REVISION --overwrite=true
  ```
- Perform a rollout restart for the httpbin deployment so that an Istio sidecar is automatically added to the httpbin app.
  ```shell
  kubectl rollout restart deployment httpbin -n httpbin
  ```
- Verify that the httpbin app comes up with a fourth container, the Istio sidecar.
  ```shell
  kubectl get pods -n httpbin
  ```
  Example output:
  ```
  NAME                      READY   STATUS    RESTARTS   AGE
  httpbin-f46cc8b9b-f4wbm   4/4     Running   0          10s
  ```
- Send a request to the httpbin app. Verify that you get back a 200 HTTP response code and that an `x-forwarded-client-cert` header is returned. The presence of this header indicates that the connection from the gateway to the httpbin app is now encrypted with mutual TLS.
  ```shell
  curl -vik http://$INGRESS_GW_ADDRESS:8080/headers -H "host: www.example.com:8080"
  ```
  If you port-forwarded the gateway proxy instead:
  ```shell
  curl -vik localhost:8080/headers -H "host: www.example.com"
  ```
  Example output:
  ```json
  {
    "headers": {
      "Accept": [ "*/*" ],
      "Host": [ "www.example.com:8080" ],
      "User-Agent": [ "curl/7.77.0" ],
      "X-B3-Sampled": [ "0" ],
      "X-B3-Spanid": [ "92744e97e79d8f22" ],
      "X-B3-Traceid": [ "8189f0a6c4e3582792744e97e79d8f22" ],
      "X-Forwarded-Client-Cert": [ "By=spiffe://gloo-edge-docs-mgt/ns/httpbin/sa/httpbin;Hash=3a57f9d8fddea59614b4ade84fcc186edeffb47794c06608068a3553e811bdfe;Subject=\"\";URI=spiffe://gloo-edge-docs-mgt/ns/gloo-system/sa/gloo-proxy-http" ],
      "X-Forwarded-Proto": [ "http" ],
      "X-Request-Id": [ "7f1d6e38-3bf7-44fd-8298-a77c34e5b865" ]
    }
  }
  ```
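The `x-forwarded-client-cert` (XFCC) header packs several key-value elements into one semicolon-separated string. A quick way to inspect it is to split it on semicolons. The following sketch uses an illustrative header value, not output from your cluster.

```shell
# Sketch: split a sample XFCC header value into its individual elements.
# The value below is illustrative, not captured from a live gateway.
XFCC='By=spiffe://mycluster/ns/httpbin/sa/httpbin;Hash=3a57f9d8;Subject="";URI=spiffe://mycluster/ns/gloo-system/sa/gloo-proxy-http'
echo "$XFCC" | tr ';' '\n'
```

In this output, the `By` element holds the identity of the receiving workload (httpbin) and the `URI` element holds the identity of the calling gateway proxy, which lets you confirm which workloads took part in the mTLS handshake.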
Exclude a service from mTLS
You can exclude a service from mTLS communication with the gateway proxy by adding the `disableIstioAutoMtls` option to the Upstream resource that represents your service.
- Create an Upstream resource that represents the httpbin app and add the `disableIstioAutoMtls: true` option to it. This option excludes the httpbin Upstream from communicating with the gateway proxy via mTLS.
  ```shell
  kubectl apply -f- <<EOF
  apiVersion: gloo.solo.io/v1
  kind: Upstream
  metadata:
    name: httpbin
    namespace: gloo-system
  spec:
    disableIstioAutoMtls: true
    kube:
      serviceName: httpbin
      serviceNamespace: httpbin
      servicePort: 8000
  EOF
  ```
- Create an HTTPRoute resource that routes traffic to the httpbin Upstream that you created.
  ```shell
  kubectl apply -f- <<EOF
  apiVersion: gateway.networking.k8s.io/v1beta1
  kind: HTTPRoute
  metadata:
    name: exclude-automtls
    namespace: gloo-system
  spec:
    parentRefs:
    - name: http
      namespace: gloo-system
    hostnames:
    - disable-automtls.example
    rules:
    - backendRefs:
      - name: httpbin
        kind: Upstream
        group: gloo.solo.io
  EOF
  ```
- Send a request to the httpbin app on the `disable-automtls.example` domain. Verify that you do not get back the `x-forwarded-client-cert` header.
  ```shell
  curl -vik http://$INGRESS_GW_ADDRESS:8080/headers \
    -H "host: disable-automtls.example:8080"
  ```
  If you port-forwarded the gateway proxy instead:
  ```shell
  curl -vik localhost:8080/headers -H "host: disable-automtls.example"
  ```
  Example output:
  ```json
  {
    "headers": {
      "Accept": [ "*/*" ],
      "Host": [ "disable-automtls.example:8080" ],
      "User-Agent": [ "curl/7.77.0" ],
      "X-Forwarded-Proto": [ "http" ],
      "X-Request-Id": [ "47c4dcc8-551b-4c93-8aa3-1cd1e15b137c" ]
    }
  }
  ```
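If you want to script this check, you can grep a captured response body for the header name. This sketch runs against an illustrative response string rather than a live gateway; in practice, you would capture the body with `curl -s` first.

```shell
# Sketch: check a captured response body for the mTLS client-cert header.
# RESPONSE is an illustrative body, not output from a live cluster.
RESPONSE='{"headers":{"Accept":["*/*"],"X-Forwarded-Proto":["http"]}}'
if echo "$RESPONSE" | grep -qi 'x-forwarded-client-cert'; then
  echo "mTLS header present"
else
  echo "mTLS header absent"
fi
# prints: mTLS header absent
```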
- Repeat the request to the httpbin app on the `www.example.com` domain that is enabled for mTLS. Verify that you continue to see the `x-forwarded-client-cert` header.
  ```shell
  curl -vik http://$INGRESS_GW_ADDRESS:8080/headers \
    -H "host: www.example.com:8080"
  ```
  If you port-forwarded the gateway proxy instead:
  ```shell
  curl -vik localhost:8080/headers -H "host: www.example.com"
  ```
  Example output:
  ```json
  {
    "headers": {
      "Accept": [ "*/*" ],
      "Host": [ "www.example.com:8080" ],
      "User-Agent": [ "curl/7.77.0" ],
      "X-Forwarded-Client-Cert": [ "By=spiffe://gloo-edge-docs-mgt/ns/httpbin/sa/httpbin;Hash=3a57f9d8fddea59614b4ade84fcc186edeffb47794c06608068a3553e811bdfe;Subject=\"\";URI=spiffe://gloo-edge-docs-mgt/ns/gloo-system/sa/gloo-proxy-http" ],
      "X-Forwarded-Proto": [ "http" ],
      "X-Request-Id": [ "7f1d6e38-3bf7-44fd-8298-a77c34e5b865" ]
    }
  }
  ```
Cleanup
You can remove the resources that you created in this guide.
- Follow the Uninstall guide in the Gloo Mesh Enterprise documentation to remove Gloo Mesh Enterprise.
- Follow the upgrade guide to upgrade your K8sGateway Helm installation values. Remove the Helm values that you added as part of this guide.
- Remove the Istio sidecar from the httpbin app.
  - Remove the Istio label from the httpbin namespace.
    ```shell
    kubectl label ns httpbin istio.io/rev-
    ```
  - Perform a rollout restart for the httpbin deployment.
    ```shell
    kubectl rollout restart deployment httpbin -n httpbin
    ```
  - Verify that the Istio sidecar container is removed.
    ```shell
    kubectl get pods -n httpbin
    ```
    Example output:
    ```
    NAME                       READY   STATUS    RESTARTS   AGE
    httpbin-7d4965fb6d-mslx2   3/3     Running   0          6s
    ```
- Remove the Upstream and HTTPRoute resources that you used to exclude a service from mTLS.
  ```shell
  kubectl delete upstream httpbin -n gloo-system
  kubectl delete httproute exclude-automtls -n gloo-system
  ```