- Overview
- Project setup
- Setup Kubernetes and Istio
- Install and configure Gatekeeper
- Enforcing structural policies
- Cleanup
## Overview

This repo contains a set of example policies for enforcing service mesh structure. The policies are managed by OPA Gatekeeper and enforce production-friendly Istio behaviors.
## Project setup

- Install the Google Cloud SDK.
- Create a Google Cloud project (with billing enabled).
- Enable the Kubernetes Engine API:

  ```shell
  gcloud services enable container.googleapis.com
  ```
## Setup Kubernetes and Istio

- Create a GKE cluster:

  ```shell
  gcloud container clusters create [CLUSTER-NAME] \
    --cluster-version=latest \
    --machine-type=n1-standard-2
  ```
- Grab the cluster credentials so you can run `kubectl` commands:

  ```shell
  gcloud container clusters get-credentials [CLUSTER-NAME]
  ```
- Create a `cluster-admin` role binding so you can deploy Istio and Gatekeeper later:

  ```shell
  kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)
  ```
- Download and unpack a recent version of Istio (e.g. 1.3.3). Note that the archive unpacks into a directory named `istio-<version>`:

  ```shell
  curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.3.3 sh -
  cd istio-1.3.3
  ```
- Create the `istio-system` Namespace:

  ```shell
  kubectl create ns istio-system
  ```
- Use `helm` to install the Istio CRDs:

  ```shell
  helm template install/kubernetes/helm/istio-init \
    --name istio-init \
    --namespace istio-system | kubectl apply -f -
  ```
- Use `helm` to install the Istio control plane:

  ```shell
  helm template install/kubernetes/helm/istio \
    --name istio \
    --namespace istio-system \
    --set kiali.enabled=true \
    --set grafana.enabled=true \
    --set tracing.enabled=true | kubectl apply -f -
  ```
## Install and configure Gatekeeper

Refer to the OPA Gatekeeper repo for docs and additional background on `Constraint` and `ConstraintTemplate` objects.
- Install the Gatekeeper controller:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
  ```
- Configure Gatekeeper to sync selected objects into its cache. This is:
  - Required for Constraints that use `namespaceSelector` to match objects
  - Required for multi-object policies that evaluate existing cluster- or namespace-scoped objects
  - Required for auditing existing resources

  ```shell
  kubectl apply -f gatekeeper-config.yaml
  ```
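The synced kinds are declared in a Gatekeeper `Config` object. The repo's `gatekeeper-config.yaml` may list different kinds; a minimal sketch (the Istio kinds shown are assumptions based on the policies below) looks like:

```yaml
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: gatekeeper-system
spec:
  sync:
    syncOnly:
      # Istio kinds assumed for these example policies
      - group: networking.istio.io
        version: v1alpha3
        kind: VirtualService
      - group: authentication.istio.io
        version: v1alpha1
        kind: Policy
```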
## Enforcing structural policies

This repo contains five example policies, defined in `templates/` and `constraints/`:
### Allowed service port names

Checks `Service` objects in Namespaces labeled with `istio-injection: enabled`, and throws a violation if ports aren't named using Istio's conventions.
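For reference, Istio's convention is port names of the form `<protocol>[-<suffix>]` (for example `http`, `http-web`, `grpc-api`). A hypothetical `Service` illustrating one compliant and one violating port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web               # hypothetical Service
  namespace: default      # assumes this Namespace has istio-injection: enabled
spec:
  selector:
    app: web
  ports:
    - name: http-web      # compliant: protocol prefix "http"
      port: 80
    - name: web-port      # violation: does not start with a known protocol
      port: 8080
```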
Upload the ConstraintTemplate and Constraint:

```shell
kubectl apply -f templates/port-name-template.yaml
kubectl apply -f constraints/port-name-constraint.yaml
```
Test the Constraint with the sample object:

```shell
kubectl apply -f sample-objects/bad-port-name.yaml
```
This Constraint sets `enforcementAction: dryrun`, so the object should be admitted to the cluster and appear as an audit violation in the `status` field:

```shell
kubectl get allowedserviceportname.constraints.gatekeeper.sh port-name-constraint -o yaml
```
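Audit results appear under `status.violations` on the Constraint. An illustrative shape (the names shown are placeholders, not actual output):

```yaml
status:
  violations:
    - enforcementAction: dryrun
      kind: Service
      name: bad-port-name     # placeholder object name
      namespace: default      # placeholder namespace
      message: "..."          # violation message produced by the ConstraintTemplate
```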
### Unique VirtualService hostnames

Checks incoming `VirtualService` objects against existing `VirtualService` objects, and throws a violation if there are hostname/URI match collisions.
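As an illustration of the kind of collision this policy catches, consider two hypothetical `VirtualService` objects (names and hosts are made up) that claim the same host and an overlapping URI match:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-vs          # hypothetical: already in the cluster
spec:
  hosts:
    - shop.example.com
  http:
    - match:
        - uri:
            prefix: /cart
      route:
        - destination:
            host: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: checkout-vs          # hypothetical: incoming object
spec:
  hosts:
    - shop.example.com       # same host as frontend-vs
  http:
    - match:
        - uri:
            prefix: /cart    # overlapping URI match -> violation
      route:
        - destination:
            host: checkout
```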
Upload the ConstraintTemplate and Constraint:

```shell
kubectl apply -f templates/vs-same-host-template.yaml
kubectl apply -f constraints/vs-same-host-constraint.yaml
```
Test the Constraint with the sample object:

```shell
kubectl apply -f sample-objects/bad-vs-host.yaml
```
This Constraint sets `enforcementAction: dryrun`, so the object should be admitted to the cluster and appear as an audit violation in the `status` field:

```shell
kubectl get uniquevservicehostname.constraints.gatekeeper.sh unique-vs-host-constraint -o yaml
```
### Mismatched mTLS

Checks incoming `DestinationRule` objects and compares their mTLS settings against `Policy` object mTLS settings, and throws a violation if they don't match.
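A hypothetical sketch of such a mismatch (object and service names are made up): an authentication `Policy` requires mTLS for a service while a `DestinationRule` tells clients not to use it, which would break traffic to that service.

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: frontend-mtls        # hypothetical
spec:
  targets:
    - name: frontend
  peers:
    - mtls: {}               # server side requires mTLS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend-dr          # hypothetical
spec:
  host: frontend
  trafficPolicy:
    tls:
      mode: DISABLE          # mismatch: clients will not present mTLS certs
```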
Upload the ConstraintTemplate and Constraint:

```shell
kubectl apply -f templates/mismatched-mtls-template.yaml
kubectl apply -f constraints/mismatched-mtls-constraint.yaml
```
Test the Constraint with the sample objects:

```shell
kubectl apply -f sample-objects/mismatched-policy.yaml
kubectl apply -f sample-objects/mismatched-dr.yaml
```
This Constraint sets `enforcementAction: dryrun`, so the objects should be admitted to the cluster and appear as an audit violation in the `status` field:

```shell
kubectl get mismatchedmtls.constraints.gatekeeper.sh mismatched-mtls-constraint -o yaml
```
### No unauthenticated access

Checks `ServiceRoleBinding` objects and throws a violation if they are set to allow unauthenticated access.
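In Istio's RBAC model, a subject of `user: "*"` matches any caller, including unauthenticated ones. A hypothetical binding (names are made up) that this policy would reject:

```yaml
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-everyone        # hypothetical
  namespace: default
spec:
  subjects:
    - user: "*"              # "*" allows any caller, including unauthenticated
  roleRef:
    kind: ServiceRole
    name: service-viewer     # hypothetical ServiceRole
```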
Upload the ConstraintTemplate and Constraint:

```shell
kubectl apply -f templates/source-all-template.yaml
kubectl apply -f constraints/source-all-constraint.yaml
```
Test the Constraint with the sample object:

```shell
kubectl apply -f sample-objects/bad-role-binding.yaml
```
This Constraint sets `enforcementAction: deny`, so the object should not be admitted to the cluster, and an error message should be returned.
### Strict mTLS only

Checks `Policy` objects and throws a violation if they attempt to disable mTLS for a specific service.
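A hypothetical sketch of the kind of `Policy` this guards against (names are made up): a per-service authentication Policy that omits the `peers` section, which turns off peer (mTLS) authentication for that service.

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: frontend-no-mtls     # hypothetical
  namespace: default
spec:
  targets:
    - name: frontend         # applies only to this service
  # no "peers" section: peer (mTLS) authentication is disabled for frontend
```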
Apply a bad Policy sample object:

```shell
kubectl apply -f sample-objects/bad-policy-1.yaml
```
Upload the ConstraintTemplate and Constraint:

```shell
kubectl apply -f templates/policy-strict-template.yaml
kubectl apply -f constraints/policy-strict-constraint.yaml
```
Test the Constraint with another sample object:

```shell
kubectl apply -f sample-objects/bad-policy-2.yaml
```
This Constraint sets `enforcementAction: deny`, so `bad-policy-2.yaml` should not be admitted to the cluster, and an error message should be returned. Because a pre-existing object (`bad-policy-1.yaml`) now violates the Constraint, you can also check the `status` field to see that violation:

```shell
kubectl get policystrictonly.constraints.gatekeeper.sh policy-strict-only -o yaml
```
## Cleanup

```shell
gcloud container clusters delete [CLUSTER-NAME]
```