Granting permissions within the namespace
Now that users are able to create their own namespace, we can configure a ClusterPolicy that grants them cluster-admin permissions within that namespace. To accomplish this, we'll generate a RoleBinding between the default ServiceAccount in that namespace and the cluster-admin ClusterRole. Using a RoleBinding in combination with a ClusterRole grants the subject the permissions of that ClusterRole, restricted to the specific namespace the RoleBinding is created in.
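For reference, the equivalent binding could be created by hand with kubectl; the namespace alice used below is purely illustrative:

# Bind the cluster-admin ClusterRole to the default ServiceAccount, scoped to one namespace
kubectl create rolebinding default-namespaced-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=alice:default \
  --namespace alice

Creating this binding manually for every new namespace doesn't scale, which is exactly what the policy below automates.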
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: create-cluster-admin-rolebindings
spec:
  background: false
  rules:
    - name: serviceaccount-namespaced-cluster-admin
      match:
        any:
          - resources:
              kinds:
                - Namespace
      preconditions:
        all:
          - key: "{{request.operation || 'BACKGROUND'}}"
            operator: AnyIn
            value:
              - CREATE
          - key: "{{request.userInfo.groups}}"
            operator: AnyIn
            value:
              - software-engineers
      generate:
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        name: default-namespaced-cluster-admin
        synchronize: true
        namespace: "{{request.object.metadata.name}}"
        data:
          subjects:
            - kind: ServiceAccount
              name: default
              namespace: "{{request.object.metadata.name}}"
          roleRef:
            kind: ClusterRole
            name: cluster-admin
            apiGroup: rbac.authorization.k8s.io
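Assuming the policy is saved as create-cluster-admin-rolebindings.yaml (the file name is our own choice), applying it and then creating a namespace as a member of the software-engineers group should trigger the generate rule:

# Apply the policy as a cluster administrator
kubectl apply -f create-cluster-admin-rolebindings.yaml

# As a user in the software-engineers group, create a namespace
kubectl create namespace alice

# The generated RoleBinding should now exist in the new namespace
kubectl get rolebinding default-namespaced-cluster-admin -n alice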
With this ClusterPolicy in place, a RoleBinding for the default ServiceAccount will automatically be created in each namespace as soon as the namespace is created. This means that the default ServiceAccount will have permission to do anything within that specific namespace.
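We can verify this, and the fact that the permissions stop at the namespace boundary, with kubectl auth can-i, again using alice as a stand-in for a real user's namespace:

# Allowed: the default ServiceAccount is cluster-admin within its own namespace
kubectl auth can-i create deployments -n alice \
  --as=system:serviceaccount:alice:default

# Denied: the RoleBinding does not reach beyond that namespace
kubectl auth can-i create deployments -n kube-system \
  --as=system:serviceaccount:alice:default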
A keen eye will have noticed that we have configured a Kubernetes-internal ServiceAccount rather than the "authenticated" user. The reason is that, although it is possible to mount the local ~/.kube directory into the DevPod, this complicates the authentication flow when using kubelogin, which many providers rely on to request a token for authenticating users against a cluster. Since users are limited to their own namespace, and each user will be the only one using the default ServiceAccount within that namespace, this level of auditing is sufficient for the purpose of this guide.
Configuring kubectl
We do need to create a ~/.kube/config file to be able to communicate with the Kubernetes API server from inside the IDE terminal. With a bit of configuration in the project's DevContainer definition, we can provide a shell script that generates this file automatically when the DevPod starts.
{
  "name": "Symfony API - Helm - Skaffold",
  "image": "mcr.microsoft.com/devcontainers/base:bullseye",
  "features": {
    "ghcr.io/devcontainers/features/kubectl-helm-minikube:1": {
      "version": "latest",
      "helm": "none",
      "minikube": "none"
    }
  },
  "containerEnv": {
    "REMOTE_USER": "${localEnv:USER}"
  },
  "postCreateCommand": "sh scripts/generate-in-cluster-kubeconfig.sh"
}
As the user inside the remote IDE server will be vscode, we keep track of the actual user by setting the REMOTE_USER environment variable based on the local USER environment variable. The accompanying shell script uses kubectl to generate the configuration and sets the namespace to the name of the user. We'll configure DevPod to use the username in a later step.
#!/bin/sh
# Read the token of the pod's ServiceAccount, mounted by Kubernetes into every container.
TOKEN=$(sudo cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Register the in-cluster API server endpoint and the ServiceAccount token as credentials.
kubectl config set-cluster in-cluster --server https://kubernetes.default.svc --insecure-skip-tls-verify=true
kubectl config set-credentials in-cluster --token "$TOKEN"

# Tie them together in a context that defaults to the user's own namespace.
kubectl config set-context in-cluster --user in-cluster --cluster in-cluster --namespace "$REMOTE_USER"
kubectl config use-context in-cluster
Note that we also configure the namespace for this user automatically, so appending -n username to every command is not needed in the IDE terminal.
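As a quick sanity check, the following commands can be run from the IDE terminal once the DevPod is up; the first should print the username (for example alice):

# Show the namespace the current context defaults to
kubectl config view --minify --output 'jsonpath={..namespace}'

# Lists pods in the user's own namespace, no -n flag required
kubectl get pods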