
This blog post provides a comprehensive guide on how to utilize Kubernetes service accounts and their OIDC tokens to establish secure communication between two Kubernetes clusters, referred to as the “upstream” and “downstream” clusters. Imagine you want to deploy or edit an application inside one cluster from another one. Tools like ArgoCD support this pattern by running the ArgoCD components in one cluster while deploying the actual workloads to the downstream cluster. The last section of this blog provides a detailed walkthrough on how to best implement this setup in ArgoCD.


Understanding OpenID Connect (OIDC)


OIDC, or OpenID Connect, is a protocol that builds on OAuth2 and offers a standardized way to identify clients. It involves an authority creating signed identity tokens, which can then be verified by third parties using the authority’s publicly available OIDC metadata and public signing keys.


In Kubernetes, service accounts use OIDC. These accounts are represented by identity tokens that the Kubernetes API-server verifies, thus granting the service accounts access to the Kubernetes APIs. Moreover, external services can use the identity tokens to authenticate whether a request originated from a specific Kubernetes cluster, and to gain additional information such as the namespace and service account name.


Decoding a service account token, which gets injected at /var/run/secrets/kubernetes.io/serviceaccount/token in a pod, shows what information is available:


{
  "aud": ["kubernetes", "gardener"],
  "exp": 1693292880,
  "iat": 1661756880,
  "iss": "https://api.cluster.project.gardener.cloud",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {
      "name": "test-pod",
      "uid": "b38f5a1e-87c3-4009-b2c6-755d83c4283d"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "97c400e9-fd0c-4d6d-a456-79c4fe27ac39"
    },
    "warnafter": 1661760487
  },
  "nbf": 1661756880,
  "sub": "system:serviceaccount:default:default"
}
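If you want to inspect such a token yourself, the payload is simply a base64url-encoded JSON document. Here is a minimal sketch, run from inside a pod and assuming jq is available in the image:

# Decode the payload (second part) of the mounted service account token.
PAYLOAD=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token | cut -d '.' -f2 | tr '_-' '/+')
# Re-add the base64 padding that JWTs strip, then decode and pretty-print.
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="${PAYLOAD}==" ;;
  3) PAYLOAD="${PAYLOAD}=" ;;
esac
echo "$PAYLOAD" | base64 -d | jq .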

The interesting information is:



  • Issuer (iss): who created the identity token

  • Subject (sub): whom the identity token represents

  • Audience (aud): for whom these tokens are intended


Not only the upstream cluster’s own Kubernetes API server can validate these identity tokens; other Kubernetes API servers can validate them as well. This is commonly called identity federation: a workload in the upstream cluster presents its service account token to the downstream API server, which verifies it against the upstream issuer’s published signing keys.



Exposing and Adjusting the OIDC Metadata


For an external service, like the downstream Kubernetes cluster, to validate identity tokens, it needs to be able to query the public OIDC metadata. By default, Kubernetes exposes the OIDC metadata under <API-server url>/.well-known/openid-configuration and the associated public signing keys under <API-server url>/openid/v1/jwks. Depending on the upstream Kubernetes API-server configuration, these endpoints require authentication or, if your API-server runs in a corporate network, are not accessible from the outside at all. If your OIDC metadata is already available anonymously over the internet, you can continue with Configuring Workload Identity Federation.
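A quick way to check whether your endpoints are already reachable anonymously is to query them from outside the cluster, for example:

# Replace <api_server_url> with your upstream API server URL; -k skips TLS verification for a quick test.
curl -sk <api_server_url>/.well-known/openid-configuration
curl -sk <api_server_url>/openid/v1/jwks
# A 401 or 403 response means the endpoints still require authentication.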


There are multiple options to ensure that an external service can retrieve them without authentication:


  • Expose the upstream API server directly and allow anonymous access to the two discovery endpoints.

  • Expose only the discovery endpoints through a publicly reachable proxy.

  • Host a copy of the OIDC metadata on a public static page, for example in a Google Cloud Storage bucket.

We use the third option as our API-servers are hosted in an internal network and couldn’t be exposed either directly or via a proxy. To set this up, the OIDC metadata needs to be published on a public static page. An easy way to do this is to host it in a public Google Cloud Storage bucket, as the files are then directly consumable without additional infrastructure.


Before uploading the configuration, you need to update the OIDC issuer URL in the cluster, because clients expect the issuer URL to match the URL they retrieve the configuration from. In Kubernetes this is done by setting the API-server flag --service-account-issuer <issuer-url> to the desired issuer URL; in Gardener you can set .spec.kubernetes.kubeAPIServer.serviceAccountConfig.issuer in the shoot specification. For Google Cloud Storage the issuer URL is https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>, and this URL can then be set as issuer in the cluster.
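As an illustration, here is a sketch of setting the issuer on a Gardener shoot with kubectl patch; the garden namespace and shoot name are placeholders for your environment:

# Run against the garden cluster; adjust project and shoot names to your setup.
kubectl --namespace garden-<project> patch shoot <our_cluster> --type merge \
  -p '{"spec":{"kubernetes":{"kubeAPIServer":{"serviceAccountConfig":{"issuer":"https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>"}}}}}'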


After the issuer is configured, you can retrieve the OIDC metadata with kubectl get --raw /.well-known/openid-configuration. It should look like this:


{
  "issuer": "https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>",
  "jwks_uri": "https://api.cluster.project.gardener.cloud:443/openid/v1/jwks",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}

Before uploading it to the bucket, modify the jwks_uri to point to the bucket URL where the signing keys will be stored. The final openid-configuration should then look like this:


{
  "issuer": "https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>",
  "jwks_uri": "https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>/openid/v1/jwks",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}

It can then be uploaded to the bucket at https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>/.well-known/openid-configuration.


Afterwards, the signing keys (JWKS) can be retrieved with kubectl get --raw /openid/v1/jwks and uploaded unmodified to https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>/openid/v1/jwks. Note that whenever the signing keys are rotated in the Kubernetes API-server, the new keys need to be uploaded again; otherwise the OIDC federation will break.
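Putting it together, here is a sketch of exporting, rewriting, and uploading both documents; jq and gsutil are assumptions, and any equivalent tooling works:

# Export the discovery document and rewrite jwks_uri to point at the bucket.
kubectl get --raw /.well-known/openid-configuration \
  | jq '.jwks_uri = "https://storage.googleapis.com/<public_oidc_bucket>/<our_cluster>/openid/v1/jwks"' \
  > openid-configuration
# Export the signing keys unmodified.
kubectl get --raw /openid/v1/jwks > jwks
# Upload both to the public bucket; repeat the jwks upload after every key rotation.
gsutil cp openid-configuration gs://<public_oidc_bucket>/<our_cluster>/.well-known/openid-configuration
gsutil cp jwks gs://<public_oidc_bucket>/<our_cluster>/openid/v1/jwks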


The OIDC configuration is now publicly available and can be consumed by the OIDC federation service of the downstream Kubernetes cluster.



Setting Up OIDC Trust in Kubernetes


The downstream Kubernetes API server must trust the upstream Kubernetes API server as an OIDC identity provider. For this, configure the trust in the downstream Kubernetes API server using the --oidc-* flags:


--oidc-issuer-url=<upstream_issuer_url>
--oidc-client-id=downstream-cluster
--oidc-username-claim=sub
--oidc-username-prefix=upstream-cluster-oidc:


  • issuer-url: Unique identifier for the OIDC identity provider (the Kubernetes upstream API server issuer).

  • client-id: Unique identifier for the Kubernetes cluster (for example your Kubernetes downstream API server URL).

  • username-claim: Identity token attribute to use as a username. It should uniquely represent the workload to allow granular authorization. The Kubernetes subject with system:serviceaccount:<namespace>:<service account name> is a good fit.

  • username-prefix: The prefix used for all identities issued by this OIDC provider. A unique prefix prevents unwanted impersonation of users inside your Kubernetes cluster.


After configuring the OIDC trust inside your Kubernetes API server, workloads can use the injected identity token to authenticate against this Kubernetes API server. Kubernetes extracts the user information from the identity token and uses the mapped Kubernetes username to determine authorization.



Note: Multiple OIDC issuers, e.g., separate ones for user accounts and automation, cannot be configured in the Kubernetes API server. However, the Gardener project provides a “Webhook Authenticator for dynamic registration of OpenID Connect providers”, which you can deploy inside a generic Kubernetes cluster.


If you have a Gardener Kubernetes cluster, the OIDC webhook authenticator is also available as a managed shoot service, and you can enable it by adding .spec.extensions[].type: shoot-oidc-service to your shoot configuration YAML.
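For example, a sketch of enabling the extension on an existing shoot with a JSON patch; this assumes the shoot already has an extensions list, otherwise add the list first:

# Run against the garden cluster; project and shoot names are placeholders.
kubectl --namespace garden-<project> patch shoot <our_cluster> --type json \
  -p '[{"op": "add", "path": "/spec/extensions/-", "value": {"type": "shoot-oidc-service"}}]'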


With the OIDC Webhook Authenticator, you can create an OpenIDConnect resource to establish the trust relationship.


apiVersion: authentication.gardener.cloud/v1alpha1
kind: OpenIDConnect
metadata:
  name: upstream-cluster-oidc
spec:
  issuerURL: <upstream_issuer_url>
  clientID: downstream-cluster
  usernameClaim: sub
  usernamePrefix: "upstream-cluster-oidc:"
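To get a feeling for what the workloads will present, you can mint a short-lived token with the expected audience from the upstream cluster (kubectl 1.24 or newer) and decode it as shown earlier:

# Issue a token for the default service account in the default namespace with the downstream audience.
kubectl create token default --namespace default --audience downstream-cluster --duration 10m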


Providing Identities Access in Kubernetes


In Kubernetes, authorization via roles and role bindings is required before any user can perform actions. Following the principle of least privilege, roles should grant as few permissions as possible.


For this example, we only allow our identity to manage deployments and to read pods in the demo namespace.


apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: kubernetes-oidc-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "watch", "list", "create", "update", "delete"]

After you create the role, bind it to the mapped workload user. The username consists of the username-prefix followed by the extracted username-claim attribute from the identity token.


apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-oidc-binding
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-oidc-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "upstream-cluster-oidc:system:serviceaccount:<namespace>:<service account name>"

The workload identity now has permission to perform actions inside the specific Kubernetes namespace.
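You can verify the binding from the downstream cluster by impersonating the mapped user (this requires impersonation rights), for example:

# Should return "yes":
kubectl auth can-i update deployments --namespace demo \
  --as "upstream-cluster-oidc:system:serviceaccount:<namespace>:<service account name>"
# Should return "no", since the role does not allow deleting pods:
kubectl auth can-i delete pods --namespace demo \
  --as "upstream-cluster-oidc:system:serviceaccount:<namespace>:<service account name>"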


Using a Workload Identity Token


We have now configured all required pieces, and workloads in our upstream cluster can authenticate against the downstream API server and perform operations.


To test this, we can deploy the following pod. It is important that the audience of the projected service account token matches the clientID of the OpenIDConnect configuration in the downstream cluster.


apiVersion: v1
kind: Pod
metadata:
  name: openid-test-pod
spec:
  containers:
  - name: k8s
    image: alpine/k8s:1.26.5
    command: ["sleep"]
    args: ["3600"]
    volumeMounts:
    - name: federated-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: federated-token
    projected:
      sources:
      - serviceAccountToken:
          path: federated-token
          audience: downstream-cluster

After deploying this pod, we can exec into it with kubectl exec -it openid-test-pod -- bash and run kubectl commands that perform actions in our downstream cluster. Besides the token, we also need the URL of the downstream API server and its public certificate authority data as a file. Both can be specified manually on the command line to execute our first command against the downstream cluster:


kubectl \
--server <downstream_api_server_url> \
--token $(cat /var/run/secrets/tokens/federated-token) \
--certificate-authority ca.crt \
--namespace demo \
get pods
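In case you are wondering where ca.crt comes from: one way to obtain the downstream cluster’s certificate authority data is to extract it from an existing kubeconfig for that cluster, for example:

# Run wherever a kubeconfig for the downstream cluster is the current context.
kubectl config view --raw --minify \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > ca.crt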

With this command succeeding, we are now able to interact with the downstream cluster from the upstream cluster. However, specifying all these options on the command line every time is tedious, so we can optimize this by crafting our own KUBECONFIG that contains all the important information and uses an exec credential provider to read the token from the file on the fly:


apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64 certificate authority data>
    server: <downstream_api_server_url>
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    namespace: default
    user: oidc
  name: my-context
current-context: my-context
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: "client.authentication.k8s.io/v1"
      interactiveMode: Never
      command: "bash"
      args:
      - "-c"
      - |
        set -e -o pipefail
        IDTOKEN=$(cat /var/run/secrets/tokens/federated-token)

        # return token back to the credential plugin
        cat << EOF
        {
          "apiVersion": "client.authentication.k8s.io/v1",
          "kind": "ExecCredential",
          "status": {
            "token": "$IDTOKEN"
          }
        }
        EOF

With this KUBECONFIG either placed at ~/.kube/config or its path exported in the KUBECONFIG environment variable, you can run any kubectl command and it will be executed against the federated downstream cluster. The earlier kubectl command then becomes:


kubectl \
--namespace demo \
get pods

Using the Workload Identity Tokens in ArgoCD


A common application of workload identity tokens can be found in ArgoCD, where they enable the management of deployments on a downstream cluster from a centralized ArgoCD cluster.


To implement this in ArgoCD, we need to make slight adjustments to the standard Kubernetes OIDC.


Integrating the Workload Identity Token into ArgoCD Deployment


To leverage workload identity federation in ArgoCD, an identity token must be incorporated into the argocd-application-controller (which performs updates and manages the downstream cluster) and the argocd-server (which displays logs on the UI).


The following patches inject a projected service account token at /var/run/secrets/tokens/federated-token with a target audience of downstream-cluster into the respective workloads:


# argocd-application-controller-token-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: argocd-application-controller
        volumeMounts:
        - name: federated-token
          mountPath: /var/run/secrets/tokens
      volumes:
      - name: federated-token
        projected:
          sources:
          - serviceAccountToken:
              path: federated-token
              audience: downstream-cluster

# argocd-server-token-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: argocd-server
        volumeMounts:
        - name: federated-token
          mountPath: /var/run/secrets/tokens
      volumes:
      - name: federated-token
        projected:
          sources:
          - serviceAccountToken:
              path: federated-token
              audience: downstream-cluster

You can apply these patches to the standard ArgoCD application controller StatefulSet and server Deployment using the following commands:


kubectl patch statefulset argocd-application-controller --patch-file argocd-application-controller-token-patch.yaml
kubectl patch deployment argocd-server --patch-file argocd-server-token-patch.yaml
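After the patches are applied, you can check that the projected token is actually mounted, for example:

# The argocd namespace is assumed; adjust if your installation differs.
kubectl --namespace argocd exec statefulset/argocd-application-controller -- \
  ls -l /var/run/secrets/tokens/federated-token
kubectl --namespace argocd exec deployment/argocd-server -- \
  ls -l /var/run/secrets/tokens/federated-token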

Next, in your downstream cluster, you need to grant the two identities associated with these workload service accounts access to your resources. This can be done with role bindings like the following:


apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-controller-binding
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-oidc-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "argocd-cluster-oidc:system:serviceaccount:argocd:argocd-application-controller"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-server-binding
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-oidc-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "argocd-cluster-oidc:system:serviceaccount:argocd:argocd-server"

Adding the Cluster to ArgoCD


To allow ArgoCD to deploy to our downstream cluster, we need to provide the connection details in the form of a special secret. In addition to the cluster’s name, API server URL, and the public certificate authority data of the downstream cluster, it also includes instructions on where to obtain the token. Despite being classified as a secret, it does not contain any confidential data and can therefore be safely committed to your version control system.


A complete secret, which retrieves the token from /var/run/secrets/tokens/federated-token, will look like this:


apiVersion: v1
kind: Secret
metadata:
  name: downstream-cluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: downstream-cluster
  server: https://downstream-cluster.example.com
  config: |
    {
      "execProviderConfig": {
        "command": "bash",
        "args": [
          "-c",
          "echo -n '{\"apiVersion\":\"client.authentication.k8s.io/v1\",\"kind\":\"ExecCredential\",\"status\":{\"token\":\"'; cat /var/run/secrets/tokens/federated-token; echo -n '\"}}'"
        ],
        "apiVersion": "client.authentication.k8s.io/v1"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate authority data>"
      }
    }
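After committing and applying the secret to the ArgoCD namespace, the new cluster should show up and become reachable. A quick check, assuming the argocd CLI is installed and logged in:

kubectl --namespace argocd apply -f downstream-cluster-secret.yaml
# List the clusters known to ArgoCD; the downstream cluster should appear with its server URL.
argocd cluster list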


Note: Once the ArgoCD pull request #13476 has been approved and merged, the execProviderConfig can be simplified to:


{
  "execProviderConfig": {
    "command": "argocd-k8s-auth",
    "args": [
      "token-file",
      "--file",
      "/var/run/secrets/tokens/federated-token"
    ],
    "apiVersion": "client.authentication.k8s.io/v1"
  },
  "tlsClientConfig": {
    "insecure": false,
    "caData": "<base64 encoded certificate authority data>"
  }
}


Summary


In conclusion, this blog post has taken you through a comprehensive guide on how to leverage Kubernetes service accounts and their OIDC tokens to establish secure communication between two Kubernetes clusters, known as “upstream” and “downstream.” We’ve covered the essence of OpenID Connect (OIDC), the procedure for exposing and adjusting the OIDC Metadata, and how to set up OIDC trust in Kubernetes. We’ve also shown you how to provide identities access in Kubernetes, use a workload identity token, and even optimize the process by crafting your own KUBECONFIG. This method enables seamless interaction between clusters, thereby eliminating the need for copying long-lived credentials around. We hope you find this guide useful in your Kubernetes journey, and encourage you to share your thoughts in the comments below. Happy hacking!

