
Using External Secrets with Crossplane & ArgoCD

30.9.2024 | 15 minutes of reading time

Most Crossplane providers need to authenticate themselves against cloud infrastructure providers. But how do we store these Secrets in a GitOps fashion? And if external secret stores are a great way of doing this, how do we successfully integrate them with ArgoCD and Crossplane?

Crossplane & ArgoCD – blog series

1. From Classic CI/CD to GitOps with ArgoCD & Crossplane
2. Bootstrapping Crossplane with ArgoCD
3. Going full GitOps with Crossplane & ArgoCD
4. Using External Secrets with Crossplane & ArgoCD

The ultimate GitOps Kryptonite: Secrets

Did we really go full GitOps with Crossplane & ArgoCD as described in the last post? Nearly, but there's a tiny bit missing: to get our Crossplane AWS Provider working, we need to create a Kubernetes Secret containing our AWS credentials. The question is: who creates this Secret? Currently this is done by our CI/CD pipeline, which resembles a Push pattern as described here. Isn't there a more elegant solution to the problem, one that adheres further to a GitOps-style approach?

I stumbled upon this great post where one quote brings the issue right to the point:

At first, you feel progress [with GitOps] is inevitable, and success awaits a few commits around the corner, then you hit the ultimate GitOps foil: secrets.

After reading through lots of "How to manage Secrets with GitOps" articles (like this, this and this), I found that there's currently no widely accepted way of doing it. But there are some recommendations. E.g. checking Secrets into Git (although encrypted) using Sealed Secrets or SOPS & KSOPS might seem like the easiest solution at first. But these approaches have their own caveats in the long term. Think of multiple secrets defined in multiple projects used by multiple teams all over your Git repositories - and now do a secret or key rotation...

The TL;DR of most (recent) articles and GitHub discussions I distilled for myself is: use an external secret store and connect it to your ArgoCD-managed cluster. With an external secret store you get key rotation, support for serving secrets as symbolic references, usage audits and so on. Even in the case of a secret or key compromise you mostly get proven mitigation paths.

How to integrate ArgoCD & Crossplane with external secret stores

That brings us to the question: how can we integrate external secret providers with ArgoCD and Crossplane? Fortunately there's a huge list of possible plugins or operators helping to achieve this integration. You can find a good overview featured in the Argo docs. I had a look at some promising candidates:

The argocd-vault-plugin could make for a good starting point. It supports multiple backends like AWS Secrets Manager, Azure Key Vault, HashiCorp Vault etc. But I found that the installation isn't that lightweight: we either need to download the Argo Vault Plugin into a volume and inject it into the argocd-repo-server (although there are pre-built Kustomize manifests available), or create a custom argocd-repo-server image with the plugin and supporting tools pre-installed... There's also a newer sidecar option available, which nevertheless has its own caveats.

There's also HashiCorp's own Vault Agent and the Secrets Store CSI Driver, which both handle secrets without the need for Kubernetes Secrets. The first works with a per-pod sidecar approach to connect to Vault via the agent, and the latter uses the Container Storage Interface. Both tools sound promising. But for me the most convincing solution is the External Secrets Operator (ESO). Besides featuring a lot of GitHub stars, the External Secrets Operator simply creates a Kubernetes Secret for each external secret. According to the docs:

"ExternalSecret, SecretStore and ClusterSecretStore that provide a user-friendly abstraction for the external API that stores and manages the lifecycle of the secrets for you."

Also the ESO community seems to be growing rapidly: "multiple people and organizations are joining efforts to create a single External Secrets solution based on existing projects". Pretty nice. So let's integrate it into our setup!

Using External Secrets together with Doppler

The External Secrets Operator supports a multitude of tools for secret management! Just have a look at the docs and you'll see more than 20 tools supported, featuring the well-known AWS Secrets Manager, Azure Key Vault, HashiCorp Vault, Akeyless, GitLab Variables and so on. But since I like to show solutions that are fully comprehensible - ideally without a credit card - I was on the lookout for a tool that has a small free plan, still without the need to self-host the solution, since that would be out of scope for this project.

At first glance I thought that HashiCorp's Vault Secrets as part of the HashiCorp Cloud Platform (HCP) would be a great choice, since so many projects love and use Vault. But sadly the External Secrets Operator currently doesn't support HCP Vault Secrets, and I would have been forced to switch to the HashiCorp Vault Secrets Operator (VSO), which is for sure also an interesting project. But I wanted to stick with the External Secrets Operator because of its wide provider support and because it looks as if it could develop into the de facto standard for external secrets integration in Kubernetes.

Enter Doppler :) Luckily there's an external secrets provider offering a generous free Developer plan. I also preferred Doppler since I trust my readers to choose the provider that suits them best. And the exact provider doesn't change much in our setup, since the External Secrets Operator encapsulates the external secrets store for us transparently. So here's a sketch of how our setup will look:

[Image: sketchnote of the Crossplane, ArgoCD & External Secrets bootstrap setup]

As usual you can find the fully comprehensible code for everything mentioned in this blog post on GitHub.

Create a multiline Secret in Doppler

So let's create our first secret in Doppler. If you haven't already done so, sign up at dashboard.doppler.com. Then click on Projects in the left navigation bar and on the + to create a new Doppler Project. In this example I named it according to the example project: crossplane-argocd.

[Image: Doppler project stages]

Doppler automatically creates well-known environments for us: development, staging and production. To create a new Secret, choose an environment and click on Add First Secret. Give it the key CREDS and leave the data type at the default String. The value of our Doppler Secret will be a multiline value. Just as stated in the Crossplane docs, we should have an aws-creds.conf file created already (which we don't want to check into source control):

echo "[default]
aws_access_key_id = $(aws configure get aws_access_key_id)
aws_secret_access_key = $(aws configure get aws_secret_access_key)
" > aws-creds.conf

Copy the contents of aws-creds.conf into the value field in Doppler. The Crossplane AWS Provider, or rather its ProviderConfig, will later consume the secret just as it is, as multiline text:

[Image: multiline AWS creds secret in Doppler]

Don't forget to click on Save.

Create Service Token in Doppler & corresponding Kubernetes Secret

As stated in the External Secrets docs, we need to create a Doppler Service Token in order to be able to connect to Doppler from our management cluster. In Doppler, Service Tokens are created at project level inside a specific environment. This way a Doppler project environment matches an environment we create based on ArgoCD and Crossplane. So as I created my secret in the dev environment, I create the Doppler Service Token there as well.

To create a Service Token, head over to your Doppler project, select the environment you created your secrets in and click on Access. Here you should find a button called + Generate to create a new Service Token. Click the button, create a Service Token with read access and no expiration, and copy it somewhere locally.

[Image: Doppler project Service Token creation]

In order to be able to let the External Secrets Operator access Doppler, we need to create a Kubernetes Secret containing the Doppler Service Token:

kubectl create secret generic \
    doppler-token-auth-api \
    --from-literal dopplerToken="dp.st.xxxx"

But wait! Didn't we want to omit the kubectl create secret part? Yes - we can omit the creation of the actual Kubernetes Secrets that the Crossplane providers will use, since those will be generated by the External Secrets Operator. But just as we already have a chicken-and-egg problem with our second Kubernetes cluster needed to run GitOps, we still need one Secret for the connection to our external secret store.
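For reference, this one bootstrap Secret can also be expressed as a manifest - shown here only for illustration, since we explicitly don't want to commit it to Git. Using stringData lets Kubernetes handle the base64 encoding for us:

```yaml
# Equivalent manifest to the kubectl create secret command above -
# do NOT check this into source control!
apiVersion: v1
kind: Secret
metadata:
  name: doppler-token-auth-api
  namespace: default
type: Opaque
stringData:
  dopplerToken: dp.st.xxxx   # the Service Token copied from Doppler
```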

Install External Secrets Operator as ArgoCD Application

As the Doppler configuration is now finished, we can head over to the installation of the External Secrets Operator. As we're already used to from the installation of ArgoCD and Crossplane, we want to do the installation in a way that supports automatic updates managed by Renovate. Therefore we can use the method already applied to Crossplane and explained in this Stack Overflow Q&A. All we have to do is create a simple Helm chart inside the new directory external-secrets/install called Chart.yaml:

apiVersion: v2
type: application
name: external-secrets
version: 0.0.0 # unused
appVersion: 0.0.0 # unused
dependencies:
  - name: external-secrets
    repository: https://charts.external-secrets.io
    version: 0.10.3
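Should you ever need to tweak the wrapped Helm release, a values.yaml next to this Chart.yaml can pass options to the dependency, keyed by its name. A sketch - where installCRDs is an assumed option name, so verify it against the chart's default values for your version:

```yaml
# external-secrets/install/values.yaml (hypothetical override)
external-secrets:
  installCRDs: true   # assumed chart option - check the chart's values first
```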

We also need to tell ArgoCD where to find this simple Helm Chart. This can be done elegantly by using Argo's Application manifest in argocd/crossplane-eso-bootstrap/external-secrets-operator.yaml:

# The ArgoCD Application for external-secrets-operator
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets-operator
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  project: default
  source:
    repoURL: https://github.com/jonashackt/crossplane-argocd
    targetRevision: HEAD
    path: external-secrets/install
  destination:
    server: https://kubernetes.default.svc
    namespace: external-secrets
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
    retry:
      limit: 1
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 1m

We define the SyncWave to deploy external-secrets before every other Crossplane component via the annotation argocd.argoproj.io/sync-wave: "0".

As you may have already spotted in the example project's code on GitHub, I split the files: the directory argocd/crossplane-bootstrap features the setup without the External Secrets Operator, fully implementing all the components from the previous blog posts. And I created a new directory for this blog post, argocd/crossplane-eso-bootstrap, which also features the External Secrets Operator components. This way you can see how the setups differ in complexity and decide which way suits your needs best. Note that the argocd/crossplane-eso-bootstrap manifests use different sync-wave configurations!

At this point you could sneak a peek at the External Secrets Operator installation using the central bootstrapping manifest residing in argocd/crossplane-eso-bootstrap.yaml via a kubectl apply -n argocd -f argocd/crossplane-eso-bootstrap.yaml. You would see a new ArgoCD Application featuring a bunch of CRDs, some roles and three Pods: external-secrets, external-secrets-webhook and external-secrets-cert-controller, like this:

[Image: External Secrets Argo Application]

But there are still some parts missing from our External Secrets Operator configuration, which I will describe in the following sections. Therefore we will do the installation afterwards.

Create ClusterSecretStore that manages access to Doppler

Diving into the External Secrets Operator configuration we have to distinguish between two concepts. As the docs state:

The idea behind the SecretStore resource is to separate concerns of authentication/access and the actual Secret and configuration needed for workloads. The ExternalSecret specifies what to fetch, the SecretStore specifies how to access.

So we first need to configure the External Secrets Operator via a SecretStore to be able to access Doppler. In the example project I opted for the similar ClusterSecretStore, which can be seen as an enhanced SecretStore:

"The ClusterSecretStore is a global, cluster-wide SecretStore that can be referenced from all namespaces. You can use it to provide a central gateway to your secret provider."

A central gateway to a secret provider sounded like a good fit for our setup. But you can also opt for the namespaced SecretStore. Our ClusterSecretStore definition resides in the file external-secrets/config/cluster-secret-store.yaml:

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: doppler-auth-api
spec:
  provider:
    doppler:
      auth:
        secretRef:
          dopplerToken:
            name: doppler-token-auth-api
            key: dopplerToken
            namespace: default

Don't forget to configure a namespace inside the above manifest for the doppler-token-auth-api Secret we created earlier. Otherwise we'll run into errors like:

admission webhook "validate.clustersecretstore.external-secrets.io" denied the request: invalid store: cluster scope requires namespace (retried 1 times).

Create ExternalSecret to access AWS credentials

We already defined how the external secret store (Doppler) can be accessed using our ClusterSecretStore CRD. Now we should specify which secrets to fetch using the ExternalSecret CRD. Therefore let's create an external-secrets/config/external-secret.yaml:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: auth-api-db-url
spec:
  secretStoreRef:
    kind: ClusterSecretStore
    name: doppler-auth-api

  # access our 'CREDS' key in Doppler
  dataFrom:
    - find:
        path: CREDS

  # Create a Kubernetes Secret just as we're used to without External Secrets Operator
  target:
    name: aws-secrets-from-doppler

Since we created a CREDS secret in Doppler, the ExternalSecret needs to look for this exact path!
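If you'd rather fetch the key explicitly instead of using find, the ExternalSecret API also offers a data list with a remoteRef. A sketch of the equivalent mapping - keeping CREDS as the secretKey so the rest of our setup stays unchanged:

```yaml
# Alternative: explicit one-to-one mapping instead of dataFrom.find
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: auth-api-db-url
spec:
  secretStoreRef:
    kind: ClusterSecretStore
    name: doppler-auth-api
  data:
    - secretKey: CREDS      # key in the generated Kubernetes Secret
      remoteRef:
        key: CREDS          # key in Doppler
  target:
    name: aws-secrets-from-doppler
```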

Once configured, the External Secrets Operator will create a regular Kubernetes Secret that's similar to the one mentioned in the Crossplane docs (if you decode it), but with the uppercase CREDS key we used in Doppler:

CREDS: |+
[default]
aws_access_key_id = yourAccessKeyIdHere
aws_secret_access_key = yourSecretAccessKeyHere
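To double-check, you can decode the CREDS key of the generated Secret on your cluster. The decode step itself is sketched below with the placeholder credentials from above (on a real cluster you'd pipe the kubectl output into base64 -d instead):

```shell
# On the management cluster the real command would be:
#   kubectl get secret aws-secrets-from-doppler -n external-secrets \
#     -o jsonpath='{.data.CREDS}' | base64 -d
# Here we only demonstrate the base64 round trip with placeholder values:
creds='[default]
aws_access_key_id = yourAccessKeyIdHere
aws_secret_access_key = yourSecretAccessKeyHere'
printf '%s' "$creds" | base64 | base64 -d
```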

And the great news is: if we change our credentials in the external secrets store, the External Secrets Operator will automatically sync these credentials into the Secret residing in our management cluster.
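How quickly such a change propagates is governed by the ExternalSecret's refreshInterval - if I read the docs right, the operator defaults to re-checking every hour. A sketch of our manifest with the interval pinned explicitly:

```yaml
# Excerpt of the ExternalSecret from above, re-checking Doppler every 5 minutes
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: auth-api-db-url
spec:
  refreshInterval: 5m   # default is 1h if omitted
  secretStoreRef:
    kind: ClusterSecretStore
    name: doppler-auth-api
  dataFrom:
    - find:
        path: CREDS
  target:
    name: aws-secrets-from-doppler
```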

Deploy ClusterSecretStore & ExternalSecret through an Argo Application

We also need to create an Application manifest to let Argo deploy both the ClusterSecretStore and the ExternalSecret for us. Therefore I created crossplane-eso-bootstrap/external-secrets-config.yaml:

# The ArgoCD Application for the external-secrets configuration
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets-config
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  project: default
  source:
    repoURL: https://github.com/jonashackt/crossplane-argocd
    targetRevision: HEAD
    path: external-secrets
  destination:
    server: https://kubernetes.default.svc
    namespace: external-secrets
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
    retry:
      limit: 1
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 1m

Our ClusterSecretStore and ExternalSecrets deployment in Argo looks like this:

[Image: External Secrets configuration in Argo]

But the deployment doesn't run flawlessly, although it is configured with argocd.argoproj.io/sync-wave: "1", right AFTER the external-secrets Argo Application, which deploys the External Secrets components:

Failed sync attempt to 603cce3949c2a916f51f3917e87aa814698e5f92: one or more objects failed to apply, reason: Internal error occurred: failed calling webhook "validate.externalsecret.external-secrets.io": failed to call webhook: Post "https://external-secrets-webhook.external-secrets.svc:443/validate-external-secrets-io-v1beta1-externalsecret?timeout=5s": dial tcp 10.96.42.44:443: connect: connection refused,Internal error occurred: failed calling webhook "validate.clustersecretstore.external-secrets.io": failed to call webhook: Post "https://external-secrets-webhook.external-secrets.svc:443/validate-external-secrets-io-v1beta1-clustersecretstore?timeout=5s": dial tcp 10.96.42.44:443: connect: connection refused (retried 1 times).

It seems that our external-secrets-webhook isn't yet healthy, but the ClusterSecretStore & the ExternalSecret already want to access the webhook. So we may need to wait for the external-secrets-webhook to be really available before we deploy our external-secrets-config.

In order to fix that error we need to give our external-secrets-config a higher syncPolicy.retry.limit like this:

syncPolicy:
    ...
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 1m

Now the External Secrets components should be deployed correctly.

Point the Crossplane AWS ProviderConfig to the Secret created by External Secrets from Doppler

The final step to let the Crossplane AWS provider use the Secret provided by the External Secrets Operator is to change its ProviderConfig accordingly. Therefore I created a separate ProviderConfig in the example project that resides in upbound/provider-aws/provider-eos/provider-config-aws.yaml and uses the Secret generated by the External Secrets Operator:

apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: external-secrets
      name: aws-secrets-from-doppler
      key: CREDS

With this final piece in place, our setup should be complete and able to provision some infrastructure with ArgoCD and Crossplane!

As already described in the paragraph Combining SyncWaves with the App of Apps Pattern of the last blog post, we included the External Secrets Operator deployment in our bootstrap setup defined in the directory argocd/crossplane-eso-bootstrap. That means that by applying the central bootstrapping manifest residing in argocd/crossplane-eso-bootstrap.yaml, we can tell ArgoCD to deploy all our components configured so far - including the External Secrets Operator, Crossplane, the Crossplane Providers and all their needed configuration - via a simple:

kubectl apply -n argocd -f argocd/crossplane-eso-bootstrap.yaml

Inside the ArgoCD UI the deployed components look like this:

[Image: finalized bootstrap of ArgoCD, Crossplane & External Secrets Operator]

You may even want to double-check whether the setup really works by provisioning infrastructure like an AWS S3 Bucket, as described in the previous blog post. Using our already defined manifest at argocd/infrastructure/aws-s3.yaml, this should work as expected:

kubectl apply -f argocd/infrastructure/aws-s3.yaml

If everything went fine, the Argo Application will finally become Healthy like this:

[Image: first S3 Bucket provisioned with Argo & Crossplane]

Using the External Secrets Operator together with ArgoCD and Crossplane is great

We saw that handling Secrets in a GitOps fashion is more complex than we may have thought at first. Additionally, there seems to be no widely agreed-upon way of doing it. But there are some great solutions out there, like the External Secrets Operator, that could develop into a de facto standard for Secrets management in the Cloud Native world. And thus the Operator makes for a great choice in our setup with ArgoCD and Crossplane.

Based on the setup from the previous blog posts, we went from choosing an external secret store like Doppler to actively using the credentials stored there with the Crossplane AWS provider. We learned that it is essential to create a multiline secret in Doppler and to prepare the authentication to Doppler itself with a Doppler Service Token. The next step featured the installation of the External Secrets Operator as part of our overall setup with ArgoCD.

Configuring the External Secrets Operator was our next move. We learned that the ClusterSecretStore manages the access to Doppler itself, while the ExternalSecret cares for the synchronization of the AWS credentials stored in Doppler. The latter is represented as a regular Kubernetes Secret and is automatically updated if the credentials change in the external secrets store. Deployed as an ArgoCD Application, both the ClusterSecretStore and the ExternalSecret do their work. The final bit was to point the Crossplane AWS Provider via its ProviderConfig to the Secret generated by the External Secrets Operator. We double-checked that our setup works by provisioning an S3 Bucket in AWS.

As always, there are still pieces left in this article series' puzzle. We finally want to see some more complex infrastructure being provisioned by Crossplane together with ArgoCD, and we also want to register that infrastructure as an ArgoCD deploy target automatically. Stay tuned for upcoming posts :)
