One curl command to a GitOps-ready RKE2 cluster
December 2025 — because “fresh cluster” should not take a day
Every time I’ve needed to spin up a new Kubernetes cluster — new hardware, new lab environment, disaster recovery test — I’ve gone through the same ritual. RKE2 install. Wait. Get the kubeconfig. Install ArgoCD. Wait. Bootstrap the application of applications. Configure SSH keys for GitLab access. Wire up the GitOps repo.
None of these steps is complicated. All of them together take the better part of a day, and every time I’ve either forgotten a step or done something slightly differently from the last time.
fresh-cluster solves this by making the entire sequence reproducible in a single command.
What it does
curl -sfL https://gitlab.com/djieno/fresh-cluster/-/raw/main/fresh-cluster.sh | bash
This single line:
- Installs RKE2 on the host
- Waits for the control plane to be ready
- Sets up the kubeconfig
- Installs ArgoCD via Helm
- Waits for ArgoCD to be ready
- Creates the SSH key secret for GitLab access
- Bootstraps the root ArgoCD application (the “app of apps”)
- Reports status
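The steps above can be sketched in plain shell. This is a hypothetical outline, not the actual contents of fresh-cluster.sh — the real script adds error handling, retries, and cleanup of previous installs — but the commands themselves (the get.rke2.io installer, the RKE2 kubeconfig path, the argo-helm chart) are the standard ones:

```shell
# Hypothetical sketch of the bootstrap sequence (not the real script)
set -euo pipefail

# 1. Install RKE2 and start the server
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server

# 2. Point kubectl at the kubeconfig RKE2 writes
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# 3. Wait for the control plane to come up
kubectl wait --for=condition=Ready node --all --timeout=600s

# 4. Install ArgoCD via Helm and block until its pods are ready
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argocd argo/argo-cd \
  --namespace argocd --create-namespace --wait
```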
The result is a cluster that immediately starts managing itself from a Git repository. No manual follow-up steps. No forgotten configuration.
The VM mode: test before you commit
Before running this on real hardware, you probably want to test it. The script supports a --vm flag that provisions a Multipass VM and runs the bootstrap inside it:
# Format: curl -sfL URL | bash -s -- --vm [NAME] [CPU] [MEMORY] [DISK] [UBUNTU_VERSION]
curl -sfL https://gitlab.com/djieno/fresh-cluster/-/raw/main/fresh-cluster.sh | bash -s -- \
--vm test-cluster 4 8G 30G 22.04
This creates a test-cluster VM with 4 CPUs, 8GB RAM, 30GB disk, running Ubuntu 22.04, and runs the full bootstrap sequence inside it. You can SSH in, poke around, verify everything is working, and then tear it down.
I use this to test changes to the bootstrap script before applying them to a production node. It also means that if you’re on a laptop and want to try this without touching any real infrastructure, you can.
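For reference, the lifecycle around that test VM is standard Multipass (the VM name matches the example above):

```shell
# Open a shell inside the test VM to poke around
multipass shell test-cluster

# Check its state, IP, and resource usage from the host
multipass info test-cluster

# Tear it down when done: delete marks it, purge reclaims the disk
multipass delete test-cluster
multipass purge
```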
The kubeconfig and SSH key requirements
Two prerequisites before running:
kubeconfig: if you’re bootstrapping a cluster that will manage applications from a private Git repo, you need a kubeconfig that grants cluster-admin. The bootstrap script uses ~/.kube/config by default.
SSH key: the GitOps workflow requires the cluster to clone the ArgoCD repository. Place an SSH key at ~/.ssh/argocd-gitlab before running. The script creates a Kubernetes secret from this key and configures ArgoCD to use it.
# Generate a deploy key for your GitLab repo
ssh-keygen -t ed25519 -f ~/.ssh/argocd-gitlab -C "argocd-deploy-key"
# Add the public key to your GitLab repo as a deploy key (read-only)
cat ~/.ssh/argocd-gitlab.pub
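The secret the script creates can be approximated manually. ArgoCD treats any Kubernetes secret labeled `argocd.argoproj.io/secret-type: repository` as repository credentials; the secret name and repo URL below follow the examples in this post, but the exact shape the script produces may differ:

```shell
# Create a repository-credentials secret from the deploy key
kubectl -n argocd create secret generic argocd-gitlab \
  --from-literal=url=git@gitlab.com:djieno/fluxcd.git \
  --from-file=sshPrivateKey=$HOME/.ssh/argocd-gitlab

# This label is what tells ArgoCD to use the secret as repo credentials
kubectl -n argocd label secret argocd-gitlab \
  argocd.argoproj.io/secret-type=repository
```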
What “app of apps” means
ArgoCD’s “app of apps” pattern is simple: one root Application resource points to a directory in your Git repo. That directory contains Application manifests for everything else — monitoring stack, ingress controller, cert-manager, your actual services.
# root-app.yaml — bootstrapped by fresh-cluster
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@gitlab.com:djieno/fluxcd.git # your GitOps repo
    targetRevision: HEAD
    path: argocd/apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Once this is running, ArgoCD reads argocd/apps/ from your Git repository and creates all the Application objects defined there. Each of those points to a Helm chart or Kustomize directory. The cluster is now fully Git-driven.
Adding a new application to your cluster: add a file to argocd/apps/ in Git. ArgoCD picks up the change on its next poll (every three minutes by default) and creates the Application. No kubectl apply. No manual Helm installs.
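As a concrete example, here is what that commit might look like. The manifest name and the podinfo demo chart are placeholders, not part of the fresh-cluster repo:

```shell
# Add a child Application manifest to the directory the root app watches
cat > argocd/apps/podinfo.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: podinfo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://stefanprodan.github.io/podinfo
    chart: podinfo
    targetRevision: 6.x
  destination:
    server: https://kubernetes.default.svc
    namespace: podinfo
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF

git add argocd/apps/podinfo.yaml
git commit -m "Add podinfo"
git push
```

On the next sync, the root application sees the new file and creates the podinfo Application, which in turn installs the chart into its own namespace.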
Comprehensive logging and status updates
The bootstrap script outputs structured status as it runs:
[INFO] Installing RKE2...
[INFO] Waiting for control plane nodes...
[OK] Control plane ready (3 nodes)
[INFO] Installing ArgoCD via Helm...
[OK] ArgoCD ready
[INFO] Creating GitLab SSH secret...
[OK] argocd-gitlab secret created
[INFO] Bootstrapping root application...
[OK] Root application synced
[INFO] Bootstrap complete. ArgoCD UI: https://192.168.1.50:30443
[INFO] Initial admin password: kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
If anything fails — an RKE2 installation error, a node not becoming ready within the timeout, a failed ArgoCD Helm install — the script exits with a clear error message, and the last log line tells you which step went wrong.
The recovery scenario
The real test of any bootstrap procedure is: can you use it to recover from a failure?
I ran this on a node that had a previous failed cluster installation. The script detected the leftover RKE2 state, cleaned it up, and proceeded with a fresh install. Recovery time from “node with a broken cluster” to “clean GitOps-ready cluster” was about 12 minutes.
That number matters. In a disaster recovery scenario, “12 minutes to a working cluster” is a recovery point you can plan around. “I’ll need most of the day to remember how I set this up” is not.
What it doesn’t do
Certificate management, storage class setup, networking configuration beyond the RKE2 defaults — these are cluster-specific and belong in your GitOps repo, not in the bootstrap script. The script gets you to “ArgoCD is running and watching your Git repo.” What your Git repo contains is your responsibility.
That separation is intentional. The bootstrap is the ignition. The GitOps repo is the engine.