Ending the commit storm: validating FluxCD manifests locally before they hit the cluster
February 2026 — on the commit history that nobody wants to show their colleagues
Every GitOps practitioner has a section of their git history they’d rather not talk about.
a3f1c2d fix typo in kustomization
b8e0a1c fix: yaml indentation
c2d9f3b fix: variable substitution not working
d1a8e2b fix: wrong namespace
e5c7f0a hopefully this one works
f4b3d1c fix: actually fix the namespace this time
This happens because the feedback loop in FluxCD is slow and one-directional. You commit, push, wait for the controller to reconcile, see the error in flux get kustomization, fix it, commit, push, wait again. Six commits for a change that should have been one.
The root cause is that FluxCD does a lot of work between your YAML files and the actual Kubernetes resources: Kustomize build, post-build variable substitution, Helm template rendering, schema validation. None of these are visible until the controller runs them on your behalf, in the cluster, after you’ve already committed.
flux-preflight runs all of that locally, in milliseconds, before you commit.
What FluxCD actually does
When Flux reconciles a Kustomization, it runs a specific sequence:

1. Kustomize build (hydrating overlays and patches into flat manifests)
2. Post-build variable substitution from ConfigMaps and Secrets
3. Helm template rendering (for HelmReleases)
4. Schema validation
5. Apply to the cluster
Each step can fail independently, and the error from each step looks different. A failed substitution (${VARIABLE} left unresolved) produces a valid YAML file with a literal ${VARIABLE} string in it — Kubernetes accepts it, but it’s wrong. A Helm template error might not surface until the HelmRelease controller tries to render it. A schema validation error appears in the Flux controller logs.
flux-preflight runs steps 1–4 of this pipeline locally with 100% fidelity, using the same libraries Flux uses: native Go krusty for Kustomize, fluxcd/pkg/envsubst for variable substitution, helm/v3 for templates, and kubeconform for schema validation.
Basic usage
# Validate a specific path
flux-preflight build apps/base/my-service/
# Validate recursively
flux-preflight build --recursive apps/
# Mock variables that come from ConfigMaps/Secrets in the cluster
flux-preflight build --config .flux-preflight.yaml apps/base/my-service/
The output is a consolidated report. Green means ready. Red means there’s a problem, and the problem description is specific enough to act on immediately.
✓ apps/base/monitoring/kustomization.yaml — 12 resources, 0 errors
✓ apps/base/cert-manager/kustomization.yaml — 6 resources, 0 errors
✗ apps/base/my-service/kustomization.yaml — 3 errors
ERROR: Unresolved variable ${DATABASE_URL} in apps/base/my-service/deployment.yaml:42
ERROR: Service 'my-service' selects label app=my-service but no Pod in hydrated output has this label
ERROR: HelmRelease 'my-service' references chart 'my-chart' version '2.1.0' — not found in repo
Three errors. Three lines. All actionable before anything reaches the cluster.
Variable mocking
The most common source of commit storms is post-build variable substitution. Flux substitutes variables from ConfigMaps and Secrets that exist in the cluster — but you don’t have those locally.
flux-preflight solves this with a local mock file:
# .flux-preflight.yaml
variables:
  CLUSTER_DOMAIN: "homelab.local"
  DATABASE_HOST: "postgres.database.svc.cluster.local"
  ENVIRONMENT: "production"
# You can also point to real ConfigMaps/Secrets if you have a local kubeconfig
configMapRefs:
  - name: cluster-vars
    namespace: flux-system
With this file, flux-preflight substitutes variables using your mocked values. The validation runs against the fully-hydrated manifests as they’ll appear in the cluster — not the raw templates with ${VARIABLE} placeholders.
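The substitution step can be sketched in a few lines of Go. This is a minimal illustration of the idea, not flux-preflight's internals: the mock map stands in for the `variables:` block above, and unknown variables are deliberately left as-is so a later check can report them as unresolved.

```go
package main

import (
	"fmt"
	"os"
)

// mockVars mirrors the variables: block of .flux-preflight.yaml.
// The names and values are the ones from the example config above.
var mockVars = map[string]string{
	"CLUSTER_DOMAIN": "homelab.local",
	"DATABASE_HOST":  "postgres.database.svc.cluster.local",
	"ENVIRONMENT":    "production",
}

// substitute hydrates ${VAR} references from the local mock map instead
// of in-cluster ConfigMaps/Secrets. Unknown variables are left untouched
// so a later pass can flag them as unresolved rather than silently
// replacing them with an empty string.
func substitute(manifest string) string {
	return os.Expand(manifest, func(name string) string {
		if v, ok := mockVars[name]; ok {
			return v
		}
		return "${" + name + "}" // keep unresolved for later reporting
	})
}

func main() {
	raw := "host: ${DATABASE_HOST}\nurl: ${DATABASE_URL}\n"
	fmt.Print(substitute(raw))
}
```

Leaving unknown variables intact is the important design choice: it is exactly what makes the "Unresolved variable ${DATABASE_URL}" error above detectable after hydration.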
Semantic integrity checks
Schema validation (is this valid YAML for a Kubernetes resource?) is necessary but not sufficient. flux-preflight also checks semantic integrity:
Service selector → Pod label consistency
A Service that selects app=my-service but has no Pod with that label in the hydrated output means: this Service will never route traffic. This is a common mistake when renaming a Deployment without updating the Service selector.
ERROR: Service 'my-service' selects {app: my-service}
but no Pod in namespace 'default' has matching labels.
Pods found: [{app: myservice}, {app: my-svc}]
The error shows you the actual Pod labels that exist, so you can see immediately whether you renamed the Deployment and forgot the Service, or misspelled the label.
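The check itself is subset matching: every key/value pair in the Service selector must appear on some Pod template in the hydrated output, which is the same semantics Kubernetes applies at runtime. A stdlib-only sketch of that logic, using the labels from the example error above:

```go
package main

import "fmt"

// selectorMatches reports whether every selector key/value pair is
// present on the Pod's labels. This is the subset semantics Kubernetes
// uses for Service selectors; a sketch of the check, not the tool's code.
func selectorMatches(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"app": "my-service"}
	// Pod labels found in the hydrated output (from the example error).
	pods := []map[string]string{
		{"app": "myservice"},
		{"app": "my-svc"},
	}
	matched := false
	for _, p := range pods {
		if selectorMatches(selector, p) {
			matched = true
		}
	}
	if !matched {
		fmt.Printf("ERROR: Service selects %v but no Pod matches; pods found: %v\n",
			selector, pods)
	}
}
```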
Naming convention enforcement
If your organisation (or personal repo) has naming conventions — service names must match ^[a-z][a-z0-9-]+$, namespaces must have a specific prefix, resources must have specific annotations — you can define these in .flux-preflight.yaml and they’ll be checked on every run.
# .flux-preflight.yaml
naming:
  services:
    pattern: "^[a-z][a-z0-9-]+$"
  namespaces:
    required_labels:
      - "team"
      - "environment"
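Both rules reduce to simple checks over the hydrated resources: a regex match on names, and a set-difference on labels. A sketch, assuming the pattern and required labels from the config above:

```go
package main

import (
	"fmt"
	"regexp"
)

// servicePattern mirrors the naming.services.pattern from the config above.
var servicePattern = regexp.MustCompile(`^[a-z][a-z0-9-]+$`)

// checkName validates a resource name against the convention.
func checkName(name string) bool {
	return servicePattern.MatchString(name)
}

// missingLabels returns the required labels absent from a namespace's
// label set. Illustrative only; not the tool's implementation.
func missingLabels(labels map[string]string, required []string) []string {
	var missing []string
	for _, r := range required {
		if _, ok := labels[r]; !ok {
			missing = append(missing, r)
		}
	}
	return missing
}

func main() {
	fmt.Println(checkName("my-service")) // true
	fmt.Println(checkName("My_Service")) // false: uppercase and underscore
	fmt.Println(missingLabels(
		map[string]string{"team": "platform"},
		[]string{"team", "environment"},
	)) // [environment]
}
```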
Zero-config variable discovery
For Flux repos with complex variable structures, flux-preflight can auto-detect which variables are referenced in your manifests and which ConfigMaps/Secrets in the cluster provide them:
flux-preflight discover apps/base/
Output:
Variables referenced in manifests:
${CLUSTER_DOMAIN} — provided by ConfigMap/cluster-vars in flux-system
${DATABASE_URL} — NOT FOUND in any ConfigMap or Secret
${IMAGE_TAG} — provided by Kustomization post-build subs inline
${DATABASE_URL} not found means either it’s missing from the cluster config (real problem) or you need to add it to .flux-preflight.yaml for local testing.
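The local half of discovery is a scan for ${VAR} references across the manifests; the cluster half then cross-checks each name against ConfigMap and Secret keys. The scan can be sketched with a single regex (an illustration, not the tool's code):

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
)

// varRef matches ${VARIABLE_NAME} references in manifest text.
var varRef = regexp.MustCompile(`\$\{([A-Za-z_][A-Za-z0-9_]*)\}`)

// referencedVars returns the deduplicated, sorted set of variable names
// referenced in a manifest. Cross-checking these against in-cluster
// ConfigMaps/Secrets is what produces the "NOT FOUND" lines above.
func referencedVars(manifest string) []string {
	seen := map[string]bool{}
	for _, m := range varRef.FindAllStringSubmatch(manifest, -1) {
		seen[m[1]] = true
	}
	names := make([]string, 0, len(seen))
	for n := range seen {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	manifest := "host: ${DATABASE_HOST}\nurl: ${DATABASE_URL}\ntag: ${IMAGE_TAG}\n"
	fmt.Println(referencedVars(manifest)) // [DATABASE_HOST DATABASE_URL IMAGE_TAG]
}
```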
The workflow that prevents the storm
# Before every commit:
flux-preflight build --recursive apps/
# If errors: fix locally, re-run
flux-preflight build apps/base/my-service/
# Clean: commit
git add apps/base/my-service/
git commit -m "feat(my-service): update resource limits"
git push
The feedback loop goes from “commit → push → wait 2 minutes → see error → fix → repeat” to “fix locally in seconds → commit once → done.”
The git history goes from e5c7f0a hopefully this one works to a single meaningful commit that describes what actually changed.
Installing
go install gitlab.com/djieno/flux-preflight/cmd/flux-preflight@latest
Or download a pre-built binary from the GitLab release page. The tool is a single static binary with no runtime dependencies — Kustomize, Helm, kubeconform are all compiled in.
What it doesn’t catch
flux-preflight can’t validate Kubernetes admission webhook rejections — those only run in a live cluster. It can’t test that a Helm chart’s rendered output produces Pods that actually start successfully. It can’t verify that a Service can actually reach its backend in your specific cluster network.
These are cluster-runtime concerns. The tool’s scope is the hydration and schema layer. That’s where most commit storms originate, and that’s where it focuses.
For the rest, there’s no substitute for a staging cluster that mirrors production closely enough to catch the things that only show up at runtime.