I built an AI to stop the wrong recruiters from wasting my time
April 2026 — on replacing an inbox full of irrelevant opportunities with a system that actually thinks
If you’ve worked in IT for more than a few years in Europe, you know the pattern. A recruiter reaches out. The message contains your name (sometimes), a job description (loosely relevant), and an offer (usually well below your rate). They’re matching on keywords. “Kubernetes” in your profile, “Kubernetes” in the job description — match. The fact that the role is junior, six timezones away, pays 40% below your current rate, and requires a technology you haven’t touched in three years is irrelevant. The keyword matched.
This isn’t a problem you can solve by replying to recruiters or adjusting your LinkedIn settings. It’s a volume problem: there are more messages than any human can process thoughtfully, and the marginal cost of sending each one is essentially zero. The only way to fix it is to change the system you use to engage with it.
So I built one.
What the problem actually is
Let me apply the 5 WHYs to my own situation:
Why am I getting low-quality job opportunities? Because recruiters are sending them.
Why are recruiters sending low-quality opportunities? Because keyword matching on job boards produces false positives at scale.
Why does keyword matching produce false positives? Because it has no understanding of context — rate history, preferred technologies, location, actual fit.
Why doesn’t the recruiter add context? Because doing so for every candidate at volume is too expensive.
Why is it too expensive? Because they’re optimising for reach over precision — spray and pray.
The solution isn’t to make them more precise. It’s to make my side of the process smarter. The platform is a job search assistant that reads my incoming mail, understands the context, evaluates each opportunity against my actual preferences, and tells me which ones deserve a response.
The architecture
Incoming email
  │
  ↓
ClassifyEmail (DeepSeek)
  │  rejected / interview_request / offer / autoresponder
  │  extracts: stated_reason, skill_gaps, tone
  │
  ↓
SQLite (job_outcomes table)
  │
  ├── Qdrant ingestion (job_outcomes collection)
  │     └── nomic-embed-text (768-dim dense vectors)
  │
  └── Follow-up draft (if rejection + reason)

Incoming job application
  │
  ↓
RAG evaluation
  ├── Top-3 similar past outcomes from Qdrant
  │     "Last time I applied to a fintech startup with similar stack,
  │      interview stage but ultimately rejected for not enough Go experience"
  ├── Top-3 CV chunks (what I'm actually good at)
  └── DeepSeek Evaluate()
        → score, reasoning, go/no-go recommendation
Almost everything runs self-hosted. The one exception is DeepSeek, which I call via API (their pricing is very reasonable). Ollama runs on the cluster for embeddings with nomic-embed-text. Qdrant is the shared vector store, the same instance that powers the cluster monitoring RAG.
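The embedding step itself is just an HTTP call to Ollama. A minimal sketch, using Ollama's standard /api/embeddings endpoint with a hypothetical ollamaURL standing in for wherever the in-cluster service lives:

import (
    "bytes"
    "encoding/json"
    "net/http"
)

// embedText asks the local Ollama instance for a nomic-embed-text embedding.
// ollamaURL is a stand-in for the in-cluster Ollama service address.
func embedText(ollamaURL, text string) ([]float64, error) {
    body, err := json.Marshal(map[string]string{
        "model":  "nomic-embed-text",
        "prompt": text,
    })
    if err != nil {
        return nil, err
    }
    resp, err := http.Post(ollamaURL+"/api/embeddings", "application/json", bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var out struct {
        Embedding []float64 `json:"embedding"` // 768 values for nomic-embed-text
    }
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return nil, err
    }
    return out.Embedding, nil
}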
The email classifier
The background worker runs every 5 minutes. It pulls unclassified received emails from the database, sends each one to DeepSeek with a structured prompt, and stores the result.
type EmailClassification struct {
    Category     string   // rejected | interview_request | offer | autoresponder
    StatedReason string   // "we went with a candidate with more Python experience"
    SkillGaps    []string // ["Python", "ML pipelines"]
    Tone         string   // professional | impersonal | personal
    FollowUp     string   // draft follow-up email if appropriate
}
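The worker itself is a plain poll loop around this struct. A sketch of its shape, with hypothetical helper names (db.UnclassifiedEmails, classifyWithDeepSeek, db.SaveClassification) standing in for the real ones:

// classifyLoop polls for unclassified incoming emails every 5 minutes and
// stores the DeepSeek classification. Helper and field names are illustrative.
func classifyLoop(ctx context.Context) {
    ticker := time.NewTicker(5 * time.Minute)
    defer ticker.Stop()

    for {
        select {
        case <-ctx.Done():
            return
        case <-ticker.C:
            emails, err := db.UnclassifiedEmails(ctx)
            if err != nil {
                log.Printf("fetch unclassified emails: %v", err)
                continue
            }
            for _, e := range emails {
                var c EmailClassification
                if err := classifyWithDeepSeek(ctx, e.Body, &c); err != nil {
                    log.Printf("classify email %d: %v", e.ID, err)
                    continue
                }
                db.SaveClassification(ctx, e.ID, c) // lands in the job_outcomes table
            }
        }
    }
}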
For rejections with a stated reason, the classifier drafts a follow-up. Not to argue — to thank the recruiter, acknowledge the gap, and ask if they’d keep me in mind for future roles that are a better fit. These go out as one-shot emails via the API; the system never sends anything automatically without review.
The follow-up angle is practical: rejection reasons are signal. If three rejections in a row mention “not enough Python experience,” that’s feedback worth acting on. The skill_gaps field feeds into the next evaluation.
The RAG evaluation pipeline
When I apply to a new role, the system evaluates it before I spend time on a cover letter or interview prep:
func buildEvalCriteria(jobID int64) string {
    // Fetch the job description
    job := db.GetJob(jobID)

    // Retrieve the top 3 similar past outcomes from Qdrant
    // e.g.: "Applied to similar DevOps role at Series B startup,
    //        reached final round, rejected for timezone mismatch"
    pastOutcomes := qdrant.SearchOutcomes(job.Description, 3)

    // Retrieve the top 3 relevant CV chunks
    // e.g.: "5 years Kubernetes production experience,
    //        designed multi-cluster GitOps architecture at [company]"
    cvChunks := qdrant.SearchCV(job.Description, 3)

    // Build context for DeepSeek
    return buildPrompt(job, pastOutcomes, cvChunks)
}
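buildPrompt just stitches the retrieved context into one grounded prompt. Roughly like this, with the exact wording and the Job fields simplified for illustration:

// buildPrompt assembles the grounded evaluation prompt. The structure is
// illustrative; the real prompt also pins down the expected JSON output schema.
func buildPrompt(job Job, pastOutcomes, cvChunks []string) string {
    var b strings.Builder
    b.WriteString("Evaluate this role against my history and CV.\n\n")
    b.WriteString("Job description:\n" + job.Description + "\n\n")
    b.WriteString("Similar past outcomes:\n")
    for _, o := range pastOutcomes {
        b.WriteString("- " + o + "\n")
    }
    b.WriteString("\nRelevant CV excerpts:\n")
    for _, c := range cvChunks {
        b.WriteString("- " + c + "\n")
    }
    b.WriteString("\nReturn JSON with score (0-100), recommendation, reasoning, risks, suggested_focus.")
    return b.String()
}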
The evaluation is grounded in reality. Not “is this job description appealing” — “given that a similar role previously led to X outcome, and given these specific CV strengths, what’s the realistic fit and what are the risks?”
The output is a structured score and reasoning:
{
  "score": 72,
  "recommendation": "apply",
  "reasoning": "Strong technical fit on K8s and FluxCD requirements. Rate expectation may be at upper end of their likely range based on company size. Similar role at comparable company previously reached final round.",
  "risks": ["Rate negotiation may be difficult", "No mention of remote policy"],
  "suggested_focus": ["Emphasise GitOps expertise", "Ask about remote flexibility early"]
}
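On the Go side this maps onto a small result struct decoded straight from the model's JSON (the field names follow the example above; the struct itself is illustrative):

// Evaluation mirrors the JSON the evaluator returns.
type Evaluation struct {
    Score          int      `json:"score"`
    Recommendation string   `json:"recommendation"` // "apply" or a no-go
    Reasoning      string   `json:"reasoning"`
    Risks          []string `json:"risks"`
    SuggestedFocus []string `json:"suggested_focus"`
}

One json.Unmarshal on the model output and the score, risks, and suggested focus can go straight onto the job record.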
That’s useful. It’s not a yes/no — it’s a second opinion from something that has read every previous outcome and can compare them to this one.
CV chunking and versioning
The CV is split into overlapping chunks (~500 runes, sentence-boundary aware) and stored in Qdrant’s cv_profile collection. When I update my CV — new project, new technology, ended engagement — I trigger a re-ingest via POST /api/rag/ingest-cv, which refreshes the collection with new chunks.
The ingestion deletes old chunks by source before upserting, so the collection always reflects the current CV. No accumulation of stale data.
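The chunker is deliberately simple. A simplified sketch of the approach, assuming a plain sentence-boundary regex and one sentence of overlap between chunks (both assumptions; the original only specifies ~500 runes and sentence awareness):

import (
    "regexp"
    "strings"
)

// chunkCV splits CV text into chunks of roughly maxRunes, cutting on sentence
// boundaries and carrying the last sentence over as overlap between chunks.
func chunkCV(text string, maxRunes int) []string {
    sentences := regexp.MustCompile(`[^.!?\n]+[.!?\n]?`).FindAllString(text, -1)

    var chunks []string
    var current []string
    length := 0
    for _, s := range sentences {
        s = strings.TrimSpace(s)
        if s == "" {
            continue
        }
        if length+len([]rune(s)) > maxRunes && len(current) > 0 {
            chunks = append(chunks, strings.Join(current, " "))
            // overlap: the next chunk starts with the previous chunk's last sentence
            current = []string{current[len(current)-1]}
            length = len([]rune(current[0]))
        }
        current = append(current, s)
        length += len([]rune(s))
    }
    if len(current) > 0 {
        chunks = append(chunks, strings.Join(current, " "))
    }
    return chunks
}

Called with maxRunes around 500, each chunk then gets embedded with nomic-embed-text and upserted into cv_profile.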
What I actually learned from the data
After running this for a few months:
Rejection pattern: roles at companies with more than 500 employees and a procurement process longer than 2 rounds consistently ended in rejection at the commercial stage, not the technical one. The skill fit was there; the commercial fit wasn’t. I now filter these out earlier.
Interview conversion: direct approaches (a recruiter who clearly read my profile, a specific role, a personalised message) converted to interviews at roughly 3× the rate of mass outreach. A “personal” value in the classifier’s Tone field correlates strongly with outcome quality.
Skill gap signal: “Python” appeared in 11 rejection reasons over 4 months. I added a Python refresher to my learning queue — not because I need Python for the roles I actually want, but because its absence is creating friction with technical screens that include it as a baseline expectation.
None of this required a data analyst. It required having the data and a way to query it.
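Having “a way to query it” mostly means a few lines of SQL against the job_outcomes table. For example, counting which skills show up in rejection reasons; a sketch only, since the column names and the JSON-array storage of skill_gaps are assumptions about the schema:

import "database/sql"

// skillGapCounts reports how often each skill appears in rejection skill gaps.
// Assumes skill_gaps is stored as a JSON array and SQLite's json_each is
// available; table and column names are guesses at the real schema.
func skillGapCounts(db *sql.DB) (map[string]int, error) {
    rows, err := db.Query(`
        SELECT value, COUNT(*) AS n
        FROM job_outcomes, json_each(job_outcomes.skill_gaps)
        WHERE category = 'rejected'
        GROUP BY value
        ORDER BY n DESC`)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    counts := map[string]int{}
    for rows.Next() {
        var skill string
        var n int
        if err := rows.Scan(&skill, &n); err != nil {
            return nil, err
        }
        counts[skill] = n
    }
    return counts, rows.Err()
}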
The part that isn’t built yet
The job card UI in the web application doesn’t yet show outcome badges — rejected/interviewed/offered — or the follow-up thread. The backend is complete. The UI still shows jobs without the classification context. That’s the next thing to build.
I’m writing this partly as motivation to finish it. The data is there. The analysis is there. The UI is the last step.
The bigger point
The job market in IT, especially for senior/freelance work in Switzerland and Western Europe, is not a meritocracy sorted by skill. It’s a volume game sorted by visibility and timing, with enormous amounts of noise. You can’t fix the noise. You can build a better filter on your side.
Building that filter isn’t just about job searching. It’s the same instinct that drives building your own mail server, your own monitoring stack, your own K8s cluster. The tools exist. The data is yours. The outcomes improve when you take control of the system rather than accepting its default behaviour.
The recruiter problem is the same problem as every other noise problem: the default output is mediocre, and the people willing to build something better get better results.


