AI Applyd does three things:
1. Scores your resume against a job description using the same keyword / skill / semantic extraction that Greenhouse, Lever, Ashby, and Workday run. You get a number and the specific missing keywords.
2. Rewrites the resume bullets to hit the missing keywords. You approve every change. No AI-slop passthrough.
3. Submits the application through a real cloud browser session. Not a form sniffer. Login, file upload, multi-step screening questions, cover letter — the full flow.
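Step 1 is, at its core, term overlap between the job description and the resume. A minimal sketch of that idea in TypeScript (illustrative only, not the production scorer, which also does skill and semantic extraction):

```typescript
// Tokenize both texts, drop stopwords, and report a match percentage
// plus the terms the resume is missing.
function extractTerms(text: string): Set<string> {
  const stopwords = new Set(["and", "the", "with", "for", "of", "to", "a", "in"]);
  return new Set(
    text
      .toLowerCase()
      .split(/[^a-z0-9+#.]+/) // keep chars used in "c++", "c#", ".net"
      .filter((t) => t.length > 1 && !stopwords.has(t)),
  );
}

function scoreResume(resume: string, jobDescription: string) {
  const wanted = extractTerms(jobDescription);
  if (wanted.size === 0) return { score: 100, missing: [] as string[] };
  const have = extractTerms(resume);
  const missing = [...wanted].filter((t) => !have.has(t));
  const score = Math.round(((wanted.size - missing.length) / wanted.size) * 100);
  return { score, missing };
}
```

The real pipeline layers semantic matching on top so that, e.g., "React" in the resume can partially satisfy "frontend frameworks" in the JD; pure keyword overlap alone over-penalizes synonyms.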
Stack, since HN is here:
- Cloudflare Workers + D1 (Drizzle) for everything. One region, global edge. No Postgres, no Redis, no queue runner outside CF.
- Hono for the API, TanStack Start for the web, Better Auth with D1 adapter.
- OpenRouter as the AI gateway. Most flows run on Gemini 2.5 Flash Lite ($0.10/$0.40 per M tokens). Free models for background enrichment.
- Browserbase for the browser sessions, with Stagehand on top for LLM-guided actions. Login-required platforms are gated behind user-initiated connections, so we never hold plaintext credentials.
- Remotion for the landing-page demo videos, rendered ahead of time.
- Turborepo monorepo, Wrangler deploys, Cloudflare Queues + cron for the scraper orchestrator (12h refresh, concurrency 5, 5-min hard timeout per job).
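The scraper orchestration maps pretty directly onto wrangler config. Roughly (binding and queue names here are illustrative, not our actual config):

```toml
[triggers]
crons = ["0 */12 * * *"]   # 12h refresh kicks off the orchestrator

[[queues.producers]]
binding = "SCRAPE_QUEUE"
queue = "scrape-jobs"

[[queues.consumers]]
queue = "scrape-jobs"
max_concurrency = 5        # at most 5 scrape jobs in flight
```

The 5-minute hard timeout per job isn't a queue setting; we enforce it in the consumer code and let the message retry or dead-letter.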
Every LLM call is cost-tracked. I set a daily spend cap after I fat-fingered a prompt loop once and burned $40 before my alert fired.
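The spend cap itself is simple bookkeeping. A sketch of the idea (illustrative; class and field names are made up, prices are the Flash Lite rates above):

```typescript
// Accumulate estimated cost per call; refuse new calls once the daily cap is hit.
const PRICE_PER_M = { input: 0.10, output: 0.40 }; // USD per 1M tokens

class SpendTracker {
  private spentToday = 0;
  constructor(private dailyCapUsd: number) {}

  // Call after each completion with the token counts from the API response.
  record(inputTokens: number, outputTokens: number): void {
    this.spentToday +=
      (inputTokens / 1e6) * PRICE_PER_M.input +
      (outputTokens / 1e6) * PRICE_PER_M.output;
  }

  // Check before issuing the next call.
  allowed(): boolean {
    return this.spentToday < this.dailyCapUsd;
  }
}
```

The important part is checking allowed() before each call, not after a batch: a runaway prompt loop burns money between checks otherwise.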
Free tier: 20K starter tokens, 10 ATS scores/mo, 5 resume tailors, 5 cover letters, 1 AI apply/mo, 15 documents, 25 job listings. No credit card. Paid: $39/mo for up to 15 applies/day (100/mo) or $79/mo for up to 50/day (300/mo).
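The paid tiers have both a daily and a monthly apply cap, and a request has to clear both. A trivial sketch (numbers from the pricing above; names are hypothetical):

```typescript
type Plan = { appliesPerDay: number; appliesPerMonth: number };

const PLANS: Record<string, Plan> = {
  pro: { appliesPerDay: 15, appliesPerMonth: 100 }, // $39/mo
  max: { appliesPerDay: 50, appliesPerMonth: 300 }, // $79/mo
};

// An apply is allowed only if it fits under BOTH caps.
function canApply(plan: Plan, usedToday: number, usedThisMonth: number): boolean {
  return usedToday < plan.appliesPerDay && usedThisMonth < plan.appliesPerMonth;
}
```

The monthly cap is the real ceiling: 15/day for 30 days would be 450, but the $39 plan tops out at 100/mo.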
Two gotchas worth mentioning for anyone building similar:
- Stagehand v3 uses AsyncLocalStorage.enterWith() for its logger. That API doesn't work in CF Workers. I patched Stagehand with 'bun patch' to swap the logger for a passthrough.
- Wiring Stagehand to OpenRouter via the {modelName, apiKey, baseURL} object form breaks on the second request, because @ai-sdk/openai@2.x has a broken isReasoningModel path for non-OpenAI endpoints. Use CustomOpenAIClient + llmClient instead.
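The passthrough logger in the first gotcha is genuinely trivial, which is why a patch beat a fork. The shape of it (simplified LogLine type, not Stagehand's exact internals):

```typescript
// Instead of binding log context via AsyncLocalStorage.enterWith()
// (unimplemented in CF Workers), forward each log line directly.
type LogLine = { category?: string; message: string; level?: 0 | 1 | 2 };

function passthroughLogger(line: LogLine): void {
  // No enterWith(), no stored async context: format, print, return.
  console.log(`[${line.category ?? "stagehand"}] ${line.message}`);
}
```

You lose per-request log context, but in a Worker you already have the request in scope, so you can prefix it yourself at the call site.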
Demo (90s): https://www.youtube.com/watch?v=rbet3wFpGak
Also live on Peerlist Launchpad Week 17 (runs through Apr 26): https://peerlist.io/firstexhotic/project/ai-that-applies-for...
Would love feedback on: whether the auto-apply flow breaks on any specific job platforms you've seen, whether the ATS scoring feels accurate, and whether 10 free scores is enough to decide if it's useful. Ask anything technical — I'll go deep on the architecture.