SEO linters exist, but they're either paid SaaS, not automatable, or ignore the structural patterns that LLMs use when deciding what to cite. So I built one.
geo-lint is a CLI with 92 rules: 32 SEO rules (table stakes: titles, descriptions, headings, slugs, canonical URLs, schema, the works), 35 GEO rules specifically for AI citation readiness, 14 content-quality checks inspired by Yoast, and the remaining 11 for technical and i18n validation. I researched the current state of GEO and AEO extensively to make sure the rules reflect what actually gets content cited by ChatGPT, Perplexity, and Google AI Overviews, not outdated advice.
The design principle: the linter is deterministic, the AI agent is creative. Same content in, same violations out, every time. The agent does the fixing.
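To make that concrete, here's a sketch of the deterministic-rule idea in TypeScript. This is illustrative only: the Doc shape, the titleLength rule, and the 60-character limit are invented for the example, not geo-lint's actual API.

    // Illustrative sketch, not geo-lint's real internals.
    // A rule is a pure function from parsed content to violations:
    // same input, same output, no randomness, no LLM calls.
    interface Violation {
      ruleId: string;
      file: string;
      message: string;
      suggestion: string; // machine-readable fix instruction for the agent
    }

    interface Doc {
      file: string;
      title: string;
    }

    const titleLength = (doc: Doc): Violation[] =>
      doc.title.length > 60
        ? [{
            ruleId: "seo/title-length",
            file: doc.file,
            message: `Title is ${doc.title.length} characters; max is 60.`,
            suggestion: "Shorten the title to 60 characters or fewer.",
          }]
        : [];

Because rules are pure functions, the agent can trust that a given edit either clears a violation or it doesn't; there's no flakiness to chase.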
In practice, I install it in a project, paste one prompt into Claude Code or Cursor:
    Run npx geo-lint --format=json, fix every violation using the
    suggestion field, re-run until the output is [].
And walk away. The agent reads the JSON violations, edits the files, re-lints,
loops — no manual input. When it finishes, the content is validated for both
traditional search ranking and AI citation. One command, both outcomes, across
an entire site.

Every rule outputs a machine-readable suggestion field that tells the agent exactly what to change. The JSON has no formatting and no ANSI colors, just pure structured data. Works with Markdown/MDX out of the box, extensible to Astro, HTML, or any CMS via custom adapters.
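For a sense of what an adapter involves, here's a hypothetical shape. The actual interface geo-lint publishes may differ; include, parse, and the field names are invented for this sketch.

    // Hypothetical adapter shape, not the published geo-lint API.
    // An adapter maps a source format (Astro, raw HTML, a CMS export)
    // into the normalized fields the rules inspect.
    interface Adapter {
      // Which files this adapter handles, e.g. "src/pages/**/*.astro".
      include: string;
      // Parse one file into rule-visible fields.
      parse(path: string, raw: string): {
        title?: string;
        description?: string;
        headings: string[];
        body: string;
      };
    }

The idea being that rules operate on normalized fields, so in principle supporting a new CMS means writing one parser rather than new rules.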
This is extracted from the toolchain I use on production client content at my agency. MIT licensed, zero peer dependencies, TypeScript.
Happy to answer questions about GEO patterns, what makes content citable by LLMs, or how the agentic loop works in practice.