2 points by oldeucryptoguy 7 hours ago | 2 comments
  • oldeucryptoguy 7 hours ago
    I built this because half my LinkedIn feed sounds like the same person. LAID runs 14 heuristic signals entirely in-browser — no API key, no data leaving your machine.

    The research angle: Signal design draws on Kobak et al.'s "excess vocabulary" study in Science Advances, which analyzed 14M PubMed abstracts and found post-ChatGPT spikes in words like "delve" (practically a fingerprint), "meticulous" (a 34.7x increase in ICLR peer reviews, per Liang et al.), and "tapestry." Jakesch et al. showed in PNAS, across 4,600 participants, that human gut-feel detection of AI text is unreliable, so I went looking for signals with statistical backing instead.

    What it measures: Buzzword density (100+ words, 20 multi-word phrases), em dash frequency (AI uses ~10x more than humans), contraction rate (AI writes "does not" instead of "doesn't" — human informal writing runs 60-90%, AI runs 10-40%), epistemic texture (fake hedges like "it is important to note" vs genuine ones like "idk" or "i could be wrong"), sentence length uniformity, vague anecdote detection ("a colleague once told me" with no names or dates), structural formulas (hook → listicle → CTA), and more.
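The extension itself presumably implements these in browser-side JavaScript; here's a minimal Python sketch of how two of the simpler signals (contraction rate and em dash frequency) could be computed. The regexes, function names, and normalization are my illustrative assumptions, not LAID's actual code.

```python
import re

def contraction_rate(text: str) -> float:
    """Fraction of contractible constructions actually contracted.

    Counts contracted forms (don't, it's, can't, ...) against expanded
    equivalents (do not, it is, cannot written as "can not", ...).
    """
    contracted = len(re.findall(r"\b\w+'(?:t|s|re|ve|ll|d|m)\b", text, re.I))
    expanded = len(re.findall(
        r"\b(?:do|does|did|is|are|was|were|have|has|had|can|could|"
        r"should|would|will)\s+not\b|\bit\s+is\b",
        text, re.I))
    total = contracted + expanded
    return contracted / total if total else 0.0

def em_dashes_per_1k_chars(text: str) -> float:
    """Em dash (U+2014) count, normalized per 1,000 characters."""
    return 1000 * text.count("\u2014") / max(len(text), 1)
```

On the thresholds above, a post scoring near 0.0 on contraction rate while leaning on em dashes would trip both signals; a casual human post typically lands at the other end.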

    The LLM layer: Optionally switch to Claude, OpenAI, or Gemini. The local heuristic results get sent alongside the post text so the model can cross-reference quantitative signals with its own qualitative reading. The prompt uses an 8-category rubric with particular attention to "engagement-optimized executive voice" — pop culture hooks, invented frameworks, clean problem-insight-moral arcs, and statistics without sourcing.
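A hypothetical sketch of how the local heuristic results might be bundled with the post text into the model prompt, as described above. The function name, payload shape, and exact wording are my assumptions; only the cross-referencing idea and rubric emphasis come from the description.

```python
import json

def build_llm_prompt(post_text: str, heuristic_scores: dict) -> str:
    """Assemble a prompt that pairs quantitative heuristic signals with
    the raw post so the model can cross-reference them qualitatively.
    (Hypothetical sketch; not the extension's actual prompt.)"""
    rubric = (
        "Evaluate the post against an 8-category rubric, with particular "
        "attention to engagement-optimized executive voice: pop culture "
        "hooks, invented frameworks, clean problem-insight-moral arcs, "
        "and statistics without sourcing."
    )
    return (
        f"{rubric}\n\n"
        f"Local heuristic signals:\n{json.dumps(heuristic_scores, indent=2)}\n\n"
        f"Post:\n{post_text}"
    )
```

The same string would presumably be sent to whichever provider is configured (Claude, OpenAI, or Gemini), with only the transport layer differing.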

    There's a "Report a bug" button right in the extension popup if you run into any issues. Install takes 30 seconds (load unpacked in Chrome).

  • anigbrowl 6 hours ago
    I bet you'd reach an even bigger market if you made this for Twitter, which is awash in this BS.