The split is roughly 30% all-in (Claude Code or Cursor for everything), 50% selective (AI for boilerplate, tests, and docs, but core logic still hand-written), and 20% holdouts.
What I've noticed on PR velocity: it went up initially, then plateaued. PRs got bigger, and bigger PRs take longer to review. We actually had to introduce a "max diff size" policy because AI-assisted PRs were turning into 800+ line monsters that nobody could review meaningfully.
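If you want to enforce something like that, it's a small CI gate. Here's a minimal sketch in Python; the 400-line budget and the origin/main base ref are illustrative placeholders, not our exact settings:

```python
#!/usr/bin/env python3
"""Fail CI when a PR's diff exceeds a line budget.

Sketch only: MAX_CHANGED_LINES and BASE_REF are illustrative
placeholders, not a specific team's real settings.
"""
import subprocess
import sys

MAX_CHANGED_LINES = 400   # added + removed lines allowed per PR
BASE_REF = "origin/main"  # branch the PR merges into

def changed_lines(base_ref: str) -> int:
    # --numstat prints "added<TAB>removed<TAB>path" per file;
    # binary files report "-" for both counts, so skip those.
    # The triple-dot form diffs against the merge base, which is
    # what you want for PR semantics.
    out = subprocess.check_output(
        ["git", "diff", "--numstat", f"{base_ref}...HEAD"], text=True
    )
    total = 0
    for line in out.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-" and removed != "-":
            total += int(added) + int(removed)
    return total

if __name__ == "__main__":
    n = changed_lines(BASE_REF)
    if n > MAX_CHANGED_LINES:
        print(f"Diff is {n} changed lines (budget: {MAX_CHANGED_LINES}); split this PR.")
        sys.exit(1)
    print(f"Diff size OK: {n} changed lines.")
```

Run it as a required status check on the PR branch; anything over budget fails and forces the author to split the change.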
The quality concern that keeps coming up: security. AI-generated code tends to take shortcuts on auth, input validation, and error handling. We've started running dedicated security scans tuned specifically for the patterns AI likes to produce. That's been the biggest process change.
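To make "patterns AI likes to produce" concrete: think swallowed exceptions, TLS verification switched off, SQL assembled with f-strings. A real pipeline would encode these as Semgrep or CodeQL rules, but a toy regex scan shows the category; the patterns below are illustrative examples, not a production ruleset:

```python
#!/usr/bin/env python3
"""Toy scan for shortcut patterns common in generated code.

Illustrative only: a real pipeline would use Semgrep/CodeQL rules;
these regexes just demonstrate the category of checks.
"""
import re
import sys
from pathlib import Path

PATTERNS = {
    "swallowed exception (except ...: pass)": re.compile(r"except[^\n:]*:\s*pass\b"),
    "TLS verification disabled": re.compile(r"verify\s*=\s*False"),
    "SQL built with an f-string": re.compile(r"\.execute\(\s*f[\"']"),
}

def scan(path: Path) -> list[str]:
    text = path.read_text(errors="ignore")
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            # Convert the match offset to a 1-based line number.
            lineno = text.count("\n", 0, match.start()) + 1
            hits.append(f"{path}:{lineno}: {name}")
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    findings = [hit for p in root.rglob("*.py") for hit in scan(p)]
    print("\n".join(findings) if findings else "No findings.")
    sys.exit(1 if findings else 0)
```

The point isn't the regexes themselves; it's that generated code tends to fail in predictable, greppable ways, so scanning for exactly those shortcuts pays off.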
Net effect: probably 20-30% faster on feature delivery, but we're spending more time on review and security validation than before.