1. Exploration: LLM first, docs second; cuts discovery time by ~3x.
2. Boilerplate: AI generates, I refactor on the spot; never merged blindly.
3. Code review: a bot leaves a first-pass checklist; humans focus on architecture.
4. Legacy spelunking: 200k-context summary + mermaid call-graph.
5. Rule of three: AI writes glue, I write core, tests cover both.
Result: 30-40% more features shipped per quarter without quality drop.
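The mermaid call-graph from item 4 can be bootstrapped mechanically rather than drawn by hand. Here's a minimal sketch using Python's stdlib `ast` module (the sample functions `handler`, `process`, etc. are made up for illustration); it only catches direct calls to plain names, but that's often enough to orient yourself in legacy code:

```python
import ast

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each function defined in the source to the plain names it calls."""
    tree = ast.parse(source)
    graph: dict[str, set[str]] = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Collect simple-name calls like foo(); skips methods/attributes.
            calls = {
                c.func.id
                for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
            }
            graph[node.name] = calls
    return graph

def to_mermaid(graph: dict[str, set[str]]) -> str:
    """Render the call graph as a mermaid flowchart."""
    lines = ["graph TD"]
    for caller, callees in sorted(graph.items()):
        for callee in sorted(callees):
            lines.append(f"    {caller} --> {callee}")
    return "\n".join(lines)

sample = """
def handler(req):
    validate(req)
    return process(req)

def process(req):
    save(req)
"""
print(to_mermaid(call_graph(sample)))
# graph TD
#     handler --> process
#     handler --> validate
#     process --> save
```

Paste the output into any mermaid renderer; the 200k-context summary plus a picture like this makes the first hour in an unfamiliar codebase much less painful.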
I use GitHub Copilot every day, usually limiting it to a fancy autocomplete. It's really helpful for repetitive refactoring tasks: I just go to each line and it suggests the update. This way I still see what's happening, but it's faster than manually making changes that can't be solved with find-and-replace.
Sometimes I fight Copilot because it keeps suggesting something I don't want to do. In those instances I code faster so as to outpace the AI's latency.
Other devs are building MCP servers so AI integrations can access our tools. DevOps seems heavy on spec-driven and test-driven development through AI. Spec-driven development looks interesting, but there's a bit of overhead to get started.
Our company has an AI-first directive right now: we're supposed to use AI for everything and see what works. I somewhat disdain it, but it's also fun to have a mandate to try new things (using AI) indiscriminately. The more I drink the Kool-Aid, the better it tastes.