Spotify built Fleet Management back in 2022 to apply changes across thousands of repos. Half their PRs were already flowing through it before any AI was involved, but only for mechanical stuff like dep bumps and config updates. Claude Code is what let it understand what the code means, not just its structure.
The thing nobody's talking about is that none of this AI automation works without Backstage doing the boring work of tracking who owns what, how things build, and what depends on what.
I went through the Anthropic case study, Spotify's engineering blog series, and the full earnings call transcript here https://www.everydev.ai/p/blog-spotify-built-an-ai-coding-ag...
Statements like this raise fair questions. Is there really that much duplicated code across thousands of repos, and if so, why respond by increasing the surface area further with bespoke tooling?
Do you mean that Backstage has the metadata like what services call which other services, and AI needs that to make changes safely? Sounds helpful to both AI and human developers ;-)
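To make the point concrete, here's a minimal sketch of why that metadata matters. The catalog shape and service names below are invented for illustration (Backstage's real model has components, APIs, systems, etc.); the idea is just that an automated change needs to know a service's blast radius and owners before touching it:

```python
from collections import defaultdict

# Hypothetical catalog: service -> owner + declared dependencies.
# Invented data; stands in for what Backstage tracks.
catalog = {
    "playlist-api": {"owner": "team-playlists",
                     "depends_on": ["user-service", "track-metadata"]},
    "user-service": {"owner": "team-identity", "depends_on": []},
    "track-metadata": {"owner": "team-catalog",
                       "depends_on": ["user-service"]},
}

def dependents_of(service):
    """All services that transitively depend on `service`
    (i.e. the blast radius of changing it)."""
    reverse = defaultdict(set)
    for name, meta in catalog.items():
        for dep in meta["depends_on"]:
            reverse[dep].add(name)
    seen, stack = set(), [service]
    while stack:
        for d in reverse[stack.pop()]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# Before an automated change to user-service, we know what it
# can break and whose tests to run:
blast = dependents_of("user-service")   # {"playlist-api", "track-metadata"}
owners = {catalog[s]["owner"] for s in blast}
```

Without something like this, an AI agent editing a shared service is flying blind; with it, "safe" at least means "knows who's downstream."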
It's good at non-critical things like logging or brute-force debugging where I can roll back after I figure out what's going on. If it's something I know well, I can coax a reasonable solution out of it. If it's something I don't know, it's easy to get it hallucinating.
It really goes off the rails once the context picks up some incorrect information, and for things I don't understand thoroughly, I keep finding I've poisoned the context myself just by asking questions about how things work. Tools like the /ask mode in Aider help, and I suspect it's a matter of learning how to use the tooling, so I keep trying.
I'd like to know whether the AI is writing code their best developers couldn't have written on their own, or only code they could have written anyway, because that distinction has a huge impact on the efficiency gains, right? If it can accelerate my work, that's great, but there's still a limit to the throughput, and that isn't what the AI companies are selling.
I do believe there are gains in efficiency, especially if we can have huge contexts the AI can recall and explain to us, but I'm extremely skeptical of who's going to own that context and how badly they're going to exploit it. There are significant risks.
If someone can do the work of 10 people with access to the lifetime context of everyone who's worked on a project / system, what happens if that context / AI memory gets taken away? In my opinion, there needs to be a serious conversation about context ownership before blindly adopting all these AI systems.
In the context of Spotify in this article, who owns the productivity increase? Is it Spotify, Anthropic, or the developers? Who has the most leverage to capture the gains from increasing productivity?
They did, however, remove many useful ones.