According to MIT Technology Review, the software development industry is undergoing a significant shift in 2025, moving from Andrej Karpathy’s viral “vibe coding” concept toward more structured “context engineering.” The change is documented in the latest Thoughtworks Technology Radar, which tracks technologies used by teams on client projects. Karpathy coined “vibe coding” in February 2025 and the term took the industry by storm, but by April, Thoughtworks was already expressing skepticism on its technology podcast. The report notes that antipatterns have proliferated, with complacency about AI-generated code becoming a major concern. That has driven growing interest in engineering context properly, especially as prompts balloon and model reliability falters. The industry is now reckoning with context management as AI agents and agentic systems become more prevalent.
The vibe coding hangover
Here’s the thing about vibe coding – it sounded cool and revolutionary when Karpathy first tweeted about it, but the reality has been messy. Basically, developers got a bit too comfortable just vibing with AI assistants rather than properly engineering their prompts. And guess what happened? The same pattern we’ve seen with every new technology – initial excitement followed by the realization that quality matters.
Thoughtworks was cautious from the start, and their concerns proved valid. They’ve documented what they call “complacency with AI-generated code” as a real problem. It’s not that the code is bad – it’s that developers stopped questioning it. They stopped providing the right context. And when you’re working with complex systems, that’s a recipe for disaster.
Why context engineering matters now
So we’re seeing this pivot toward what Thoughtworks calls “context engineering” or “knowledge priming.” It’s basically the recognition that throwing bigger prompts at AI isn’t the solution. You need to be strategic about what context you provide and how you provide it.
Tools like Claude Code and Augment Code are showing that when you properly prime AI with the right context, you get dramatically better results. But here’s the counterintuitive part – sometimes less is more. Thoughtworks found that for forward engineering, AI actually performs better when it’s further abstracted from legacy code specifics. The solution space becomes wider, letting the AI be more creative.
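That “strategic about context” idea can be made concrete with a toy sketch. The snippet below is an illustrative assumption, not how Claude Code or Augment Code actually work: it ranks candidate code snippets by naive keyword overlap with the task and keeps only what fits a small word budget, so the prompt stays focused rather than simply bigger. The scoring rule, snippet contents, and budget are all made up for illustration.

```python
# Toy sketch of "knowledge priming": rather than dumping a whole codebase
# into a prompt, score candidate snippets against the task and keep only
# the most relevant ones within a word budget. (All names are illustrative.)

def score(snippet: str, task: str) -> int:
    """Naive relevance: count distinct task words appearing in the snippet."""
    words = {w.lower() for w in task.split()}
    return sum(1 for w in words if w in snippet.lower())

def prime_context(snippets: list[str], task: str, budget: int) -> str:
    """Pick the highest-scoring snippets until the word budget runs out."""
    ranked = sorted(snippets, key=lambda s: score(s, task), reverse=True)
    chosen, used = [], 0
    for s in ranked:
        n = len(s.split())
        if used + n > budget:
            continue
        chosen.append(s)
        used += n
    return "\n---\n".join(chosen)

snippets = [
    "def charge_card(amount): ...  # payment gateway wrapper",
    "def render_footer(): ...      # static page template",
    "def refund(amount): ...       # reverses a card charge",
]
context = prime_context(snippets, task="fix the refund flow for card charges",
                        budget=12)
# With a tight budget, only the refund snippet survives the cut.
```

Real tools use far better retrieval than keyword overlap, but the design point is the same: a deliberate selection step between the codebase and the model, instead of the vibe-coding default of pasting everything in.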
The legacy code goldmine
One of the most exciting applications is using generative AI to understand legacy systems. Thoughtworks reports good results using AI to make sense of legacy codebases, and has even explored rebuilding applications without full access to the original source code.
Think about what that means for enterprises sitting on decades of poorly documented legacy systems. This isn’t just about writing new code faster – it’s about understanding and modernizing the mountains of existing code that businesses depend on. That’s where the real productivity gains might come from.
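One plausible shape for that legacy-understanding workflow is a summarize-and-roll-up pipeline: split a poorly documented file into function-sized chunks, summarize each, then ask for a system-level overview of the summaries. The sketch below assumes this pattern; `ask_llm` is a stand-in stub, not a real model API, and the splitting heuristic is deliberately simple.

```python
import re

# Hypothetical sketch: chunk a legacy Python file at top-level `def`
# boundaries so each function can be summarized separately, then roll the
# summaries up into one overview. `ask_llm` is a placeholder, not a real API.

def split_functions(source: str) -> list[str]:
    """Split a source file into chunks starting at each top-level `def`."""
    parts = re.split(r"(?m)^(?=def )", source)
    return [p for p in parts if p.strip()]

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a dummy summary."""
    return f"[summary of {len(prompt)} chars of context]"

def summarize_legacy(source: str) -> str:
    """Per-function summaries first, then a system-level roll-up."""
    chunk_summaries = [ask_llm("Explain this function:\n" + chunk)
                       for chunk in split_functions(source)]
    return ask_llm("Describe the system these parts form:\n"
                   + "\n".join(chunk_summaries))

legacy = "def a():\n    pass\n\ndef b():\n    pass\n"
overview = summarize_legacy(legacy)
```

The chunking step matters because whole legacy files rarely fit, or fit usefully, in one prompt; summarizing pieces and then summarizing the summaries keeps each call inside a workable context window.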
The agent wake-up call
Now here’s what’s really driving this shift: AI agents. Everyone wants to build them or use them, but agents forced the industry to confront a hard truth. They need significant human intervention to handle complex, dynamic contexts. You can’t just vibe your way through building reliable agentic systems.
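What “significant human intervention” can look like in practice is an approval gate inside the agent loop: safe actions run automatically, anything risky pauses for a human decision. The sketch below is an illustrative pattern, not any particular framework’s API; the action names, plan format, and `approve` callback are all assumptions.

```python
# Toy human-in-the-loop agent loop: auto-run actions on an allowlist,
# gate everything else behind a human approval callback.
# (Action names and the plan structure are illustrative assumptions.)

SAFE_ACTIONS = {"read_file", "run_tests"}

def run_agent(plan: list[dict], approve) -> list[str]:
    """Execute a plan, asking a human before any non-allowlisted action."""
    log = []
    for step in plan:
        action = step["action"]
        if action not in SAFE_ACTIONS and not approve(step):
            log.append(f"skipped {action} (human declined)")
            continue
        log.append(f"ran {action}")
    return log

plan = [
    {"action": "read_file", "target": "billing.py"},
    {"action": "delete_table", "target": "invoices"},
    {"action": "run_tests", "target": "tests/"},
]
# A cautious reviewer who declines anything destructive:
log = run_agent(plan, approve=lambda step: step["action"] != "delete_table")
# → ["ran read_file", "skipped delete_table (human declined)", "ran run_tests"]
```

The point of the pattern is exactly the article’s: reliability comes from engineered checkpoints in the loop, not from trusting the agent to vibe its way through a destructive step.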
So we’re seeing the maturation of AI in software development. The initial “wow, it writes code!” phase is over. Now we’re in the “how do we make this actually work in production?” phase. And that requires engineering discipline, not just good vibes. The industry is growing up, and honestly, it’s about time.
