Will LLMs and Vibe Coding Fuel a Developer Renaissance?
While the actual usefulness of LLMs is still debated, one thing is certain: engineers are being asked to do more with less.
Between companies increasingly using large language models (LLMs) in their development process, such as Microsoft writing up to 30% of its codebase using AI, and site reliability engineers (SREs) adopting incident vibing, it is clear that software practices are evolving.
Despite model makers pushing the narrative that fully autonomous AI development agents are coming very soon, the consensus remains that having a human in the loop is here to stay, at least for some time. So will AI fuel a developer renaissance?
The Shift to Multi-Agent Workflows
I recently moderated a Rootly AI Labs panel on the topic. Solomon Hykes, CEO of Dagger and founder of Docker, argued that while the industry has been busy figuring out single-agent approaches, multi-agent setups represent the next frontier.
For example, the startup Factory introduced “Droids,” software agents designed to handle parallel development tasks. One agent could manage code refactoring, while another could conduct a code review, and yet another could handle the task backlog on Linear, prioritizing and assigning tickets.
These setups shift a developer’s role from direct technical tasks to managing and verifying the work of these agents, turning developers into engineering managers.
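To make the pattern concrete, here is a minimal sketch of the fan-out-and-verify workflow described above. The agent functions are hypothetical stubs standing in for real LLM-backed agents (Factory's actual Droids API is not public in this form); the shape of the code is the point: independent tasks run in parallel, and the developer's job becomes reviewing the results.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for LLM-backed agents.
def refactor_agent(module: str) -> str:
    return f"refactored {module}"

def review_agent(pr: str) -> str:
    return f"reviewed {pr}"

def backlog_agent(ticket: str) -> str:
    return f"triaged {ticket}"

def run_agents() -> list[str]:
    """Fan out independent tasks to agents, then gather results for review."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [
            pool.submit(refactor_agent, "billing.py"),
            pool.submit(review_agent, "PR-42"),
            pool.submit(backlog_agent, "LIN-101"),
        ]
        results = [f.result() for f in futures]
    # The developer's new role: verify agent output before anything ships.
    approved = [r for r in results if r]  # placeholder for human review
    return approved
```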
Anthropic recently released a blueprint on building multi-agent systems, based on lessons from its Research feature, which coordinates multiple Claude agents to explore complex topics.
The report highlights that in agentic systems, small issues that would be minor in traditional software can compound and derail workflows entirely, making the gap between prototype and production wider than expected.
Getting multi-agent systems to work reliably turns the “last mile” into most of the journey, and developers are the ones responsible for making it happen.
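A back-of-the-envelope calculation shows why small issues compound. Assuming each step in an agent chain succeeds independently with probability p, a workflow of n steps completes cleanly with probability p^n; a step that is 99% reliable looks fine in isolation but sinks a long chain:

```python
def chain_success(p_step: float, n_steps: int) -> float:
    """Probability an n-step agent chain completes with no step failing,
    assuming independent per-step success probability p_step."""
    return p_step ** n_steps

# One 99%-reliable step: ~0.99. Fifty chained steps: ~0.61 --
# the chain fails roughly two times in five.
single = chain_success(0.99, 1)
chained = chain_success(0.99, 50)
```

The independence assumption is a simplification (real agent errors can correlate or self-correct), but it captures why the gap between prototype and production widens as chains grow.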
Developer Roles Are Expanding
As developers transition into managing teams of AI agents, their roles naturally broaden beyond purely technical tasks. Malika Aubakirova, partner on the AI infrastructure team at Andreessen Horowitz, highlighted the rise of nano unicorns: fast-growing, high-revenue startups with small teams, like Cursor, which reached $300 million in annual recurring revenue (ARR) with just 20 employees.
She noticed consistent patterns across these companies. First, they augment their teams with AI agents across engineering, product development and customer-facing functions. In this model, AI isn’t a side tool; it’s treated as infrastructure and is central to how work gets done.
Second, these startups frequently employ generalists rather than specialists. For example, in such environments, engineers aren’t limited to backend or frontend tasks; they would contribute across the entire application life cycle and even assist with go-to-market initiatives. This shift is redefining team structures, tooling and what it means to scale a modern software company.
A panel discussion at AWS Builder Loft on the future of developers in the AI era.
This trend isn’t limited to startups; it’s also playing out inside large, established tech companies. A senior engineering leader at LinkedIn, who asked to remain anonymous, noted that role expectations have significantly expanded. Engineers are now expected to operate across multiple functions, acting not just as developers but also as project managers, data scientists, and SREs, while leveraging AI agents to execute across these domains.
Challenges for Reliability and SRE Teams
Despite the productivity boosts from AI agents, their adoption poses reliability challenges. Kevin Van Gundy, CEO at Hypermode and former COO at Vercel, emphasized that the nondeterministic nature of LLMs, which can produce hallucinations, is generating a new class of unusual incidents. Handling incidents in deterministic systems was already complex; now imagine doing it when the system itself can't be trusted to behave the same way twice.
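One common defensive pattern against this nondeterminism (a general practice, not something prescribed by the panel) is to validate every model response against an expected shape and retry on failure, rather than assuming the same prompt yields the same output. A minimal sketch, with a stubbed model standing in for a real LLM call:

```python
import json
import random

def flaky_model(prompt: str, rng: random.Random) -> str:
    """Stub for a nondeterministic LLM: sometimes valid JSON, sometimes chatter."""
    return '{"status": "ok"}' if rng.random() > 0.5 else "Sure! Here is the JSON:"

def call_with_validation(prompt: str, retries: int = 5, seed: int = 0) -> dict:
    """Retry until the response parses as JSON; fail loudly otherwise."""
    rng = random.Random(seed)
    for _ in range(retries):
        raw = flaky_model(prompt, rng)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # same prompt, different shape next time
    raise RuntimeError("model never returned valid JSON")
```

The guard does not make the system deterministic; it makes the nondeterminism observable and bounded, which is what incident responders need.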
Hykes noted that as LLMs become embedded at every stage of the software development life cycle, from authoring application code and creating other agents, to running tests, provisioning infrastructure and handling monitoring, the number of places where things can go wrong is increasing.
SREs, as the last line of defense between sanity and chaos, should brace for the growing volume and complexity of incidents heading their way. The good news is that they'll also become more sought-after talent, and platform teams in particular will hold the keys to providing the infrastructure for these agentic workflows at scale.
Navigating the AI-Driven Market: Skills for Developers
So, what should engineers do to stay up to date? Van Gundy encourages engineers to keep building using the latest and hottest tools, such as Replit and Lovable. The focus should be on expanding beyond purely technical skills and developing strong product intuition and UX expertise. Rapid development capabilities alone won’t guarantee success without a polished product.
By contrast, Mike Chambers, specialist developer advocate for machine learning at Amazon Web Services, recommends that developers build a deep understanding of the technology underlying LLMs. Learning foundational AI concepts, such as transformers, can significantly improve engineers' effectiveness when leveraging these tools. Like any other system, LLMs have strengths and weaknesses, and you shouldn't use a hammer to drive a screw.
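To illustrate the kind of foundational concept Chambers means, here is a stripped-down sketch of scaled dot-product attention, the core operation inside a transformer, for a single query in pure Python (real implementations are batched and matrix-based, but the mechanics are the same):

```python
import math

def softmax(xs: list[float]) -> list[float]:
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(q: list[float], keys: list[list[float]],
              values: list[list[float]]) -> list[float]:
    """Scaled dot-product attention for one query vector: score the query
    against each key, softmax the scores, and mix the values accordingly."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key more strongly, so the output
# leans toward the first value vector.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

Seeing that an LLM's output is a similarity-weighted blend of what it has attended to helps explain both its strengths (context sensitivity) and its weaknesses (confident blending of the wrong sources).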
The Developer Renaissance Is Underway
The panel consensus was that LLMs indeed offer a potential renaissance for developers by significantly expanding their roles. Success in this new era will likely depend on blending human oversight with AI capabilities, balancing technical depth with product sensibility.
In the future, engineers’ performance reviews may focus less on individual execution and more on how effectively engineers manage and direct their agent workforce.