Will there still be professional software developers?

March 02, 2026

If AI agents comprehensively outclass humans at writing code, is there anything left for humans to do in the process of making software? In other words, once we have super-human AI coding agents, is there still a place for professional human software developers?

I don’t mean to take the premise for granted. It’s not hard to find AI-written code that is worse than typical human-written code, or human-written code that is better, cleaner, more performant, more beautiful than essentially all AI-written code yet produced. If not in the small, then in the large. I know of no predominantly AI-written codebase as beautiful as, say, the Clojure codebase.

But software development is perhaps the single most learnable domain for LLMs. It checks every box:

  • Wide range of fully automated verifiers: test suites, type checkers, linters, benchmarks
  • Vast amount of available training data
  • Important context is easy to gather and serialize
  • AI researchers understand the domain deeply and are regularly bottlenecked by it, motivating them to focus attention on it
  • It’s commercially valuable, and multiple well-resourced, highly talented teams are aggressively competing to tackle it
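
The first bullet is the crux: a test suite is a verifier that hands back an unambiguous pass/fail signal with no human judgment in the loop. A minimal sketch of that idea, using a test run’s exit code as the signal (the function and file names here are purely illustrative, not any real training pipeline):

```python
import os
import subprocess
import sys
import tempfile

def verify(candidate_code: str, test_code: str) -> bool:
    """Write candidate code and its tests to a scratch directory,
    run the tests, and treat the exit code as the verdict."""
    with tempfile.TemporaryDirectory() as d:
        with open(os.path.join(d, "solution.py"), "w") as f:
            f.write(candidate_code)
        with open(os.path.join(d, "check.py"), "w") as f:
            f.write(test_code)
        # Exit code 0 means every assertion passed; anything else is a fail.
        result = subprocess.run(
            [sys.executable, "check.py"], cwd=d, capture_output=True
        )
        return result.returncode == 0

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b):\n    return a - b\n"
tests = "from solution import add\nassert add(2, 3) == 5\n"

print(verify(good, tests))  # True
print(verify(bad, tests))   # False
```

Nothing in this loop needs a human: generate code, run the verifier, keep or discard. That closed loop is what makes the domain so unusually learnable.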

So whether or not we’re there yet, conditions are ripe for AI agents to take over as Earth’s most capable coders. If that happens, will there still be a role for professional software developers in software development?

Let’s start by looking at three things software developers do now besides writing code.

Gathering context

Above I said that in software development, “important context is easy to gather and serialize”. This is true. For code to run at all, it all has to make its way to one place, usually one machine, where it can be built and executed. And the software engineering community has put much work into organizing code and making it machine-accessible.

But code isn’t the only relevant context. Much of the important context is information about the human environment in which the software is created and used. Good code solves problems or satisfies desires. To write good code, you need knowledge about those problems and desires.

Some of this context is machine-accessible. Codex can connect to Slack and Google Docs and Linear, but still, even in software companies, our techno-organizational systems are much better optimized to concentrate context in humans than in machines.

Much of the important context is also fairly difficult to serialize. When you talk to a user and they say “yeah, this new feature seems really useful, thanks, I’ll definitely try it out”, is that a positive signal? Is it even possible to tell from a transcript? If a feature makes any sense at all, users almost always tell you that it “seems really useful”. They don’t want to hurt your feelings. But they might also say that if it really is useful. Distinguishing an atta-boy from a genuine positive reaction requires non-textual communication context like tone and body language, as well as wider context about the person: how much of a straight shooter are they? What’s their social relationship to the person they’re talking to? How have they reacted to prior demos?

Understanding business and organizational context is even more complex. It requires absorbing this kind of partially explicit, partially implicit, multi-channel context from possibly many other human and non-human sources and integrating it effectively to make decisions about what problems to solve, how to allocate resources, and how to prioritize work. Missing any of this context can mean costly mistakes.

Even in environments where a lot of this context is digitally accessible, where everyone is bought into the idea of making context available to agents, the ecosystem of context is constantly changing, which means you need meta-context, context about where the juiciest context lives, in order to make it available to agents. This meta-context tends to be communicated in hallway conversations and through networks of knowing-who-knows, venues that are relatively inaccessible to agents.

Interacting with other humans

A lot of what I do in my professional work is talking to people. I need bits of attention from lots of different people for my team and me to be able to do our work effectively. It’s not just to gather context, though that’s part of it. It’s also getting people to try out our demos, to help us get access to data sources and other technical resources, to vouch for us with other users, and many other things like this. Humans have a distinct advantage here, not just because we can ingest and understand the information entailed, but because people take other people more seriously than they’ll take agents.

I’ve spent most of my career building internal tools. Maybe it’s different when building for external customers, but it’s very difficult to imagine my users being willing to accept meeting requests from fully autonomous agents. Maybe they’d do it for the novelty effect the first time, but it seems to me that when people are busy, as they almost always are, they’re going to prioritize the humans in their inbox (or who walk up to them in person) over the agents.

Having people working on a project is a costly signal. Tokens just cost money, and not very much. Human software developers cost human time and attention.

You could imagine a world with hundreds of fully autonomous software development projects for every project that has human software developers involved. To people outside a project, human involvement would signal that it is being taken more seriously, at least by the people working on it, than the fully autonomous projects that may have been spawned by some thoughtless “figure out how to make money” prompt.

Beyond just being a signal that more costly resources are being expended on the project, human involvement can also mean that important non-monetary assets are being staked on the project’s success. The human software developers have reputations and careers on the line. If they put out bad software, it might not be the end of the world, but it harms them more deeply than if they were just wasting money on tokens.

You can buy human involvement with money, but it’s much more expensive than tokens, and there are forms of human involvement you can’t really buy, like genuine passion and enthusiasm for what is being built. I think people can feel when someone they’re talking to is deeply invested in something personally. We’re social creatures, and when we see signs that other people are interested in something, it makes us more interested too.

And then there’s legal liability. When you can identify specific people involved in the project, it’s easier to feel confident that if they’re negligent or even malicious, you’ll have legal recourse. In high stakes domains, this could easily disqualify software without humans in the loop from consideration.

People will also tend to fuck with fully autonomous agents. AI agents are generally trained to be helpful assistants. They’re very deferential to users and will usually take what the user says very seriously. They’re gullible. Eventually you’ll probably be able to prompt this out of them, but in the meantime, I can’t help but think we’ll see the equivalent of human drivers cutting off Waymos.

Specification

When we decide to spend tokens to have an agent write software, we generally have some intention for that software, some goal we want it to achieve, some itch we want it to scratch. It might feel like we can easily describe this goal, if not with a short prompt, then with a few pages of specification at most.

Human software engineers, who have for decades started projects with detailed specs, can tell you that while getting clear about your specifications up front can certainly help get a project started off on the right foot, this kind of informal spec doesn’t come anywhere close to actually specifying every detail of the finished piece of software.

There actually are domains of software where we do write such detailed specs up front that all relevant properties of the software are well defined before we implement. You see this in fields like networking protocols and cryptography. But these specifications are so tedious and formal that they’re essentially a form of code, and they require as much or more effort to write than the software they specify.

You could delegate the specification to an agent, providing a rough sketch of what you want and then just accepting the decisions it makes to fill in the gaps, but unless you’re really lucky, you’ll find that the agent doesn’t get particularly close to the vision you had in your head. The design space you’re trying to navigate is too vast, and there are too many quite different interpretations of any given short spec.

There’s no way around the fact that if you want to realize any specific vision for a large piece of software, you’ll need to communicate a large volume of details about what that vision is.

This doesn’t mean you need to write an exhaustive specification document up front. For most forms of software, the bulk of the specification happens as you build it. There’s an important reason for this. Often you can’t make many of the design decisions until you’ve already built much of the system. You need to flesh out enough of the UI components and see how they fit together to know if your interface makes sense. You need to process some real data before you know if your data model works. It’s a bit like decorating a home: it’s hard to do before you’ve spent some time there.

Specification in this sense is an interactive process, a conversation with the material [1]. Seeing the partially completed software creates a new, deeper understanding of your goal, and that new understanding enables you to see new opportunities to specify more details. Then those new details suggest further new understandings and opportunities for details.

But again, if you want to have any control over what software is eventually made, you need to be there. There are no shortcuts in specification. You need to see those intermediate states, you need to feel their weight in your hand. You need to be the one specifying details and having your understanding of the problem deepened. Otherwise you’ll get software that reflects the agent’s preferences, not yours.

Will software development still be a career?

I’ve made the case that there are important roles for humans to play in software development besides coding, roles that are structurally more difficult for AI agents to play in our place, but will “software developer” be a job title like it is today? Will you be able to make a living doing it?

I think yes, at least until economic activity in general becomes something quite unrecognizable.

Specification is the most important reason. As long as people care about getting software that meets their preferences, they’ll need to say what those preferences are. This is more fundamental than the other reasons for human involvement. We could digitize enough of our context that agents could access it without us, and humans could be involved just enough to create a signaling effect and accept legal liability without having much to do with the creation of the software itself. But if nobody says what they want, that information simply won’t exist.

Still, this doesn’t necessarily imply that specification will be done by professional developers. Can’t end users do it?

Professional developers have a deeper background of experience with the material of software. They have a large repertoire of solutions from past work to draw on for inspiration and analogy. They have a richer representation of the design space, and can see possibilities that end users can’t. Not just implementation possibilities, but design and functional possibilities.

It also takes a lot of time to specify things, and some people have much greater appetites for spending hours on end describing and evaluating software. Despite the fact that non-expert end users can vibe code now, it’s the software developers who have adopted AI coding agents with the most enthusiasm. For many professional software developers, myself included, coding agents have made coding much more fun. That’s because to us, building software already was fun, and now we can do it faster and spend less time on the particular parts we find less fun.

You might have fantasized about one day building a dream home. You might have even sketched some blueprints. Still, if you were going to actually set about building the home, you would probably seek out a professional architect. Why? It’s not because only they have the technical skill of drawing precise blueprints. They use computers for that. It’s not just to make sure your design will be structurally sound. They rely on engineers for that. You go to them because they can help you design a better house than you could have on your own. Even starting from a vision you brought them, they’ll do a better job of working out all the nitty gritty details. And they’ll enjoy it more than you would.

But if the end user is delegating specification anyway, can’t they delegate it to an agent rather than an expensive human software developer? Can’t they simply describe the software they want in their non-expert way and then let the agent undergo this process of iterative design, development, and criticism? Can’t agents even schedule interviews and demos where they ask end users questions and get their feedback?

Well, yeah, perhaps software could be built that way. One of my takeaways from thinking through the issues in this essay was that I should be on the lookout for exactly that. Not just agents getting even better at writing code, but agents that identify opportunities for software, set out to build it on their own, eliciting requirements from end users and getting their feedback. I don’t see much of this happening today.

I assume things will move in that direction over time, but because of our structural advantages in the two other functions we discussed, gathering context and interacting with other humans, I think human professionals will be much more competitive in this specification delegate role than we are in writing code. Human professionals will be more expensive, but at least you won’t need many of them given that they’ll be working with increasingly capable AI coding agents.

At the high end, I expect that there will be a premium for human design, and even a kind of auteur effect, where the particular taste of particular software developers becomes something we value intrinsically.

It’s also quite plausible that the humans involved in the process will stay deeply involved in the material of software, but less involved in code specifically. We may live permanently at a higher level of the stack, thinking about interfaces, data models, and abstract control flow, but not about functions and classes. This is how most of my team works today.

I don’t mean to be proclaiming that humans will be involved in making software for all of eternity. As I noted, perhaps we’ll gradually build agents that can do all three of these activities better than the best humans. Or perhaps we’ll build superintelligence, at which point all economically valuable activity might be done by agents at the direction of other agents and nobody will need to do anything “for a living” again. And before that, surely the role of software developers will change quite a lot.

I recently became an engineering manager. Most days, I’m much further from the code than I used to be. Most of my time is spent gathering and sharing context, interacting with other humans, and trying to figure out what my team should focus on. My team doesn’t currently have a product manager, but these are largely the things product managers do too. Maybe in the short-to-medium term, the job of IC [2] engineers will start to look more like product management or more like the job of an engineering manager who happens not to manage any (human) engineers.

You never know what the future might hold, but I don’t think that even radically better coding agents would imply a world without professional software developers. There’s much more to the job than coding, and there always has been. Our task then is to learn to lean harder into parts of the job where we have structural advantages over AI agents, particularly our ability to understand the problems of the people around us and build things they love.


  1. An expression I’m borrowing from The Reflective Practitioner: How Practitioners Think in Action by Donald Schön, which describes how professionals in different domains frame and then solve problems. Highly recommended—particularly for the fascinating set of case studies showing how real architects, clinical psychologists, and city planners do their work.
  2. IC = individual contributor, an employee who doesn’t manage other employees.