    Opinion

    AI Is Replacing 80% of Coding. These Are the Skills That Will Still Matter.

    AI has replaced roughly 80% of what we traditionally called “coding skills.” It will keep replacing more. A handful of capabilities remain human. I’m not panicking, but I am paying attention.

    According to Harness’s 2025 State of AI in Software Engineering report, 72% of organizations have experienced at least one production incident directly caused by AI-generated code. AI writes code faster than any human ever could. It also breaks things faster than any human ever could.

    I’m writing more code now than at any point in the past twenty years. When I say “writing,” I mean guiding the process, shepherding syntax, reviewing output. The actual generation isn’t me anymore.

    This November marks 30 years in infrastructure, operations, DevOps, and platform engineering. Red Hat. ThoughtWorks. AWS. KodeKloud. I’ve watched every “this will replace engineers” wave come and go. Mainframes to client-server. Waterfall to Agile. On-prem to cloud. VMs to Kubernetes. Internal developer platforms to platform engineering.

    Each transition killed certain tasks while making others more valuable. AI-assisted coding follows the same pattern. The code is being automated. The engineering is not.

    Some lower-level engineering skills are disappearing. Some design decisions that once required years of experience are now handled by AI in seconds. But there are capabilities that AI genuinely cannot replicate. These are worth mastering now because they’re becoming scarcer.


    Human Connection

    The most critical moments in any engineering organization aren’t technical. They’re human. The production incident where someone needs to make the call to roll back or push forward. The architecture review where senior engineers have incompatible visions. The retrospective where a team needs to acknowledge failure without assigning blame, where you’re either building a culture of safety or a culture of fear.

    I’ve trained hundreds of engineers across multiple organizations. The ones who become truly senior aren’t distinguished by their technical knowledge. Technical knowledge can be acquired. They’re distinguished by their ability to build trust, navigate conflict, create psychological safety, and communicate under pressure.

    AI can answer technical questions. It cannot sit with a junior developer who just caused their first production incident and help them process the experience without shame. It can’t read the exhaustion or worry in a team standup. It can’t advocate for sustainable pace based on nonverbal cues it will never perceive.

    Engineering is a team sport. The human skills are the sport itself.


    AI Cannot Be Sued. You Can.

    If you blindly accept AI-generated code and it causes a data breach, you’re liable. If AI hallucinates a GPL-licensed snippet into your proprietary codebase, you’re liable. If an AI-generated algorithm introduces bias that harms users, you’re liable. Engineers are the accountability shield between AI capabilities and organizational risk.

    This isn’t paranoia. AI operates without consequences; humans operate within systems of professional responsibility, legal liability, and ethical obligation. That asymmetry is just reality.

    When I review AI-generated code, I’m not just checking for bugs. I’m checking for license compliance, security vulnerabilities, privacy implications, and alignment with documented architecture decisions. The AI may not know we’re in a regulated industry with specific audit requirements. It may not be aware of institutional dependencies that matter to how the codebase actually functions.

    Can we provide that context to AI? Yes, and we should. But accountability requires understanding consequences at a strategic level. AI generates outputs. Humans own outcomes.
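
    To make the license-compliance piece concrete, here’s a minimal sketch of the kind of pre-merge gate I mean. It’s a toy: the regex, the base branch, and the exit-code convention are all illustrative, and real scanning belongs to purpose-built tools like ScanCode or FOSSA. The point survives the simplification: the gate can flag a problem, but only a human can own the exception.

    ```python
    import re
    import subprocess
    import sys

    # Toy pre-merge gate: flag added lines that look like GPL-licensed text
    # so a human reviews provenance before merge. Illustrative only; a real
    # pipeline would use a dedicated license scanner.
    GPL_MARKERS = re.compile(
        r"GNU (Lesser |Affero )?General Public License|GPL-[23]\.0",
        re.IGNORECASE,
    )

    def added_lines(base: str = "origin/main") -> list[str]:
        """Lines added on the current branch relative to `base`."""
        diff = subprocess.run(
            ["git", "diff", base, "--unified=0"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [
            line[1:] for line in diff.splitlines()
            if line.startswith("+") and not line.startswith("+++")
        ]

    def main() -> int:
        hits = [line for line in added_lines() if GPL_MARKERS.search(line)]
        for hit in hits:
            print(f"possible GPL-licensed text: {hit.strip()}")
        return 1 if hits else 0  # nonzero exit fails the CI job

    if __name__ == "__main__":
        sys.exit(main())
    ```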


    Strategic Systems Thinking

    AI optimizes for today. Good systems designers think about evolutionary architectures. Who maintains 10,000 AI-generated test cases when the schema changes?

    Hopefully AI. But just because we can do something quickly and repeatedly doesn’t mean we should.

    I see teams falling into this trap constantly. AI generates tests faster than humans can read them. Teams generate thousands of tests, achieve 95% coverage, declare victory. Six months later, they’re modifying 400 tests because the codebase changed. Will AI handle that maintenance? Probably. But is that approach strategically sound? Is testing being thoughtfully applied, or just blindly applied because we now have the capability?
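
    Here’s a minimal sketch of the distinction, using a hypothetical order schema (the names are invented). Instead of thousands of generated tests that each hard-code field names, derive the tests from one definition, so a schema change is one edit rather than four hundred:

    ```python
    import pytest

    # Hypothetical order schema: one source of truth the tests derive from.
    # Rename a field here, once, instead of editing hundreds of generated tests.
    ORDER_SCHEMA = {
        "order_id": str,
        "customer_id": str,
        "total_cents": int,
    }

    def validate_order(order: dict) -> bool:
        """Toy validator standing in for real application code."""
        return all(
            field in order and isinstance(order[field], expected_type)
            for field, expected_type in ORDER_SCHEMA.items()
        )

    VALID_ORDER = {"order_id": "o-1", "customer_id": "c-1", "total_cents": 1999}

    @pytest.mark.parametrize("missing_field", ORDER_SCHEMA)
    def test_rejects_order_missing_any_required_field(missing_field):
        order = dict(VALID_ORDER)
        del order[missing_field]
        assert not validate_order(order)
    ```

    The validator is a stand-in. The point is that the maintenance cost was decided before a single test was generated.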

    Strategic thinking asks uncomfortable questions:

    • How do we audit code produced faster than it can be reviewed?
    • What’s our plan when the model we depend on gets deprecated or changes behavior?
    • Who owns the technical debt that AI generates at scale?
    • What happens when the engineer who understands the AI-generated codebase leaves?

    AI is an incredible force multiplier for producing artifacts. It has no concept of maintaining them. Every line of code, human or AI-generated, is a liability that someone will have to understand, modify, and debug for years to come.

    Velocity without sustainability is just faster accumulation of technical debt.


    Translating Business Needs

    When a stakeholder says “make it faster,” AI starts coding. A human asks: “How much are you willing to pay for that speed?”

    That’s what architects do. They translate business needs into technical reality while surfacing the tradeoffs. When someone says they want 100% uptime, the architect asks what that means, what it costs, and what it implies for security and operations. When someone wants more resilience, the architect might respond: “That’s a million-dollar DR plan. Here’s what you get for that investment.”
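
    The arithmetic behind that conversation is trivial to sketch; the hard part is attaching a price tag to each tier:

    ```python
    # Back-of-the-envelope: what each additional "nine" buys in downtime per year.
    # The dollar cost of earning each tier is the real architectural conversation.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for availability in (0.999, 0.9999, 0.99999):
        downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
        print(f"{availability:.3%} uptime -> {downtime_minutes:,.1f} min/year down")

    # 99.900% uptime -> 525.6 min/year down
    # 99.990% uptime -> 52.6 min/year down
    # 99.999% uptime -> 5.3 min/year down
    ```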

    “Make it faster” could mean any number of things:

    • Our competitor launched a faster product and I’m panicking
    • One customer complained and they happen to be loud
    • I don’t understand why this takes time and I need education, not optimization
    • We’re actually willing to spend $500,000 on infrastructure to shave off 200 milliseconds

    AI cannot read the room. It can’t notice that the stakeholder’s real concern is job security, not system performance. It can’t recognize when the correct answer is “your current speed is actually fine, here’s the data” rather than immediately jumping to implementation.

    Requirements translation is a human skill because it requires understanding human motivations, organizational politics, and the difference between stated preferences and revealed preferences. AI takes the ticket at face value. The engineer investigates what’s actually being asked.

    Can we teach AI to do this? Yes. But the subtleties here will likely remain in the human domain for years.


    Understanding Legacy Code

    AI sees messy code and wants to refactor it. A human engineer knows that messy code has survived for a reason.

    That function with 47 parameters and a comment that says “DO NOT TOUCH - see incident #4521”? AI wants to clean it up. The senior engineer knows that function handles an edge case that only appears under specific conditions. Maybe a particular customer in Japan submitting an order at exactly midnight UTC.
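
    Here’s a stylized sketch of what that kind of guard often looks like. The incident number and the upstream quirk are invented to match the example above, but the shape is real: a branch that looks removable and isn’t.

    ```python
    from datetime import datetime, timedelta, timezone

    def order_business_date(submitted_at: datetime, customer_region: str) -> str:
        """Assign an order to a business day.

        Fictional backstory, matching the example above: a JP upstream batch
        system truncates end-of-day timestamps to exactly 00:00:00 UTC, so
        those orders belong to the *previous* business day. A "clean" refactor
        that deletes this branch quietly reintroduces incident #4521.
        """
        utc = submitted_at.astimezone(timezone.utc)
        if customer_region == "JP" and (utc.hour, utc.minute, utc.second) == (0, 0, 0):
            # DO NOT TOUCH - see incident #4521
            return (utc - timedelta(days=1)).date().isoformat()
        return utc.date().isoformat()
    ```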

    Legacy code is an archaeological record. Every production incident, every business pivot, every 3am hotfix that kept the company alive. The mess isn’t incompetence. It’s institutional memory encoded in syntax.

    AI-assisted refactors can reintroduce bugs that were fixed a decade ago. The original fix was ugly, and AI optimizes for elegance. But the original developers weren’t writing ugly code. They were writing defensive code against threats the AI has never encountered.

    Understanding legacy systems requires humility. It requires asking “why is this here?” before asking “how do I fix this?” AI only knows how to ask the second question. We still need humans for the first.


    Architectural Reasoning

    AI suggests textbook solutions. It doesn’t know about hidden constraints, regulatory requirements, political landscapes, or someone’s inexplicable preference for Redis over Memcached.

    When you ask AI to design a system, it gives you the Stack Overflow consensus answer. It doesn’t know that your CTO has a vendetta against MongoDB from a previous job. It doesn’t know that your compliance team will reject anything storing data outside your home region. It doesn’t know that the “obviously correct” microservices architecture will get blocked because your ops team has three people and they’re already drowning.

    Architecture isn’t about knowing the best solution. It’s about knowing the best viable solution. The one that accounts for organizational capacity, team skills, budget constraints, and the political capital required to actually ship it.

    I’ve watched AI suggest Kubernetes deployments to teams that can barely manage a single EC2 instance. Technically correct. Organizationally catastrophic.

    The architect’s job isn’t to find the optimal solution in a vacuum. It’s to find the optimal solution in your vacuum, with all its dust, debris, and hidden obstacles.


    What To Do About It

    If you’re an engineer watching AI transform your field, stop competing with AI at code generation. You’ll lose that race, and it’s not a race worth winning.

    Instead, invest in the skills that make code generation valuable. Build trust. Understand accountability. Think strategically. Translate business needs. Respect legacy systems. Reason about architecture in context.

    The engineers who thrive won’t be the ones who can prompt the best code. They’ll be the ones who can take that code and turn it into reliable, maintainable systems that actually serve human needs.

    The code is being automated. The engineering never will be.


    Michael Rishi Forrester is a Principal Training Architect at KodeKloud and founder of The Performant Professionals. November 2026 marks 30 years in infrastructure, operations, and DevOps. His focus: preparing tomorrow’s innovators while elevating the average.