If you’ve been studying for the AWS AI Practitioner exam, you’ve heard the phrase “AI Development Life Cycle” in one context. If you’ve been following AWS DevOps content since mid-2025, you’ve heard the same phrase in a completely different one. Both are real. Both matter. They’re not the same thing, and AWS doesn’t always make the distinction obvious.

This is the article I wish existed when I started sorting through them.


Why the Confusion Exists

AWS uses “AI Development Life Cycle” to describe two distinct methodologies. One is a model-building framework baked into the Well-Architected ML Lens and the AWS AI Practitioner certification curriculum. The other is a software development methodology created by Raja SP at AWS that reimagines how engineering teams build production software with AI as the central collaborator.

Same phrase, different domains, different audiences, different timelines. The model-building framework was formalized for the AIF-C01 certification launch in June 2024. The software development methodology was published on the AWS DevOps blog in July 2025 and open-sourced in November 2025. If you encountered this content in 2024, you almost certainly encountered the first one. If you encountered it in late 2025 or 2026, you may have seen both.

Here’s what each actually is.


Framework 1: The ML Development Lifecycle

This is the one tested in certifications. The AWS Well-Architected Machine Learning Lens defines a six-phase model-building lifecycle that covers everything from framing a business problem to monitoring a deployed model in production. It’s the foundation for both the AWS Certified AI Practitioner (AIF-C01) and the AWS Certified Machine Learning Engineer Associate (MLA-C01).

The six phases are:

Business Goal Identification — Define the problem, success criteria, and what business value looks like. This sounds obvious and is consistently where ML projects fail. Most teams skip this and pay for it later.

ML Problem Framing — Translate the business problem into an analytical framework. Is this a classification problem, a regression problem, a clustering problem? The choice here drives every architectural decision downstream.

Data Processing — Collection, preprocessing, feature engineering, and feature store management. Google Cloud publicly states this phase consumes 70-80% of total project budget. AWS’s own documentation reflects the same reality without quoting that number directly.

Model Development — Training, hyperparameter tuning, evaluation, and CI/CD pipeline creation. This is the phase most people think of when they imagine “ML work.” It’s also usually less than a third of the actual work.

Model Deployment — Staging validation through to production deployment: real-time endpoints, batch transform, serverless inference. AWS maps each option to SageMaker Endpoints, Batch Transform, and Bedrock depending on the use case.

Model Monitoring — Drift detection, alarm management, retraining triggers, and feedback loops. This phase connects back to the beginning: monitoring outputs feed back into Data Processing and Model Development as the model degrades or the underlying data distribution shifts.

The phases are iterative and explicitly cyclical, not a waterfall. The Well-Architected ML Lens was substantially updated at re:Invent 2025 with 100+ cloud-agnostic best practices mapped across all six Well-Architected pillars.
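To make that cyclical structure concrete, here is a minimal sketch of the Model Monitoring feedback loop described above. All function names and the drift metric are illustrative assumptions, not AWS APIs; production systems would use SageMaker Model Monitor or proper statistical tests (PSI, KS) rather than a mean-shift check.

```python
import statistics

def detect_drift(baseline, live, threshold=0.2):
    """Flag drift when the live feature mean shifts more than
    `threshold` baseline standard deviations. A deliberately
    simple stand-in for real drift metrics (hypothetical helper)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - mu) / sigma
    return shift > threshold

def monitoring_step(baseline, live):
    """Model Monitoring feeding back into earlier phases:
    a drift alarm routes work back to Data Processing and
    Model Development; otherwise the current model keeps serving."""
    if detect_drift(baseline, live):
        return "retrain"  # loop back to Data Processing / Model Development
    return "serve"        # keep the deployed model in production

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
print(monitoring_step(baseline, [10.1, 9.9, 10.3]))   # stable distribution
print(monitoring_step(baseline, [14.0, 15.2, 13.8]))  # shifted distribution
```

The point of the sketch is the control flow, not the statistics: monitoring is not a terminal phase but a decision point that re-enters the cycle.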

For certification purposes: this lifecycle appears in AIF-C01 under Task Statement 1.3 (Domain 1, 20% of scored content) and the Foundation Model Lifecycle variation under Task Statement 2.1 (Domain 2, 24% of scored content). If you’re studying for either AWS AI certification, this six-phase structure is not optional knowledge.

AWS also published a separate Well-Architected Generative AI Lens (April 2025) with a parallel six-phase lifecycle specifically for foundation model applications: Scoping, Model Selection, Model Customization, Development and Integration, Deployment, and Continuous Improvement. The Generative AI Lens and the ML Lens address different technical patterns, but the underlying lifecycle logic is the same. A third lens, the Well-Architected Responsible AI Lens, provides cross-cutting governance across both.


Framework 2: AI-Driven Development Life Cycle (AI-DLC)

This is the newer one, and it’s a fundamentally different concept. The AI-Driven Development Life Cycle is not about how to build an ML model. It’s about how to build software, with AI handling most of the execution while humans maintain decision authority at every stage.

Created by Raja SP, Principal Solutions Architect and Head of Developer Transformation Programs at AWS, AI-DLC was built on a specific diagnosis from 100+ enterprise engagements. Raja SP and his team observed two failure patterns. The first was teams throwing complex requirements at AI agents and expecting autonomous production-ready output. This works for simple prototypes and fails for anything real. The second was teams using AI as a narrow code-completion assistant, keeping humans in tight control. This preserves quality but limits productivity gains to 10-15%.

AI-DLC proposes a third path: AI as the primary driver of execution across the entire development workflow, from requirements analysis through architecture, design, coding, testing, and deployment, while humans focus on business context, critical decisions, and outcome validation.

The methodology has three phases.

Inception answers the question of what to build and why. When a developer types “Using AI-DLC, …” into their AI coding agent, the AI inspects the workspace to determine whether it’s a new project or an existing codebase, works through the requirements via a structured nine-step process, generates verification questions in multiple-choice format, and produces a complete execution plan as a Mermaid diagram showing which stages it recommends, which it recommends skipping, and why. The defining team ritual is Mob Elaboration: the entire cross-functional team (product managers, developers, QA, and operations) assembles to validate the AI’s questions and proposals in real time. What traditionally takes weeks of back-and-forth between a PM and stakeholders gets done in 2-3 hours.

Construction covers how to build it. AI proposes architecture, domain models, code, and tests. Teams provide real-time technical guidance at Mob Construction checkpoints while AI generates implementation. The phase is intentionally structured to keep humans from becoming passive approvers: mandatory human checkpoints are built into the workflow, and all critical decisions are surfaced explicitly rather than silently absorbed into AI output.

Operations covers deployment and monitoring using the context accumulated through the prior phases. This phase is still marked as evolving in the GitHub repository and is the least documented of the three. Don’t treat it as production-ready methodology yet.

AI-DLC also replaces some Agile terminology deliberately. Sprints become Bolts, measured in hours or days rather than weeks. Epics become Units of Work. The methodology explicitly rejects fixed two-week cycles in favor of AI-recommended execution paths that adapt to project complexity. The underlying logic: if AI can collapse a requirements analysis from three weeks to three hours, forcing the output into a two-week sprint structure is just cargo-culting Agile rituals.

The open-source implementation lives at github.com/awslabs/aidlc-workflows under the MIT-0 license. It currently supports Amazon Q Developer, Kiro, Cursor, Cline, and four other AI coding platforms. The implementation consists of rules files and steering files loaded as context into the AI agent. v0.1.3 was released on February 11, 2026.
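As a rough illustration of what “rules loaded as context” means in practice, a steering file might read something like the following. This is a hypothetical sketch based on the Inception workflow described above; the actual files in awslabs/aidlc-workflows are structured differently.

```markdown
<!-- Hypothetical sketch only; not the repo's actual rules files. -->
# AI-DLC: Inception steering rules (illustrative)

When the user invokes "Using AI-DLC, ...":

1. Inspect the workspace: is this a new project or an existing codebase?
2. Analyze the stated requirements through a structured step-by-step process.
3. Ask clarifying questions in multiple-choice format; do not proceed
   until a human has answered them.
4. Output an execution plan as a Mermaid diagram, marking which stages
   you recommend, which you recommend skipping, and why.
5. Pause for Mob Elaboration sign-off before entering Construction.
```

The mechanism is worth noting: the methodology is enforced not by tooling but by instructions the agent reads before it starts working, which is why the switching cost between supported platforms is low.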


What the Adoption Numbers Actually Show

AI-DLC is not theoretical. The adoption timeline from 2025 forward is unusually fast for an AWS methodology.

Wipro compressed three months of development work into 20 hours using AI-DLC with Amazon Q Developer. Wipro CTO Sandhya Arun demonstrated this live at AWS DevSphere Bengaluru in August 2025 alongside Swami Sivasubramanian’s keynote. Dhan, a fintech company, launched a new application in 48 hours.

The Unicorn Gym program, where AWS engineers work hands-on with customer teams implementing AI-DLC, has produced results that would look like typos if multiple sources didn’t confirm them. The January 2026 joint Unicorn Gym in Tokyo brought together 11 companies and 87 participants for two days. One team compressed an 8-week project into 2 days. Another compressed a 6-month project into 2 days. AWS Japan published a blog post with specifics on February 5, 2026. The participating companies included Hitachi, Mitsubishi Electric, Panasonic, JR Central, and Daiichi Sankyo, which are not organizations that publish case studies loosely.

A February 2026 post on the Ministry of Testing forum shows a QA professional describing company-wide AI-DLC rollout in progress, asking about integration with testing workflows. That’s a meaningful signal: the methodology has moved past early-adopter enterprise pilots into the kind of broad deployment where QA teams are trying to figure out how it affects their jobs.

Does this mean AI-DLC delivers 5x-20x productivity gains universally? No. Those numbers come from controlled environments, often greenfield projects, with AWS engineers in the room. Real-world results in brownfield enterprise codebases, under normal team conditions, without a Unicorn Gym, will be lower. The methodology is also less than a year old outside of controlled pilots. The Operations phase is incomplete. The tooling is at v0.1.3. Anyone applying it to a regulated production environment right now is taking on early-adopter risk.

What the numbers do show is that the underlying premise holds: collapsing development timelines by having AI drive execution while humans validate, rather than the other way around, produces real results at real companies under real conditions. That’s a different category than a re:Invent demo.


How They Fit Together

The ML Development Lifecycle and AI-DLC are not competing frameworks. They address different parts of the problem.

The ML Development Lifecycle answers: how do I build, deploy, and operate a machine learning model? It’s the technical foundation for anyone working with SageMaker, Bedrock, or any ML pipeline on AWS. It’s what the Well-Architected Framework uses as its organizing principle for AI workloads. It’s what the certifications test.

AI-DLC answers: how do I build software in a world where AI can handle most of the implementation? It’s a development methodology, not an ML architecture framework. A team using AI-DLC could be building a fintech application, an internal tool, or an AI-powered product. The fact that they’re using AI coding agents is the constant. What they’re building is irrelevant to the framework.

In practice, a team building an ML-powered product might use both. They’d use the ML Development Lifecycle to think about the model-building pipeline, data processing, evaluation, and monitoring. They’d use AI-DLC to structure how the team collaborates with AI agents to build the surrounding application, integrations, and infrastructure.

The AWS ecosystem now has enough lifecycle thinking to create confusion if you’re not careful about which layer you’re working in. The Well-Architected ML Lens governs architectural decisions. The Generative AI Lens governs foundation model application decisions. The Responsible AI Lens governs governance and ethics. The CAF-AI framework governs organizational transformation. And AI-DLC governs how your development team actually works day to day. These are not duplicates. They’re different lenses on the same problem at different altitudes.


What to Do With This

If you’re preparing for AIF-C01 or MLA-C01, master the ML Development Lifecycle. Understand all six phases, know the AWS services that map to each one, and understand the feedback loops. This will appear on the exam.

If you’re a DevOps or platform engineer wondering whether AI-DLC applies to your team: the open-source repository at github.com/awslabs/aidlc-workflows is worth reviewing. The method definition paper at prod.d13rzhkk8cj2z0.amplifyapp.com is the canonical technical reference. If your team is already using Amazon Q Developer or Cursor, the implementation cost is low. The rules files are just context loaded into the agent. You’re not adopting a new tool. You’re adopting a structured approach to prompting.

The honest caveat: AI-DLC works best on greenfield or well-scoped brownfield work, with teams that understand the underlying systems well enough to validate AI output at the mandatory checkpoints. Using it to accelerate work on a system nobody fully understands will accelerate the production of output nobody can evaluate. The human-in-the-loop pieces are not optional, and they require people who actually know what they’re looking at.

Raja SP’s framing at DevSparks Hyderabad gets it right: “In the AI world, your sprints are supposed to be days or even hours. Nothing should be sequential anymore.” The methodology is built around that premise. If your organization’s development process is built around the opposite premise, the tooling won’t save you. The cultural change has to come first.


Michael Rishi Forrester is Principal Training Architect at KodeKloud and founder of The Performant Professionals. With 25+ years in operations and DevOps across Red Hat, ThoughtWorks, AWS, and beyond, he focuses on preparing tomorrow’s innovators while elevating the average.

Bluesky | Mastodon | Hachyderm | LinkedIn | YouTube | X