Most organizations think they're doing AI. They've bought the licenses, rolled out the tools, and told the team to start using Copilot. But adding AI on top of a 40-year-old process isn't transformation. It's decoration.
Andre Kaminski, Director of Advanced Technology Solutions at WorkSafeBC and author of "The AI-Native Software Development Lifecycle," joins Peter and Dave to talk about what it actually means to rebuild your delivery process around AI, not just bolt it on.
They get into why optimizing code generation alone is the wrong focus, what the six phases of an AI-native SDLC look like in practice, and why the biggest challenge isn't the technology at all. It's the identity shift that comes with it.
If your organization is asking "which AI tool should we use?" this episode will help you realize that's probably the wrong question.
In this episode:
- Why AI-augmented and AI-native are very different things
- The compounding learning effect and why early adopters are pulling further ahead every month
- What prompt architecture actually means and why it matters more than code
- How to think about governance when prompts become your new source of truth
Want to keep the conversation going? Drop us a line at feedback@definitelymaybeagile.com or find us at definitelymaybeagile.com. If this episode got you thinking, share it with someone who needs to hear it.
New episodes released every Thursday to challenge your thinking and inspire action.
Welcome And Guest Introduction
Peter (0:04): Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and David Sharrock discuss the complexities of adopting new ways of working at scale. Hello and welcome to another episode of Definitely Maybe Agile. Dave and I are joined today by Andre. Andre, would you like to go ahead and introduce yourself?
Andre Kaminski (0:27): Hi, thank you very much for inviting me. My name is Andre Kaminski and I'm Director of Advanced Technology Solutions at WorkSafeBC. I have over 35 years of experience in IT. I started as a developer and ended up as CTO of a Nasdaq-listed organization. I've worked across three continents, lived and worked in many countries, not just as a tourist but actually putting down roots. So my experience really comes from everything I've learned across those organizations and cultures. And Canada, of course, is no different.
I'm also the author of two books. The latest one, which is what we're talking about today, is about the AI-native software delivery lifecycle. My previous book is called Blending with Dragons. That one is really my life philosophy about resolving conflict in a way that preserves your relationship with the other person. It matters especially in organizations, because every single day there's some kind of conflict, but you still have to face those same people tomorrow. You can't burn bridges. The book is really about how you achieve your outcomes without damaging those relationships. And why is that relevant right now? Because when you're implementing an AI-native software development lifecycle, change management becomes the biggest issue. That's where leaders need to focus.
Dave (2:34): What strikes me as you're introducing that, Andre, is that we have a lot of conversations on this podcast about the technology side of things, about agents and AI tools and so on. And yet, as you're highlighting, the real challenge is the change itself. The shift in roles, in identity, for software developers and for organizations more broadly. Something Peter and I have come back to many times.
Why AI Change Hits People
Andre Kaminski (3:07): Just to add to that, what's different this time? I started my career when you could barely think about computers in the way we do today. Mainframes, punch cards, assembly language. I've lived through the PC revolution, the internet, agile, a lot of shifts. But none of them moved this fast, and none of them hit people the way this one does. Every previous revolution was about tools or processes. This one is about humans. Who we are, where we fit, how we make sense of ourselves in this new world. The technology exists. You can use it today. The bigger question is: are we ready for it?
Peter (4:03): Especially in a knowledge-work culture like North America's. This hits hard. How AI changes the nature of knowledge work, how we interact, how we get things done. There's a lot to explore there.
Andre Kaminski (4:25): It's almost turning everything upside down. This isn't incremental improvement. It's a complete rethink of how we work. And that's genuinely hard for a lot of people.
Peter (4:55): It's very hard, especially when you consider the cognitive biases at play. One of my favorites is what's called the IKEA effect. We attach more value to things we built ourselves. All that time and energy we put into crafting systems and environments, it's really difficult to step back and say, "Actually, I don't need that anymore."
Identity Shock And Emotional Resistance
Andre Kaminski (5:20): To let it go, yes. And I empathize with that deeply. When I was early in my career, my mentor told me: you'll need to reinvent yourself at least once in your life. I've done it three or four times. The new generation entering the workforce now? They'll be reinventing themselves almost every year, I think.
And it's emotionally threatening, especially for people who have invested so much of themselves, their learning, their sweat, their identity, into becoming who they are today. Suddenly they're asking: what is my role if AI can do the same thing?
In the book, I draw on the Kübler-Ross emotional journey, the stages of grief. Denial, anger, bargaining, depression, acceptance, and then finally optimization. We all have to move through those stages. Right now you can see a lot of resistance out there. Some of it has valid arguments, but much of it is people searching for answers. We all have to work through it before we can really find ourselves in this new world.
Dave (6:54): What's interesting about what you're saying is the focus on identity. Previous technology changes, agile included, were about helping people do what they do better. But this is different. This attacks the identity of who we are. The skills, the experience, the creativity we spent years building. That's what feels like it's being handed over. And I don't think we have the language yet for what that identity shift really means.
Peter (8:13): A lot of our work in agile required that kind of shift too. We just didn't quite frame it that way. But I think the difference now is you don't really have a choice. It's everywhere. You can't find a safe space and hide.
Andre Kaminski (8:31): Absolutely. And the speed is incredible. What we know today will most likely be obsolete within a month. I've seen organizations still debating which tool to start with, 18 months later.
Dave (8:51): Eighteen months in, yeah.
Andre Kaminski (8:52): Right. Think about what AI could do 18 months ago compared to now. In mid-2024, according to one benchmark, AI could reliably perform the same work as a human for tasks taking up to 10 minutes. By December 2025, that had grown to 14 and a half hours. And it's accelerating. METR estimated last year that AI task capability was doubling every seven months. Since January, that cycle has shrunk to about four months.
The numbers are staggering. GitHub had a record year for commits in 2025, over one billion. By April of this year, they were estimating 14 billion by year end. The reason? Developers and organizations adopting AI to build and improve products. That's the reality we're in.
Speed Curves And Tool Adoption Traps
Dave (10:42): I want to bring us back to the book. The title is The AI-Native Software Development Lifecycle. You're mixing two things there: AI-native, and the SDLC, which people in this industry have been talking about for decades. Can you unpack what AI-native actually means, and how it changes the SDLC?
Andre Kaminski (11:22): You can look at it from two angles. A lot of organizations today are buying GitHub Copilot licenses for their developers, giving them AI support for coding, and calling themselves AI-native. They're not. They're adding AI on top of existing processes. And those processes, in many organizations, were designed 30 or 40 years ago to work around human limitations.
AI-native flips that. Instead of building processes around what humans can't do, you build them around what AI can do. Then you ask: where do humans actually add the most value?
The bigger misunderstanding is thinking that improving the coding step is the transformation. It isn't. You have to think about the entire delivery lifecycle, from the moment an idea appears to the moment it's running in production. If you only optimize code generation, you create a bottleneck everywhere else. Architecture reviews, security reviews, CAB meetings, deployment processes. None of that changes if you only touch the coding step.
You'll be faster at writing code and slower overall, because testers can't keep up, and downstream processes are still designed for the old pace.
If developers spend 7 to 15% of their time coding, and a tool makes them 30% faster at that, you've improved the overall system by maybe 6%. That's not transformation. Buying GitHub Copilot doesn't make you AI-native, just like buying Jira doesn't make you agile.
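The arithmetic behind that point is a straightforward Amdahl's-law calculation: however fast you make one step, the system-level gain is capped by that step's share of total time. A quick sketch using the figures Andre cites (treating "30% faster" as a 1.3x speedup on the coding step alone) shows the overall gain stays in the low single digits:

```python
# Back-of-envelope check: speeding up only the coding step, per Amdahl's law.
# Assumption: "30% faster" means the coding step takes 1/1.3 of its old time.

def overall_gain(coding_share: float, speedup: float) -> float:
    """Fraction of total delivery time saved when only the coding step
    (coding_share of the total) is accelerated by `speedup`."""
    new_total = (1 - coding_share) + coding_share / speedup
    return 1 - new_total

for share in (0.07, 0.15):
    print(f"coding = {share:.0%} of time -> system gain = {overall_gain(share, 1.3):.1%}")
```

Even at the generous end (15% of time on coding), the system improves by under 4%, which is the point: optimizing one step barely moves the whole.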
AI Native Versus AI Augmented
Dave (16:21): So coding is almost off the table as the focus. What I'm reading in the book is that the whole delivery lifecycle, from understanding the problem to design to testing to deployment, becomes a hierarchy of prompts. The work of the delivery team shifts toward prompt management. Getting the definition of what needs to be built right, rather than writing the code itself.
Andre Kaminski (17:44): Exactly. And that's the biggest shift for developers. For the last 60 or 70 years, the way we communicated with computers was to tell them how to do something, step by step, because computers were, honestly, pretty dumb. That's not the case anymore. You tell AI what you want, not how to get there.
In an AI-native system, you can be more or less prescriptive depending on context. In regulated industries like mine, certain areas require very specific, controlled instructions. Other areas have more room to breathe. The prompt architecture is what gives you that granularity.
The Six Phases Of AI Native
Peter (20:17): So how does that work in practice? How do you build structure and repeatability into this? What's the difference between vibe coding and AI-native?
Andre Kaminski (20:17): Vibe coding is fine for prototypes or hobby projects. But in an enterprise, you need repeatability. With vibe coding, your prompts are ephemeral, the output is ephemeral, and if you ask the system to do the same thing again, you'll get something different. Your auditors will ask how you built this, and you won't have a clear answer.
AI-native provides structure. My outcome is always the same, even if the output varies slightly. Let me walk through the six phases.
Phase one is discovery and context engineering. The output of this phase is a context prompt, which becomes the single source of truth for the entire system. Every agent, every human, works from this prompt. It defines the boundaries of the system. I recommend a two-day workshop with all key stakeholders, business, architecture, security, everyone. The context prompt captures all knowledge at that point in time, and everything built afterward has to satisfy it.
Phase two is prompt architecture and design. This is the full set of prompts, starting with the context prompt, then moving into architecture, using what I call progressive disclosure. You give the AI enough information to complete each task, without overloading it. Prompt budgets matter here too, because token costs can spiral quickly, just like cloud costs did for a lot of organizations.
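The book's mechanics for progressive disclosure and prompt budgets aren't spelled out in the conversation, but the idea can be sketched simply: include optional context sections only while a token ceiling allows. Everything here (the function names, the crude word-count token estimate) is illustrative, not from the book:

```python
# Illustrative sketch of a "prompt budget" guard. Progressive disclosure:
# each task gets only the context it needs, and a token ceiling keeps
# costs from spiralling. The token count is a crude word-based estimate,
# not a real tokenizer.

def estimate_tokens(text: str) -> int:
    return int(len(text.split()) * 1.3)  # rough heuristic

def assemble_prompt(context: str, task: str, extras: list[str], budget: int) -> str:
    """Always include the context prompt and the task; add optional
    sections only while the budget allows."""
    parts = [context, task]
    used = sum(estimate_tokens(p) for p in parts)
    for section in extras:
        cost = estimate_tokens(section)
        if used + cost > budget:
            break  # stop before overloading the model
        parts.append(section)
        used += cost
    return "\n\n".join(parts)
```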
Phase three is AI-orchestrated development. This is what most people are talking about when they talk about AI and coding. It's only phase three.
Phase four is intelligent quality engineering. And I use the word engineering deliberately, because testing happens from the very beginning. When the system starts building in phase three, it starts with failing tests based on the context prompt. Agents in phase three satisfy those tests. Then phase four validates there's been no drift from the original context prompt, from a business requirements or architecture perspective.
Phase five is automated deployment and operations. Many organizations are already doing some of this. But what AI-native adds is that the agent can decide how to deploy. Canary releases, dark launches, rolling out to 5% of users, monitoring, and either expanding or rolling back. It becomes standard.
Phase six is continuous learning and evolution. The system observes behavior in production and can suggest changes. I define four tiers of governance here. Tier one: low risk, the AI can make the change and notify the human. For example, auto-scaling when processor load is too high. Tier two: the AI proposes the change and notifies the human, who has 24 hours to respond. If there's no response, the AI proceeds. Tier three: a subject matter expert must review before anything is applied. Tier four: certain actions are prohibited. AI never touches them.
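One way to picture the four tiers is as a policy table the agent consults before acting. This is a minimal sketch, not anything from the book; the change categories and names are invented for illustration:

```python
from enum import IntEnum

class Tier(IntEnum):
    AUTO_NOTIFY = 1      # AI applies the change, notifies a human afterward
    PROPOSE_TIMEOUT = 2  # AI proposes; proceeds if no human response in 24h
    SME_REVIEW = 3       # a subject matter expert must approve first
    PROHIBITED = 4       # AI never touches this

# Illustrative policy table mapping change categories to tiers.
POLICY = {
    "autoscale_compute": Tier.AUTO_NOTIFY,
    "tune_retry_backoff": Tier.PROPOSE_TIMEOUT,
    "schema_migration": Tier.SME_REVIEW,
    "payment_logic": Tier.PROHIBITED,
}

def may_auto_apply(change: str, hours_waited: float = 0.0) -> bool:
    """Can the AI apply this change on its own right now?"""
    tier = POLICY[change]
    if tier is Tier.AUTO_NOTIFY:
        return True
    if tier is Tier.PROPOSE_TIMEOUT:
        return hours_waited >= 24
    return False  # SME_REVIEW needs a human; PROHIBITED always blocks
```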
The reason that matters is this: in AI-native, you separate your prompts from your code. The code of the agent becomes infrastructure. Today, when you deploy an application to production, it's static for months until the next release. With AI-native, the agent's code doesn't change often, but the natural language prompts can change every day. You can adjust the behavior of an agent based on what you're observing, multiple times a day if needed. The overhead of deploying a prompt change is far smaller than a full code deployment.
In my organization, we built what we call DocNexus, a class of agents that guides conversations in Microsoft Teams to produce specific documents. The first one we built was for statements of work. That process used to take several weeks, multiple meetings, back and forth between stakeholders and technical leads. Today, a single one-hour conversation with the agent produces the same output. The agent asks the right questions in business language, and fills the document automatically.
And here's what's powerful: I can repurpose that same agent for business cases, contract development, privacy impact assessments, whatever I need, just by changing the prompt set. No redeployment. And because it's natural language, business analysts and product owners can eventually own those prompts themselves.
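DocNexus internals aren't described in the conversation, but the pattern Andre outlines, one generic agent whose behavior is defined entirely by a swappable prompt set, might look something like this. All names here (`PromptSet`, `run_agent`, the prompt text) are assumptions for illustration:

```python
# Illustrative only: a generic document-guidance agent whose behavior
# comes entirely from a swappable set of natural-language prompts.

from dataclasses import dataclass

@dataclass
class PromptSet:
    context: str          # source of truth for this document type
    questions: list[str]  # questions the agent asks in business language

SOW_PROMPTS = PromptSet(
    context="You guide a conversation to produce a statement of work.",
    questions=["What outcome is the client buying?", "What is out of scope?"],
)

PIA_PROMPTS = PromptSet(
    context="You guide a conversation to produce a privacy impact assessment.",
    questions=["What personal data is collected?", "Who can access it?"],
)

def run_agent(prompts: PromptSet, answers: list[str]) -> str:
    """The agent code is 'infrastructure': it never changes when the
    document type does. Only the prompt set passed in changes."""
    filled = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(prompts.questions, answers))
    return f"{prompts.context}\n\n{filled}"

# Same agent, new document type: just swap the prompt set, no redeployment.
doc = run_agent(PIA_PROMPTS, ["Claim details", "Case managers"])
```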
Dave (31:50): Peter, jump in before I keep going.
Peter (31:52): Thank you for walking through all of that. One thing we're hearing from organizations we work with is that the SDLC now applies to more than just developers. If we're treating prompts as code and checking them into a repository, how do you handle the people who aren't developers trying to manage those prompts?
Governance, Prompt Skills And Vocabulary
Andre Kaminski (32:44): Governance is the foundation of AI-native. Changing a prompt is not trivial. You need to understand the model you're working with. A set of prompts tuned for one model may not work well with another. And the prompts are connected in a hierarchy. If you change one, you can affect the output that downstream prompts are expecting.
There's also decay. Over time, prompts drift. The context prompt may become less effective. Token usage is actually one of the easiest ways to monitor this. If the system starts misbehaving, rising token costs are often an early signal.
Prompt engineering is a skill. It's not something you pick up without real learning. Communicating with AI is different from communicating with humans. Here's a simple example. At WorkSafeBC, the word "claim" has a very specific meaning that's different from how any standard insurance company would define it. If I write a prompt that just says "claim," the AI will default to whatever definition it was trained on, which is probably not ours. We get confused results, and we wonder why.
You have to be explicit. You have to provide positive and negative examples so the AI learns what you actually mean. In our context prompts, we include a vocabulary section, defining the terms that matter to us. That vocabulary then carries forward into every project built from that foundation.
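A vocabulary section of the kind Andre describes could be structured like this. The definition of "claim" below is a simplified placeholder, not WorkSafeBC's actual definition, and the rendering format is an assumption; the point is the shape: definition plus positive and negative examples, carried into every downstream prompt:

```python
# Illustrative vocabulary section for a context prompt, with positive and
# negative examples so the model doesn't fall back on its trained-in
# definition of an overloaded term like "claim".

VOCABULARY = {
    "claim": {
        "definition": (
            "A worker's report of a workplace injury or occupational "
            "disease registered with the organization."  # placeholder, simplified
        ),
        "positive_examples": [
            "A warehouse worker files a claim after a back injury on shift.",
        ],
        "negative_examples": [
            "An auto-insurance claim for a damaged vehicle.",  # NOT our meaning
        ],
    },
}

def vocabulary_section(vocab: dict) -> str:
    """Render the vocabulary as a block prepended to every prompt built
    from this context, so the definitions carry forward."""
    lines = ["## Vocabulary"]
    for term, entry in vocab.items():
        lines.append(f"- **{term}**: {entry['definition']}")
        for ex in entry["positive_examples"]:
            lines.append(f"  - Means: {ex}")
        for ex in entry["negative_examples"]:
            lines.append(f"  - Does NOT mean: {ex}")
    return "\n".join(lines)
```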
Dave (36:28): That's a good lead into something I wanted to ask. When you bring business stakeholders into that context-engineering process, there's always a lot of implicit knowledge that never gets articulated. People know what "claim" means in your organization, but they've never had to write it down. How have you seen that shift happen? What does it take to get business stakeholders to make all of that implicit knowledge explicit?
Compounding Learning And Industry Disruption
Andre Kaminski (37:35): That's the role of the context engineer. You don't expose business stakeholders to the technical mechanics. You have a conversation with them, and you draw out what they mean by the terms they use every day.
But here's where it gets really interesting, and why AI-native organizations are going to pull ahead. It's the compounding learning effect.
Your first project develops some prompts, a vocabulary, a context. That work isn't lost. The next project reuses those same definitions. I don't need to redefine what "claim" means every time we build something new. The AI agents discover what already exists and reuse it, with modifications where needed.
So projects get faster and faster. You build consistency across the organization, not just in what gets built, but in how it gets built. In the past, different product owners would define the same term slightly differently, and those small differences would compound into large problems downstream. With AI-native, there's no ambiguity. One definition, used everywhere.
This matters enormously for competitive dynamics. I've referenced Geoffrey Moore's Crossing the Chasm for years. The 18-month window in the book title comes partly from him. In the past, laggards could eventually catch up with early adopters. Here, they can't. Because early adopters are compounding their learning, they're getting faster every quarter. Organizations that wait are falling further behind, not holding steady.
I'll give you a concrete example. It's tax season in Canada right now, and TurboTax ads are everywhere. This year, I collected all my tax documents in a folder and, before sending them to my accountant, I ran them through an AI tool. Not only did it analyze everything, it downloaded the CRA tax form, filled it in, and gave me my expected refund. My accountant came back with the same number.
So what happens next year? What happens to TurboTax? A lot of industries will be disrupted unless they rethink not just their software delivery, but their entire product and value proposition. I focused on the software development lifecycle in the book because it's specific enough to be useful. The product development lifecycle is a much bigger conversation.
Peter (43:03): We have talked a bit about that side. I think we're getting close to the end, and we always like to leave our audience with three takeaways. It's been a great conversation. Andre, would you like to go first?
Three Takeaways And Closing
Andre Kaminski (43:33): Don't focus on software development in isolation. Code generation is not your game changer. It's not the transformation. Whether you're using GitHub Copilot, Claude Code, or any other tool, buying the license doesn't solve the real problem. Start asking the right question: how do I get from idea to production faster, and where does AI help with that? Think about the entire system. That's where the value is.
Dave (44:10): For me, it's the shift in thinking that Andre describes in the book. Moving away from developers writing code, toward building the prompts that in turn build the code. That changes the conversation about skills, about responsibilities, about how you influence what gets built. There's a lot to work through in that shift, and it's genuinely worth sitting with.
Peter (44:49): For my part, it's the people element. Systems within systems. If you look at a typical SDLC and count 50 steps from requirement to production, maybe three of them are about coding. If all your AI focus is on those three steps, you're not going to get the value you're hoping for. The end-to-end delivery system is the thing. That's what needs to change.
Thank you, Andre. Really enjoyed this. And thank you as always, Dave. If you're listening, don't forget to subscribe and share the show. You can reach us at feedback@definitelymaybeagile.com.
Dave (45:30): Thank you, Andre. Until next time.
Andre Kaminski (45:33): Thank you both. Really enjoyed it.
Peter (45:36): You've been listening to Definitely Maybe Agile, the podcast where your hosts Peter Maddison and David Sharrock focus on the art and science of digital, agile, and DevOps at scale.