Agreeing on an idea doesn't mean you both understood the same thing. Dave Sharrock and Peter Maddison dig into why shared context breaks down in practice, and how AI makes that problem harder to ignore.
This week's takeaways:
- Intent is always imperfect. Define how you'll validate it, not just what it is.
- Ambiguity in context isn't a bug. It's necessary. Validation is how you confirm you're aligned.
- Drive down the cost of validation, not just the cost of building.
If this landed, share it with someone navigating the same tension. And reach out at feedback@definitelymaybeagile.com - we read everything.
New episodes released every Thursday to challenge your thinking and inspire action.
Context, Intent, And The Problem
Peter Maddison 0:04 Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and Dave Sharrock discuss the complexities of adopting new ways of working at scale.
Dave Sharrock 0:12 Hello Peter. How are you doing? It's been a while, hasn't it? Good opportunity to pick up where we left off - talking about context as a way of describing intent, and where that tends to fall down.
Peter Maddison 0:32 Right. So we've had a couple of conversations already. One about context engineering, what roles fall into it, whether it's actually a discipline. Then we talked about what good context looks like and where it goes wrong - how ideas and intent get diluted as they move through an organization. And AI, as with everything, can amplify that. So the real question becomes: how do we create good context? Context that agents can actually use to build what we're intending. And then there's the follow-on question - if we do capture an idea well, how do we know?
Why Agreement Still Fails
Dave Sharrock 1:16 This is a genuinely interesting area, and it's one we've been circling for years in different ways. What's changed is that the pace of work is stress-testing our assumptions a lot harder than before. Here's the core problem: if I describe an idea and we discuss it back and forth, we might reach a point where we think we understand each other. But the experiences, education, and principles we each carry - the lens we use to interpret the world - get applied to that shared description. Even if we completely agree that a document captures where we want to go, that doesn't mean we'll independently apply those ideas in the same way.
Peter Maddison 2:38 So why do we assume our AI tools can?
Dave Sharrock 2:43 I don't think they can - and that's the point. What we need is to go try something, come back, and say "here's how I interpreted this." That's the validation conversation. And validation isn't the same as debugging or testing for breakage. It's much more about product-market fit, business fit. Are we actually moving in the direction we think we are?
Peter Maddison 3:16 For everyone listening, Dave is not suggesting you skip testing. Just wanted to park that one. But this is actually a great example of the exact thing we're describing - because my mental model is shaped by years of getting systems into production. Late nights, early mornings, all of it. So I'm always thinking about the operational side. The question is: how do you take context and intent and validate that you're getting what you expected? We know the English language is imperfect. And now we're working at an even further level of abstraction - using AI to build in languages computers understand - and that top layer is genuinely ambiguous. Which is powerful, by the way. That ambiguity is what makes it resilient to new information that comes up along the way.
Dave Sharrock 4:34 Exactly. It's powerful because of that. But it also means we can't use the description alone to check whether you and I are on the same page. We have to put it in a real context - a testing environment, a business situation - and see if it's working the way we expected.
Peter Maddison 4:56 Right. And there are two slightly different things here. First, we do need some level of agreement that the idea or intent is documented well enough. But that agreement isn't a guarantee we'll get the same output. We're still going to have to look at what actually comes out.
AI Speeds Building, Not Learning
Dave Sharrock 5:20 This season in Formula One is a good analogy, actually. They made significant changes to the cars - splitting power more evenly between the electric motor and the combustion engine. On paper, risks were identified. But it was only when they raced under real conditions that those risks were either confirmed or dismissed. You can agree on a plan, discuss it thoroughly, and still not know what will happen until you put it in the real world and see the consequences.
Peter Maddison 6:09 Right - no plan survives contact with the enemy. But one of the genuinely good things about AI is it can compress the cycle time if you use it thoughtfully. What we actually see happening though is that because so much more can be built at once, people stuff more into it. Bigger pull requests, larger chunks of change, which then stresses other parts of the system. But if we imagine a world where teams are releasing smaller pieces of capability much faster - that's where things get interesting. That's when you can experiment at speed, run variants, and start gathering real feedback quickly.
Dave Sharrock 7:11 And that's the mindset shift. For AI to actually help, you need a continuous validation culture already in place. If that's not the norm, you end up doing exactly what you described - feature stuffing. More output doesn't mean more learning. It just means more variables. You lose the ability to know which specific change is causing unexpected behavior. The goal should be to drive down the cost of validation, not just the cost of building.
Peter Maddison 8:13 That's where the CI pipeline becomes the bottleneck. If you could do three or four changes a day before, a 25-minute pipeline wasn't a big deal. Now, if you're doing 50, that same pipeline becomes a real constraint. You can't get things into an environment fast enough to even start gathering feedback.
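To make that arithmetic concrete, here's a minimal back-of-the-envelope sketch. The 8-hour workday and fully serialized runs are simplifying assumptions - real pipelines parallelize, which only moves the threshold rather than removing it.

```python
# Back-of-the-envelope: when does a serialized CI pipeline become the constraint?
PIPELINE_MINUTES = 25          # one pipeline run, as in the example above
WORKDAY_MINUTES = 8 * 60       # assumed 8-hour working day

for changes_per_day in (4, 50):
    queue_minutes = changes_per_day * PIPELINE_MINUTES
    print(f"{changes_per_day:>2} changes/day -> {queue_minutes:>4} min of pipeline time "
          f"({queue_minutes / WORKDAY_MINUTES:.1f} workdays if runs are serialized)")

# Output:
#  4 changes/day ->  100 min of pipeline time (0.2 workdays if runs are serialized)
# 50 changes/day -> 1250 min of pipeline time (2.6 workdays if runs are serialized)
```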
Dave Sharrock 8:48 And the bottleneck is actually shifting from development to validation. Because validation often has a human being at the other end - someone who sleeps, who doesn't jump onto your new feature the moment it ships, who needs time to generate statistically useful feedback. There's a whole layer of post-deployment activity that the CI process doesn't touch.
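As a rough illustration of why that human layer takes time, here's a sketch using the standard two-proportion sample-size formula (Python standard library only). The baseline rate and uplift are hypothetical numbers, borrowing the 5% figure Peter uses below.

```python
# How many users must see a variant before feedback is statistically useful?
from statistics import NormalDist

def samples_per_variant(p_base: float, p_new: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users per variant to detect a shift from p_base to p_new."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return int((z_a + z_b) ** 2 * variance / (p_base - p_new) ** 2) + 1

# A 4% baseline sign-up rate lifted 5% relative is 4.0% -> 4.2%:
print(samples_per_variant(0.040, 0.042))  # ~154,000 users per variant
```

No matter how fast you ship, that many impressions take calendar time to accumulate.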
Peter Maddison 9:16 Exactly. And that's the piece that often gets missed when people talk about shipping faster. Speed into production is only part of the picture.
Define Success And Disproof Signals
Peter Maddison 9:38 And then there's the market side. If I'm putting 40 features out, I can't bombard the same focus group with all 40 and expect anything useful back. You have to think about alternate ways of gathering feedback. Can you look at that product space differently? Can you run smaller, more targeted experiments?
Dave Sharrock 10:08 This actually brings us back to context. We talked earlier about giving context structure. And sitting alongside that structure should be a definition of how you'd validate that the context is relevant - and also, what would you see if the idea was wrong? That second part is rare. Ideas come in, intent gets documented, but almost nobody thinks about the signals that would tell them their assumptions were incorrect.
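One hypothetical way to make that concrete is to attach both signals to the documented intent itself. The structure and field names below are illustrative, not a standard:

```python
# Hypothetical shape for documented intent: every idea carries a confirming
# signal AND a disproof signal, plus a deadline so the check can't drift.
from dataclasses import dataclass

@dataclass
class Intent:
    idea: str
    success_signal: str     # what we expect to see if we're right
    disproof_signal: str    # what we'd see if the assumption is wrong
    review_after_days: int  # when to look, so validation actually happens

checkout_redesign = Intent(
    idea="One-page checkout reduces drop-off",
    success_signal="Cart-to-purchase conversion up >= 5% vs. control",
    disproof_signal="Conversion flat or down, or support tickets rising",
    review_after_days=14,
)
```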
Peter Maddison 11:11 Ideally - and there's a lot of variance here - you'd say: we have these 10 ideas, we put them into market, we're expecting a 5% uptick in sign-ups to this channel, we monitor for that, and we see which ones move the needle over a given time period. With AI you could even automate shutting down the ones that aren't working.
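A minimal sketch of what that automated shutdown could look like, assuming a feature-flag system and a metrics source - both stand-ins here, not real APIs:

```python
# Review live experiments against the expected uplift and disable the misses.
EXPECTED_UPLIFT = 0.05  # the 5% sign-up uptick from Peter's example

def review_experiments(experiments, get_uplift, flags):
    """Disable any experiment whose observed uplift misses the target."""
    for exp in experiments:
        uplift = get_uplift(exp)         # e.g. relative change in sign-ups
        if uplift < EXPECTED_UPLIFT:
            flags.disable(exp)           # shut down the losing variant
            print(f"{exp}: {uplift:+.1%} vs {EXPECTED_UPLIFT:.0%} target -> disabled")
        else:
            print(f"{exp}: {uplift:+.1%} -> kept running")

# Toy usage with faked dependencies:
class FakeFlags:
    def disable(self, exp): pass

review_experiments(
    ["idea-1", "idea-2"],
    get_uplift=lambda exp: {"idea-1": 0.08, "idea-2": 0.01}[exp],
    flags=FakeFlags(),
)
```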
Dave Sharrock 11:47 And even then, different user groups behave differently. But that's part of what AI and data make possible now - exploring that variation in a lot more detail than we could before.
Peter Maddison 12:04 The capability is there. Not every organization knows how to dig into it yet.
Dave Sharrock 12:18 Three takeaways?
Peter Maddison 12:22 Three. First: intent is always imperfect. Which means alongside any definition of intent, you need to define how you'll validate whether you got what you were expecting. That's not new, but it's become more critical because we can experiment faster now. Especially if we want to automate the validation process - we need concrete, measurable signals. If it goes up, do this. If it goes down, do this. It's not enough to define the intent. It never really was, but it really isn't now.
Dave Sharrock 13:21 And I'd add something as a kind of precondition to that. The goal shouldn't be to write context so precisely that it's unambiguous. Ambiguity is part of it. It's necessary. Which means validation isn't a failure of the description - it's the mechanism by which you confirm you both understood the same thing.
Peter Maddison 13:56 Exactly. So with that, don't forget to subscribe and bring a friend along for the conversation. You can reach us at feedback@definitelymaybeagile.com. Until next time. You've been listening to Definitely Maybe Agile, the podcast where your hosts Peter Maddison and Dave Sharrock focus on the art and science of digital, agile, and DevOps at scale.