AI Won't Fix a Structural Problem with AJ Bubb
Definitely, Maybe Agile · April 09, 2026
215
00:41:27 · 28.49 MB

A lot of organizations are betting that AI will make their teams faster. Some of them are right. Most are solving the wrong problem.

AJ Bubb, founder of MxP Studio and host of Facing Disruption, joins Peter and Dave to talk about what actually happens when AI lands in a development team without fixing the system around it. If engineers can't get approvals, can't get access, and spend half their day in meetings, AI just means they produce more output the organization still can't handle. That's not a tooling problem. It's a structural one.

They also get into velocity without direction, what ownership really looks like when a ticket gets blocked, and why synthetic user testing might be the most polite way to avoid talking to actual customers.

This Week's Takeaways

  • Own the problem from the customer all the way down. When something is blocked, it's still yours until it moves.
  • When an outcome surprises you in either direction, ask whether your model was wrong. Most teams take the win and move on. The ones that improve don't.
  • Before reaching for a technical solution, ask why five times. The problem someone walks in with is usually the invitation to a conversation, not the actual problem.

If this episode got you thinking, we'd love to hear from you. Drop us a note at feedback@definitelymaybeagile.com or leave a review on your podcast app. And if you know someone navigating AI adoption right now, send this one their way.

New episodes released every Thursday to challenge your thinking and inspire action.

Welcome And Meet AJ Bubb

Peter Maddison 0:04 Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and Dave Sharrock discuss the complexities of adopting new ways of working at scale. Hello everyone, great to be back. Today we're joined by AJ Bubb, and of course my good friend Dave is here too. AJ, why don't you introduce yourself to our audience?

AJ Bubb 0:33 Yeah, absolutely. I'm really excited to be here. These topics are things I genuinely care about. My name is AJ Bubb, and I'm the founder of MxP Studio. We're a human plus AI startup studio, which means we focus on how to use AI as a tool to augment human capabilities, not replace them. We work with organizations, founding teams, startups, and enterprises to help them embrace what this next wave of technology actually makes possible - getting to market faster, finding product market fit, and ultimately delighting the customer. I say that with a smile because in a previous life I was at Amazon, and customer centricity is one of the things I genuinely took from that experience. I'm a 20-year technology veteran. I've been everything from a hands-on coder to technical co-founder to consultant and practice leader, all in emerging technologies. Now I'm launching my own products and helping others do the same.

Human Plus AI Done Right

Dave Sharrock 1:46 Can I ask you something? We hear so much about AI and how it impacts human work. You very deliberately chose to frame MxP Studio around the interaction between human and AI, and you're clear that it's not about replacing people. So what are you looking for when you're trying to find the right organization to work with? What does that conversation look like?

AJ Bubb 2:19 The first thing I'm trying to understand is what problems they're solving for their customers. What are they actually trying to accomplish? And the second is what's holding them back. A lot of the time you find ambiguity on both sides of that question right away, and that's not an AI problem at all. What I keep seeing with emerging technologies, over and over, is that they get brought in as the silver bullet to a problem the organization doesn't fully understand. Or more accurately, the problem they think they have usually isn't the real problem.

Peter Maddison 3:10 Sprinkling AI over everything and hoping it all magically gets better.

AJ Bubb 3:15 Exactly. Before it was data lakes. When those didn't work, it was data oceans, then IoT platforms, AR, VR, chatbots. Fundamentally it always comes back to the same question: is the organization doing the foundational work? Technology, culture, training. Are they actually ready to adopt what they're bringing in? So the questions I start with are simple. What are you trying to do? Where do you think you're stuck? And then let's figure out whether technology plays a role in that, and if so, what role.

Peter Maddison 3:57 How do you find that lands with customers? When you come in and start suggesting they need to think more holistically about how their customers get value, rather than just layering something new on top of an existing service, how do they respond to that?

AJ Bubb 4:38 The first thing is, I never tell them they're doing things wrong. They obviously did something right or they wouldn't still be in business. So I always approach it as "yes, and." I took improv for years, so "yes, and" is instinctive for me. Yes, and let's dig in deeper. Is this the right workflow? Do you understand your customers' pain points? And then we build from there. That's a lesson I learned the hard way, by the way. As an engineer, I used to walk into meetings with my point of view fully formed and say something like, that approach is wrong. I did that a couple of times in the wrong room and the C-suite essentially benched me until they trusted me again. Your job as an expert is to win hearts and minds so people will follow your lead. You don't do that by telling them they're wrong.

Peter Maddison 6:02 I was being a little antagonistic there. Hoping you weren't going to say you walk in and tell people they're wrong.

AJ Bubb 6:08 You'd be surprised. When you carry a certain level of experience, it's easy to let that get ahead of you.

Peter Maddison 6:23 I see it a lot with subject matter experts, particularly in something like DevOps. Someone comes in and says, you should be deploying ten times a day, and the organization is sitting there going, we deploy once a year. The gap is so wide they can't even picture it. If you're painting a picture they can't conceive of, you've already lost them.

Start With The Real Problem

AJ Bubb 6:57 And we just lost all the DevOps engineers listening to this. But it's a good example. The engineer wants to release ten times a day. The organization says once a year. The right response isn't to push back. It's to ask, why? Tell me more about what's leading to that. That question is where the real work begins, and it connects to AI in a couple of interesting ways.

AI is, in some ways, the ultimate over-confident subject matter expert. It's trained on vast amounts of human knowledge, and the responses it learned from tend to come from people who are very prescriptive in how they answer things. So of course it mimics that. This is how you do it, full stop. It can sound very convincing while having zero context about the actual situation. That's one side of it.

The other side is the organization deploying once a year or once a quarter. They still exist. And this is where AI plays a misleading role, because the organization says we need to code faster. So let's give all our engineers AI and they'll be 10x engineers. But when you go to the engineers and ask what's holding them back, it's not how fast they can code. It's approvals they can't get, access they don't have, and meetings they're stuck in all day. You've just given people the ability to pump out more code that the rest of the system can't handle. That's a structural problem. AI is not going to fix a structural problem.

Dave Sharrock 9:40 There's something really important in what you're describing. The first thing to do when someone asks how we use AI to make things faster is to ignore the request for tooling and start by understanding how the business actually works. Peter and I have talked about this a lot. The problem in front of you isn't the real problem. It's the invitation that gets you into the conversation. The real benefits come from understanding what actually needs to improve, not just what feels like it needs a fix.

AJ Bubb 10:45 And there's something validating about fixing something small and scoped where you can actually see the difference. But the question is what does that roll up to? What you're describing is the idea of the durable challenge. You have a tool, a symptom, a problem. But what's the overarching challenge underneath all of it?

Take Amazon as an example. On the surface, they solve for wanting things delivered quickly. But at a deeper level, they're solving for a fundamental human need: we need things to survive and live our lives. Once you understand the challenge at that level, the number of possible solutions becomes almost limitless. Drones, local delivery, whatever. So when I go into an organization, I ask why a lot. We need to deliver faster. Why? Because of this. Why? Because our investors want to see it. Okay, now we're somewhere. Let's figure out what's actually driving this, and then work backwards from there.

I'll say this about innovation theater too, because I used to look down on it. I've changed my mind. If the goal is to signal to the market that you're thinking like an inventor, that's actually a legitimate marketing objective. You're not measuring revenue from what you're building. You're measuring market perception, stock price, funding. If you don't realize that's what's actually happening, you'll treat it like a product and it will fail.

Dave Sharrock 13:19 Signaling matters. External signaling around culture and direction makes sense. Internal corporate signaling is usually a different story, but what you're describing around demonstrating innovation to the market, that's real.

AJ Bubb 13:43 Absolutely.

AI Does Not Fix Structure

Dave Sharrock 13:45 Can we go back to something you touched on earlier? Around development teams. We often see it come down from leadership that AI will speed up delivery. But most development teams aren't held back by how fast they can write code. If someone comes to you from the build side with AI as the topic, what are the real problems they're trying to solve?

Velocity Needs Direction

AJ Bubb 14:21 They're looking for direction. Builders build in the direction you point them. And I think direction was already a critical skill before AI. Now it matters even more, because engineers can move faster than ever.

Here's what I find interesting about velocity. So many teams measure progress by story points closed. Okay. But what's your process for deciding something is actually done? Did it go through QA? Did it go through customer validation? What does done actually mean? Because if velocity is literally speed and direction, and you've just multiplied speed by ten while direction was already questionable, you can now move very fast the wrong way. And suddenly you're stacking feature debt, technical debt, all of it, and you end up with enormous systems that don't make sense to anyone.

Peter Maddison 15:59 Builders will keep building.

Dave Sharrock 16:02 And that's what drives the feature explosion. Just adding more hoping the value will suddenly appear, instead of stepping back. It's really hard to pull an organization back from a direction once it's moving. Which is why getting the direction right early matters so much. And that's not an AI problem. That's a people problem and a customer understanding problem.

AJ Bubb 16:40 Exactly. This was already a challenge before AI. Leaders are making strategic bets and they need data to back them up. Some do it well, some go on gut feeling, some get lucky. The product manager's job is to live inside the customer's problem and feed data back to leadership about which direction to go. But right now a lot of organizations feel stuck. Leaders aren't making decisions. And not many people are asking why.

It's because things are moving so fast that we can't collect data quickly enough to keep up. And if we're focused entirely on how to use AI to move faster, we're missing the actual question: what do our customers want and need?

Peter Maddison 17:43 We're just moving the constraint. Classic theory of constraints. The developers get faster but the rest of the system doesn't. I've been doing work recently with an organization and we mapped out what it takes to get an idea in front of a customer. There were about 50 steps. I asked how many of those they were actually using AI on consistently. About three. So what about the other 47? If most of those are still manual, speeding up development hasn't actually changed the system much. AI is showing up in different ways across some of those steps, sure, but the bottlenecks are still there.
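A rough back-of-the-envelope sketch of the arithmetic behind Peter's point here, for anyone who wants to see it worked out. The 50 steps and the three AI-assisted steps come from the episode; the hours per step and the 10x speedup are illustrative assumptions, not measurements.

```python
# Sketch: speeding up a few steps barely moves end-to-end lead time.
steps = 50                 # steps from idea to customer (from the episode)
hours_per_step = 8         # assumed: roughly a working day per step
ai_accelerated = 3         # steps where AI is used consistently (from the episode)
speedup = 10               # assumed: AI makes those steps 10x faster

baseline = steps * hours_per_step
improved = (steps - ai_accelerated) * hours_per_step + ai_accelerated * hours_per_step / speedup

print(f"baseline lead time: {baseline} h")                      # 400 h
print(f"with AI on {ai_accelerated} steps: {improved:.1f} h")   # 378.4 h
print(f"overall improvement: {1 - improved / baseline:.1%}")    # ~5.4%
```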

Feedback Loops And Success Metrics

AJ Bubb 18:48 And then it becomes AI all the way down. We take requirements, feed them into AI, generate specs, feed those into AI. I just published something about synthetic user testing, where you can hire AI agents with customer personas to give you feedback. And I have to be honest, I roll my eyes at that. Are we really going to stop talking to actual customers? You might use it to test your research methodology. But replacing real customer conversations with AI ones? That worries me.

Dave Sharrock 19:38 And it brings back the 50 steps Peter mentioned. If you don't close the loop, the number of steps doesn't matter. The goal isn't faster delivery of something. It's delivery of the right thing. And delivering the right thing to a synthetic audience is very different from delivering the right thing to the actual people using your product. There are so many barriers to getting real feedback, whether it's corporate structure, measurement systems, or just the reality that sometimes a product sits in the market for two years before you learn anything meaningful about it.

Peter Maddison 20:41 In large or complex organizations, that post-release piece is genuinely difficult. Getting a clear signal as to whether something worked, even just right after you deploy it, can be hard. You've got dozens of other things happening at the same time that cloud the picture. And we're all getting faster at shipping. So the question becomes: how do we know what any of this is actually doing? Do we have feedback loops? Are we looking at them? Are we coming back and asking what the impact was?

AJ Bubb 21:53 That learning and research piece needs to be planned from the very beginning. Not at the end. The question that rarely gets asked before a team ships something is: what's our plan for success? What does it look like when this works? I'd love to hear what you've both seen on that.

Dave Sharrock 22:15 Most organizations have no awareness of how early they need to start asking that question. By the time you get to the end of delivery, there's already another feature knocking on the door. So there's this tendency to interpret outcomes generously. An uptick is an uptick. We'll call it a win. Rather than asking: is this what we actually expected? Was our model right? An uptick that's way bigger than expected tells you something important about how you understand your customers. So does one that's smaller. But as long as it's going up, most organizations won't dig in.

Peter Maddison 23:46 I agree. Organizations are generally not good at defining what success looks like upfront or how to measure it. There's also a political element. How much does the person who championed something at the start actually care about being there at the end to see whether it worked? And do we have the systems in place to even answer that question? A post-implementation review might exist, but it often doesn't capture whether the outcome we were expecting actually showed up, especially over a longer time horizon.

AJ Bubb 24:38 Dave, I like what you said about the uptick. As long as there's an uptick, we often don't ask why, whether it beat expectations or fell short. If something blew way past what we expected, that's also a signal worth investigating. What did we miss about the market? Otherwise you're just spraying and praying. Keep releasing, hope something hits.

Peter Maddison 25:20 That is a strategy. And with AI, you can flood the zone even faster. That worries me.

AI Slop And Market Distortion

AJ Bubb 25:26 That's where AI slop comes from. We can release so fast that quality becomes a casualty. I remember watching a documentary about video game competitions where developers had to build a full game inside 1.4 megabytes. People were building entire games in 700k, hyper-optimized, running in memory. Then compute got cheaper and we stopped caring about optimization. Now with AI, the cost of shipping is so low that people just ship everything. And now we're all complaining about AI slop everywhere. Of course. We made it too cheap to know what you actually want, so we're giving you everything instead.

Dave Sharrock 26:25 And it skews the market immediately. Look at any social media platform right now, inundated with algorithmically generated content. What happens? People leave, they filter differently, they adapt. It's not an optimization of a stable market. It's the breakup of that market. And you see the reaction in unexpected places. Handwritten notes making a comeback. People seeking out things that feel demonstrably human.

AJ Bubb 27:16 I think about where this ends up. If AI-generated content keeps proliferating at this pace, designed to sway opinions or just create noise, one path is that people simply filter everything out. The only things they trust are conversations with actual people they know. That's a shift from an information economy to a content economy. Information used to be where value lived. If you had access to listings, or proprietary knowledge from inside a Fortune 100, that was power. Now anyone can ask an AI what it's like to work somewhere. So the value has shifted to trust. Who is this person? Do I believe them? Your audience is your credibility now. The number of people who trust you is what makes others trust you. I didn't realize that's what I was building with my podcast until I was already doing it.

Trust Community And Human Connection

Dave Sharrock 29:17 There's a relationship piece to that too. Content with real depth, where you can feel the time and thought that went into it, makes you slow down and actually read it. And then there's the community side. We're seeing people pull toward reconnection, toward real human contact, almost as a direct response to not being sure what's real anymore.

AJ Bubb 30:06 I realized I had started assuming everyone on Reddit was a bot. Not that they weren't real people, just that the ratio felt so off I stopped believing otherwise. Then I saw a post in a vibe coding subreddit and actually looked the person up, reached out, got on a call. We talked for an hour and a half. I kept thinking, there are still real people on Reddit. I'd just stopped looking for them. But you're right, we're looking for community. We're looking for people. And it's genuinely hard to show up meaningfully in the middle of all that noise. I see more people moving toward in-person connection, not necessarily in the return to office sense, but meetups, small events, virtual communities where people are actually present. Sometimes people are reaching out to AI for that human connection, which is its own conversation.

Dave Sharrock 31:46 That's a whole other episode. Synthetic customer conversation redux.

AJ Bubb 31:53 It's just easier to talk to. And it always thinks your ideas are brilliant.

Dave Sharrock 31:57 Which is exactly the problem.

Peter Maddison 32:02 I run mine in cynical and sarcastic mode. It actually makes me laugh sometimes.

AJ Bubb 32:10 That's the right call.

Two Pizza Teams Under Pressure

Dave Sharrock 32:12 So I want to bring in something before we run out of time. Development teams, classic six to eight people, cross-functional. You've worked at the company that invented the two-pizza team concept. How are those teams changing as AI gets fed into them? Are they getting smaller? Are individuals just working with a cluster of agents? What do you see as the natural evolution?

AJ Bubb 32:59 Great question. It goes back to this: what was actually holding the two-pizza team back in the first place? Was it development speed? Or was it that cross-functional in theory is easy, but in practice you need to give those people real agency to make decisions, real autonomy, real resources? Those were already the constraints. My worry is that we're going to collapse from two pizzas to one, and still have all the same structural barriers to decision-making and direction and customer feedback. You'll just have fewer people who are theoretically capable of doing more, measured against the same old organizational friction that prevents them from moving. Fewer people, same constraints, more pressure. That's my biggest concern. And I think it's what a lot of people are feeling when they say they're being asked to do so much more but don't feel like they're getting anywhere. There are teams doing this really well. But you'll find the whole spectrum.

Peter Maddison 34:40 And it's the system question again. What system is that team operating within? Both on the input side - is the work defined well enough - and on the output side. Even if you have agents writing code, you might still be waiting 30 minutes for something to containerize and push to a Kubernetes cluster. You can develop faster than the pipeline can move. The bottleneck just shifted.
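A small sketch of the shifted constraint Peter is describing: when finished changes arrive faster than the pipeline can absorb them, the queue in front of the pipeline just grows. The 30-minute pipeline time is from the episode; the arrival rate is an assumed figure for illustration.

```python
# Sketch: faster coding doesn't change delivery if the pipeline is the bottleneck.
pipeline_minutes = 30                       # containerize + push to the cluster (from the episode)
changes_per_hour = 4                        # assumed: AI-assisted devs finish 4 changes/hour
pipeline_per_hour = 60 / pipeline_minutes   # pipeline can absorb 2 changes/hour

queue = 0.0
for hour in range(1, 9):                    # one working day
    queue += changes_per_hour - pipeline_per_hour
    queue = max(queue, 0.0)
    print(f"hour {hour}: ~{queue:.0f} changes waiting on the pipeline")
# The backlog grows by two changes every hour; the pile-up just moved downstream.
```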

Ownership And Getting Unblocked

AJ Bubb 35:21 Exactly. And there's a principle underneath this that I think is already breaking down in a lot of agile teams. When you pick up a ticket, you're taking ownership of that piece of work through to completion. But what I see instead is people treating the board like a personal Kanban. Pick something up, it's blocked, grab something else. Keep the velocity numbers looking good. And nobody's following up on the blocked thing.

The blocker just sits there. The scrum master might catch it in standup eventually. But the mentality of I own this until it's done gets lost. And with AI making it so easy and genuinely satisfying to ship new things, it's even easier to step away from the hard, blocked work and grab something that feels productive. So you end up with more things taking even longer, because individuals moved on assuming someone else would handle it.

I had an engagement where we were waiting on a database provider to change a stored procedure. The team marked the ticket blocked and moved on. Weeks later it still wasn't done. We changed the rule: you can only have three things assigned at a time, including blocked items. If something's blocked, you call them every day. Eventually I got a call from the client saying someone on our team kept calling and they weren't ready. I said, then escalate it internally, because they're counting on us and that person isn't going to stop calling until we're unblocked. Two days later, it was resolved. It's uncomfortable. It creates noise. But that's what ownership actually looks like.
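A minimal sketch of the pull policy AJ describes, where blocked items still count against your limit so you can't quietly park them and grab something new. The ticket structure, field names, and owner here are hypothetical, just to make the rule concrete.

```python
# Sketch: blocked work still counts toward your personal limit.
WIP_LIMIT = 3  # assigned items per person, blocked items included

def can_pull_new_work(board, person):
    # Everything assigned to this person that isn't done, including blocked tickets.
    assigned = [t for t in board if t["owner"] == person and t["status"] != "done"]
    return len(assigned) < WIP_LIMIT

board = [
    {"id": "DB-12", "owner": "sam", "status": "blocked"},      # waiting on the database provider
    {"id": "API-7", "owner": "sam", "status": "in_progress"},
    {"id": "UI-3",  "owner": "sam", "status": "in_progress"},
]

print(can_pull_new_work(board, "sam"))  # False: the blocked ticket is still Sam's problem
```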

Dave Sharrock 37:09 And that chasing behavior only works when the whole system is aligned around it. The reason it feels like noise is because the other side isn't used to being held accountable in that way. Getting to a place where people act on requests because they recognize the impact, not because someone's calling three times a day, is a whole different mindset shift that has to happen across the organization, not just in the delivery team.

Takeaways And Wrap-Up

Dave Sharrock 39:12 Let's bring this together. One takeaway each. AJ, you go first.

AJ Bubb 39:28 Own the problem. From the customer all the way down. Own it.

Peter Maddison 39:37 Mine is the broken model point. When you see an outcome, whichever direction it goes, are you actually asking whether the model you used to set that expectation was right? That step back rarely happens and it should.

Dave Sharrock 39:57 Mine goes back to the beginning of the conversation. When someone comes in with a problem and a technical solution already in mind, the first thing we do is ask five whys. What's actually going on? How does the business work? Where is the real problem? A good reminder to stay curious before getting prescriptive.

AJ Bubb 40:27 Love it.

Dave Sharrock 40:29 AJ, thank you. This was a great conversation. Peter, as always, a pleasure. And if anyone has comments or feedback, drop a note on whatever platform you're listening on, or reach out at feedback@definitelymaybeagile.com.

AJ Bubb 40:49 Thanks so much for having me. And please like and subscribe.

Peter Maddison 40:53 I always forget to say that. Thanks for the reminder.

AJ Bubb 40:56 My producer says nobody's going to subscribe if you don't ask them to. So. Please. Thank you both, this was genuinely a great conversation.

Peter Maddison 41:12 Thank you. You've been listening to Definitely Maybe Agile, the podcast where your hosts Peter Maddison and Dave Sharrock focus on the art and science of digital, agile, and DevOps at scale.

agile, DevOps, AI Adoption, Product Development, software delivery, engineering teams, velocity, technical debt, Product Management, organizational change, digital transformation, startup, Innovation, customer feedback, Feedback Loops, two pizza teams