Do You Actually Have a Capacity Problem?
Definitely Maybe Agile · April 23, 2026
Episode 217
00:19:49 · 13.64 MB


Most organizations think they have a capacity problem. They usually don't.

What they have is a work-in-progress problem. And those two things call for very different solutions.

In this episode, Peter Maddison and Dave Sharrock dig into one of the most persistent headaches in organizational management: capacity tracking. Why does the instinct to measure utilization backfire? Why does loading people up to 100% actually slow things down? And what should leaders be asking instead?

The conversation covers the real cost of context switching, why that "nearly done" project is probably further away than it looks, and how AI is making all of this more urgent, not easier.

Three things to take away from this episode:

  1. 100% utilization is not a goal. It's a warning sign. 
  2. The right question isn't "how much capacity do we have?" It's "how much work in progress can we actually sustain?" 
  3. AI accelerates your breaking points.

If this conversation resonated, there's more where it came from. Peter Maddison and Dave Sharrock explore these kinds of organizational challenges every week on Definitely Maybe Agile - the podcast that gets into the real complexity of modern ways of working, without the buzzwords.

Listen wherever you get your podcasts, or visit definitelymaybeagile.com to catch up on past episodes and reach out with your own questions.

New episodes released every Thursday to challenge your thinking and inspire action.


Welcome And The Capacity Question

Peter Maddison 0:04 Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and David Sharrock discuss the complexities of adopting new ways of working at scale.

Dave Sharrock 0:12 Hello, Peter. How are you today? I'm looking forward to the conversation. I've had a few thoughts brewing, and I'll try to keep my stronger opinions in check. We'll see how that goes. We're talking about capacity management today. It's important, right? Project managers track capacity all the time. So what's the actual value of it?

Peter Maddison 0:40 Well, we want to know how much of it we have left. Like, if Bob over there is 10% on Project X, 15% on Project Y, and 65% on Project Z - he's got a little room left. Maybe he could pick up another project.

Why Capacity Tracking Misleads

Dave Sharrock 0:58 Let me push back a little on that. My first take is that in knowledge work - which is where most of the organizations we work with are operating - capacity management is a red herring. I really, really want organizations to stop talking about it.

Peter Maddison 1:22 I'd agree 100%. But there are a couple of very large businesses selling very expensive products - we're talking millions of dollars to implement - that would not be happy hearing you say that.

Dave Sharrock 1:36 Fair point. Let's explore it. One of the most common situations I see is a senior leadership team struggling to reconcile the amount of work their organization delivers compared to its size, its cost, the number of people involved. There's this fundamental mismatch between output and investment. The feeling is: we're not getting enough from the teams.

Overload Raises WIP And Slows Delivery

Peter Maddison 2:22 Right, and a lot of the time that comes from leaders looking at everything that needs doing, all the commitments that have been made - often without talking to the teams, mind you - and concluding that delivery just isn't moving fast enough.

Dave Sharrock 2:44 I actually have a lot of empathy for those leadership teams. Because as soon as you overload a system, work in progress goes up. And the consequence of high work in progress is longer delivery time, which means less gets delivered overall. That's Little's Law in action, and we see it everywhere. The problem is that leadership often doesn't recognize what's causing it. Their instinct is that somebody somewhere isn't working hard enough, so there must be room to pile in more. It's an outdated management perspective - honestly, it's over a hundred years old.
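The Little's Law relationship Dave refers to can be sketched in a few lines. The numbers below are illustrative only, not from the episode: the point is that if throughput stays flat, doubling work in progress doubles how long each item takes.

```python
def cycle_time(avg_wip, throughput_per_week):
    """Little's Law: average time in system = average WIP / average throughput."""
    return avg_wip / throughput_per_week

# A team finishing 5 items a week with 10 items in flight:
print(cycle_time(10, 5))   # 2.0 weeks per item

# Overload the system to 20 items in flight, same throughput:
print(cycle_time(20, 5))   # 4.0 weeks per item
```

Piling in more work doesn't change the divisor, only the numerator, so every item waits longer.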

Peter Maddison 3:42 Time and motion studies. Taylorism. All of those wonderful ideas from the turn of the century - the one before last.

Taylorism Breaks Down In Knowledge Work

Dave Sharrock 3:50 Exactly. And in a manufacturing context - assembling cars, tightening bolts - time and motion studies worked reasonably well through the mid-twentieth century. The problem is that knowledge work is mostly invisible.

Peter Maddison 4:22 And it has the opposite effect of what you'd expect. Instead of higher throughput and hitting your targets, you end up with functional silos. Everyone's so busy they can't help the next team. Everything jams up down the line. And of course, when something urgent comes in: drop everything.

Dave Sharrock 5:04 In a factory, the assembly line has physical limits. You can't add another vehicle when it's full - it's obvious. But in knowledge work, all of that is invisible. So we just park things to one side, and that creates delays that are really hard to track. The tools that claim to help you track it? I'm skeptical. The root misunderstanding is: we're not getting the output we want, so there must be spare capacity to absorb more work. That's the thing I'd most like to see change. The question leaders should be asking instead is: how do we reduce the amount of work going through the system until we start to see actual flow? Start there. Then you can learn and adjust.

Predictability Through Historical Throughput

Peter Maddison 6:12 It's a strategy question. Saying yes to everything isn't prioritization - it's accumulation. And if you keep piling work into the system, you're going to grind it to a halt. There is a real need to understand the capacity of your system. Just not the way most organizations think about it.

Dave Sharrock 6:43 Right. And that brings me to the second piece: predictable delivery. How much can a team consistently deliver each week? That's where you can actually start answering the capacity question. You look at historical throughput - what has this team actually delivered over time? Are we above or below that norm? Has something changed in the last month? That's a much more useful starting point.
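The historical-throughput baseline Dave describes is simple to compute. A minimal sketch, with made-up weekly numbers, comparing the latest week against a short rolling average:

```python
from statistics import mean

def throughput_baseline(weekly_completed, window=4):
    """Average items finished per week over the last `window` weeks."""
    return mean(weekly_completed[-window:])

history = [6, 5, 7, 4, 5, 6]          # items finished each week (illustrative)
baseline = throughput_baseline(history)
latest = history[-1]
status = "at or above" if latest >= baseline else "below"
print(f"baseline {baseline:.1f}/week, latest week {latest}: {status} the norm")
```

Counting finished items against a recent norm answers "has something changed?" without tracking anyone's percentage utilization.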

Context Switching And The 90% Trap

Peter Maddison 7:20 It is, and it takes time to build up that organisational knowledge. Just like a new team needs time to find its footing. The other piece you touched on - the slicing and dicing of people across multiple projects - also doesn't work. The idea that someone can split their time 10% here, 10% there across ten different things... that's not really happening. Context switching alone eats 60 to 80 percent of the time.
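The context-switching cost Peter cites can be modelled with a common rule of thumb (after Gerald Weinberg's estimates): each simultaneous project beyond the first loses roughly 20% of a person's total time to switching. The exact figures vary; this is a sketch of the shape of the curve, not a measurement.

```python
def effective_time(projects, switch_cost=0.20):
    """Fraction of time spent producing work rather than context switching.
    Rule-of-thumb model: each project beyond the first loses ~switch_cost
    of total time, capped so some work still happens."""
    loss = min(switch_cost * (projects - 1), 0.8)
    return 1.0 - loss

for n in (1, 2, 3, 5):
    print(f"{n} projects -> {effective_time(n):.0%} of time is productive")
```

By four or five concurrent projects the model puts switching losses in the 60-80% range Peter mentions.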

MVP Thinking And Saying No

Dave Sharrock 7:59 Once you start looking at past performance, a few things come into play. You can't look over a 12-month window right now - the environment is moving too fast. A month or less is probably more realistic. But the other thing we typically find is that organizations don't track work in the right way. Work packages are partially complete, so you end up in that endless percent-complete conversation. I remember sitting at an Agile conference years ago listening to Ron Jeffries - one of the signatories of the Agile Manifesto. He's a brilliant technologist and programmer. He said he was one of the world's best at delivering 90% of what you're looking for, but the world's slowest at getting to 100%. That last 10% takes forever. And I think that's the real problem with partial completion. We're optimistic. We think we're nearly there. But that last piece is where all the uncertainty lands. So we need to package work so things are actually finished - done and moved on, done and moved on.

Peter Maddison 9:26 A lot of organisations struggle with MVP for exactly that reason. They want it to feel complete. But here's a related reality: of all the features in a piece of software, only the first 20% tend to get used. The other 80% is money going down the drain. So just don't build it.

Dave Sharrock 9:55 Product managers should probably be targeted on taking that full list of features and cutting the 80% you don't need. The hard part is making that call.

Peter Maddison 10:04 I told an executive the other day: just ask your leaders one question every time they come to you with a request. Ask what they said no to. If the answer is nothing, they're just accumulating. Work is always coming in from somewhere.

Delivery System Design And Feedback Loops

Dave Sharrock 10:23 And as we talk about what we're saying no to, how to reprioritise, how to package work properly - we also need to make sure that when work is done, it actually ships, gets feedback, and has some sense of completion. Not just passed off to the next team in a long chain of handoffs.

Peter Maddison 10:52 Which takes you back to your delivery system. If the system is a series of handoffs all the way to production, the people who built it never find out whether what they built actually worked. So you have to think carefully about how your delivery system is designed. Are you aligned to the products you're putting into market? That's another key piece of understanding what capacity really means in your context.

Dave Sharrock 11:31 I like where you're going with that. And I want to add: you don't need to spend six months figuring all of this out.

Peter Maddison 11:43 Oh, absolutely not.

Start Small With Pressure Points

Dave Sharrock 11:44 Start where you are. Apply a strong Pareto lens. You're not trying to understand everything - you're finding where the pressure points are right now, making a few adjustments, and then doing it again. That's really what continuous improvement means in practice. It takes the scale of change out of the conversation.

Peter Maddison 12:12 Exactly. And one thing we're seeing more of lately - because AI comes up in almost every conversation now - is that AI accelerates the breaking points.

AI Accelerates Bottlenecks And Failure

Dave Sharrock 12:25 You hit them much faster, and they become much more visible. Almost catastrophic, actually. Think of a shop that normally gets one delivery a day and suddenly AI means they're getting thirty. It's not a gradual increase - it's overnight gridlock. Organisations that think adding AI will just surface a few small problems are in for a surprise. If the underlying system isn't ready, AI doesn't smooth things out. It gums everything up, fast.

Peter Maddison 13:04 Right. And a good example is if you can generate a lot of output quickly but there's no thought on the other end about intent - how do I know if this thing I just shipped is actually having the impact I expected? - then there's no feedback loop. You're just throwing things at the wall at speed.

Dave Sharrock 13:29 Exactly.

Reporting Reality While Transitioning Metrics

Peter Maddison 13:31 So this is a really interesting space. I run into organizations all the time that measure capacity by counting headcount and assigning percentage utilization across projects. We try to steer them away from that. What they actually want - underneath that question - is predictability. When is this going to be ready? That's especially critical for sales people standing in front of a customer saying "that mobile app is coming soon, right?" Creating that shared understanding across the whole value chain doesn't have to take a lot of effort. Making it visible quickly has enormous value.
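One common way to turn historical throughput into the predictability Peter describes is a Monte Carlo forecast: replay past weekly throughput at random until the backlog drains, and report a conservative percentile. The samples and backlog below are illustrative assumptions, not data from the episode.

```python
import random

def forecast_weeks(throughput_samples, backlog, trials=10_000, seed=42):
    """Monte Carlo 'when will it be ready': simulate draining the backlog
    by drawing weekly throughput from history; return the 85th-percentile
    number of weeks needed."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        remaining, weeks = backlog, 0
        while remaining > 0:
            remaining -= rng.choice(throughput_samples)
            weeks += 1
        results.append(weeks)
    results.sort()
    return results[int(trials * 0.85)]

print(forecast_weeks([4, 5, 6, 7, 5], backlog=30), "weeks (85% confidence)")
```

The answer is a date range with a confidence level, which is something a salesperson can actually stand behind, unlike a percent-complete figure.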

Dave Sharrock 14:36 I'd add to that: depending on where you sit in the organisation and how much influence you have, you may still need to keep providing that capacity information in parallel. Because realistically, you're not going to get two to six weeks of new data in front of leadership and convince them overnight to stop using the old approach. So keep running both for a while. Keep giving them what they're used to, while simultaneously showing them simpler, cleaner data that gives more accurate - or I should say more relatable and realistic - insight into what's actually happening.

Peter Maddison 15:34 And leaders will appreciate it, because it gives them the predictability they're actually looking for.

Dave Sharrock 15:40 Exactly. And it's actionable.

Peter Maddison 15:40 You can see when things are going off track before they've gone too far. Which you never really get when you're counting heads and percentages and feeding it all into a central system. That said - I know we've both seen this - you often end up with someone whose job is literally to tap each person on the shoulder weekly and ask how many hours they spent on which project, then fill it into a spreadsheet or a tool so someone in a PMO can do their reporting. Not an uncommon pattern.

Dave Sharrock 16:48 Not at all. How do we wrap this up? Three takeaways?

Wrapping Up

Peter Maddison 16:53 Three things. First: the way most organisations track capacity today - assigning people a percentage across multiple projects - is based on a belief that someone at 100% utilisation equals full value. That's not correct. When someone is at 100%, you're just adding handoffs, and the curve is exponential. You won't get the results you're expecting.
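The curve Peter is gesturing at shows up in basic queueing theory: in a simple single-server (M/M/1) model, relative queueing delay grows like u/(1-u) as utilization u approaches 100%. Strictly it's hyperbolic rather than exponential, but the practical point stands: delay blows up near full load. A minimal sketch:

```python
def relative_delay(utilization):
    """M/M/1 queueing sketch: relative waiting time grows like u / (1 - u).
    Simplified model, illustrative only."""
    return utilization / (1.0 - utilization)

for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{u:.0%} busy -> relative delay {relative_delay(u):.1f}x")
```

Going from 50% to 90% busy multiplies the delay ninefold; the last few percent of "efficiency" cost far more than they return.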

Dave Sharrock 17:39 Second: when a senior leadership team asks for visibility into capacity, that question is completely understandable. We've talked about what not to do, but the underlying question is legitimate. The answer just isn't about capacity in the traditional sense. It's about work in progress, cycle time, and Little's Law.

Peter Maddison 18:12 And third: AI is accelerating that conversation right now. How many people do we need? What roles? Which systems should we run AI through to get more output? If you don't understand your delivery system first, AI is not going to give you the results you're imagining.

Dave Sharrock 18:28 And it can go wrong in exactly the wrong direction. We've already seen a few high-profile examples of organisations that moved fast, made cuts, and then had to backtrack publicly because the system started falling over.

Peter Maddison 18:51 Because they didn't understand the delivery system they were changing. So that's probably a good place to stop. Don't forget to subscribe if you enjoyed this conversation, and you can reach us at feedback@definitelymaybeagile.com. Until next time.

Dave Sharrock 19:34 Until next time, Peter. Thanks again.

Peter Maddison 19:36 You've been listening to Definitely Maybe Agile, the podcast where your hosts Peter Maddison and David Sharrock focus on the art and science of digital, agile, and DevOps at scale.
