A Useful Technique: MoSCoW

MoSCoW is an acronym of sorts that stands for Must-, Should-, Could-, and Won’t-Have.

It’s most commonly used during release planning to give stakeholders and teams a shorthand for scope flexibility.

On a release plan board, it may look like this:

Note that we’ve indicated Must, Should, Could, and Won’t to the left of the release planning board.

  1. Must-have means exactly that: if even one must-have item isn’t done by the time we want to release, we won’t release. Must-haves are critical to the project’s success. They should be no more than 60% of your release scope; if your team is new and doesn’t have a known, stable velocity, consider limiting them to 40%.
  2. Should-haves are also important, but not absolutely necessary for the release. Limit these to about 20%.
  3. Could-haves are desired but not necessary. If we’ve got the time and resources, we’ll include them in the release. These make up the balance. (A quick way to sanity-check these proportions is sketched after this list.)
  4. Having a “Won’t” section allows us to give stakeholders early warning about the desired backlog items that are unlikely to be in the release.
  5. If you’re working towards a release or major milestone, indicate it in your plan. It can be located anywhere; it doesn’t have to be at the end. Adjust your MoSCoW indicators accordingly.
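To make those proportions concrete, here’s a minimal sketch (in Python) of a release-scope sanity check. The item names, point values, and exact thresholds are illustrative assumptions, not part of the MoSCoW technique itself:

    # Minimal sketch: check the MoSCoW mix of a release plan by story points.
    # Items, point values, and thresholds below are illustrative assumptions.

    def moscow_mix(items):
        """items: list of (category, story_points) tuples."""
        total = sum(points for _, points in items)
        mix = {}
        for category, points in items:
            mix[category] = mix.get(category, 0) + points
        return {category: points / total for category, points in mix.items()}

    release = [
        ("must", 8), ("must", 5), ("must", 8),    # critical scope
        ("should", 5),                            # important but deferrable
        ("could", 3), ("could", 2), ("could", 3), # nice to have
    ]

    mix = moscow_mix(release)
    if mix.get("must", 0) > 0.60:
        print("Warning: must-haves exceed 60% of release scope")
    if mix.get("should", 0) > 0.20:
        print("Warning: should-haves exceed about 20% of release scope")

Running this flags the illustrative plan’s must-have bucket as slightly over 60%, which would be a cue to demote something to should- or could-have.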

Typewriter image: “Prioritize” by Nick Youngson, CC BY-SA 3.0, ImageCreator


A Useful Technique: “Fists-of-Five” voting

It’s often useful to get a “gut check” on the likelihood of a sprint or release plan being successful. If you’re a new team and don’t have an established velocity, this “Fists-of-Five” gut check might be all the data you have.

To perform a “Fists-of-Five” vote, every member of the team rates the likelihood of the plan succeeding on a 1–5 scale. Explain the scale (shown below), then everyone votes simultaneously on the count of three.

[Image: Fists of Five voting technique]

In general, it’s acceptable to proceed if everyone votes 4 or 5. If someone votes a 3 or less, ask them what it might take to vote a 4 or 5.

For instance, suppose all of the developers vote 4 or 5, but the user experience researcher votes 2. Ask why. They may tell you that there are too many items that require their skill set in the sprint. They may be willing to vote a 4 or 5 if some of those items are replaced with developer-oriented items. Then vote again.
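If you want to capture these votes somewhere, the decision rule fits in a few lines. This is a minimal sketch in Python; the names and scores are made up for illustration:

    # Minimal sketch of the Fists-of-Five decision rule: proceed only if
    # everyone votes 4 or 5; otherwise ask the low voters what would change their vote.

    def fists_of_five(votes):
        """votes: dict mapping team member -> score from 1 to 5."""
        low_voters = [name for name, score in votes.items() if score <= 3]
        if not low_voters:
            return "Proceed with the plan."
        return "Discuss and re-vote: ask " + ", ".join(low_voters) + " what it would take to vote 4 or 5."

    print(fists_of_five({"Dev A": 5, "Dev B": 4, "UX researcher": 2}))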


I.N.V.E.S.T.

Here’s a video I shot for freeCodeCamp about the I.N.V.E.S.T. mnemonic.

I.N.V.E.S.T. is a great starting point for your team’s Definition of Ready. In this brief video, you’ll learn what the mnemonic means and gain insight into a frequently required but often overlooked addition.



The Agile Manifesto

Here’s a video I shot for freeCodeCamp about the Agile Manifesto.

The Agile Manifesto describes the fundamental values upon which every Agile framework rests. In this brief video, you’ll learn what the full Agile Manifesto is, along with a little history of how it was created.



How Scrum Teams Can Seek Out and Destroy Organizational Impediments and Unplanned Work

A team should be able to complete 80–110% of their planned stories each and every sprint without heroics.

Why is this important?

  • The work output from this team is predictable. When the team commits to a set of stories at the beginning of the sprint, other teams can rely on them to deliver.
  • Predictable output breeds confidence. If a team consistently delivers on their commitments, they are considerably more credible when they need to push back on unrealistic expectations.
  • The team will likely feel motivated because they’ve demonstrated a degree of mastery in their craft.
  • The team has a stable base. Because they are delivering on their expectations, they can focus their energy on continuous improvement and optimization.

If a team regularly completes less than 80% of their sprint objectives, why does this happen?

  • The work tasks do not meet INVEST criteria and thus cannot be estimated accurately.
  • New work is given to the team mid-sprint.
  • The team faces new and old impediments that interfere—usually unpredictably—with their ability to deliver the work.

It’s not always easy to glean these issues from tools like Rally. Thankfully, there’s a simple solution that can help both individual teams and the program discover the severity and nature of the issues that prevent a team from achieving fast, flexible flow.

The Status Quo

Let’s take the example of a 2-person team working a 2-week sprint. (This isn’t an ideal team setup, but it keeps the numbers easier to work with.) Here’s their sprint backlog a few hours after planning:

[Image: sprint backlog right after planning, without an Unexpected Requests and Impediments story]

They’ve taken on 39 story points, which is one fewer than the 40 accepted story points they completed last sprint. That’s perfectly reasonable.

They’ve added tasks to each of these stories and begun work on the first one.

I like to assume 6 hours/day of productivity per developer to account for planning meetings, standups, retrospectives, breaks, lunch, etc. Two developers * 2 weeks * 6 hours/day = 120 hours. Assuming a 20% “safety factor” (some teams use 25% or 30%; the truth is that we’re splitting hairs at this point), the team should be able to complete about 96 hours of planned tasks this sprint. They’ve identified 93, so this “smells” okay.

(Note: the team should use story points to gauge how much work to accept into the sprint backlog. Use the task hours as a sanity check.)
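Here’s the same capacity math as a minimal sketch. The 6 focus hours/day and 20% safety factor are this example’s working assumptions, not universal constants:

    # Minimal sketch of the sprint-capacity sanity check described above.
    # Focus hours and safety factor are this example's assumptions.

    developers = 2
    working_days = 10            # two-week sprint
    focus_hours_per_day = 6      # leaves room for planning, standups, retro, breaks

    raw_capacity = developers * working_days * focus_hours_per_day   # 120 hours
    safety_factor = 0.20
    planned_capacity = raw_capacity * (1 - safety_factor)            # 96 hours

    planned_task_hours = 93
    verdict = "looks reasonable" if planned_task_hours <= planned_capacity else "overcommitted"
    print(f"Capacity ~{planned_capacity:.0f}h, planned {planned_task_hours}h: {verdict}")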

Let’s fast forward a week and a half. It’s Tuesday afternoon, and there are about 2-1/2 days left in the sprint:

[Image: sprint backlog, mid-sprint view]

The product owner has accepted 18 points, or 46% of the sprint. There are 36 hours of work left and about 30 hours of time left, so we’re a little behind. Most novice Scrum teams would not register concern at this point.

What happened during the sprint? The development team raised impediments during the standup and worked through them. One developer was out sick for a day. The team had to go to an unexpected all-hands meeting, and they had to do a couple of side projects.

The problem is that there’s no measurement or record of these unexpected requests and impediments. The unexpected requests should not have been added mid-sprint unless they were (rare) “on-fire” issues. The team (and anyone who attended the standups) knows what the issues were, but this knowledge is limited or non-existent at the program level and above.

This is a missed learning opportunity as we do not have the transparency we need to inspect and adapt.

Introducing the Unexpected Requests and Impediments Story

Let’s rewind and add a new story to the sprint backlog:

[Image: sprint backlog right after planning, with the Unexpected Requests and Impediments story added]

Note the addition of “Sprint 5 Unexpected Requests and Impediments” at the bottom. This doesn’t get story points and it’s at the bottom because it’s the last thing you want your team to be working on.

Each and every unexpected request or impediment gets added to this story as a task (with hours) during the sprint, like so:

[Image: unexpected requests and impediments added to the story as tasks during the sprint]

Suddenly these side projects and impediments become real.

Let’s take a look at that mid-sprint view of the backlog again.

[Image: mid-sprint view of the backlog, including the Unexpected Requests and Impediments story]

Suddenly, the problem becomes even clearer. We should be able to complete about 120 hours of work in a 2-person, 2-week sprint, but our task estimate is now up to 139. Unless this team works overtime (which they should not do, as it is demotivating and ultimately kills productivity), we’re not going to complete all of our stories in time for the demo.
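The same arithmetic, applied mid-sprint, makes the overcommitment explicit. This minimal sketch just restates the example’s numbers; in practice you’d pull them from your tracking tool:

    # Minimal sketch of the mid-sprint check, using this example's numbers.

    raw_capacity_hours = 120        # 2 devs * 10 days * 6 focus hours/day
    total_task_estimate = 139       # planned tasks plus unexpected requests and impediments

    remaining_capacity_hours = 30   # ~2.5 days left * 2 devs * 6 focus hours/day
    remaining_task_hours = 36       # open task hours on planned stories

    print(f"Over raw capacity by {total_task_estimate - raw_capacity_hours}h")
    print(f"Remaining work exceeds remaining time by "
          f"{remaining_task_hours - remaining_capacity_hours}h")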

So here’s where this team ended up right before their demo:

[Image: sprint backlog right before the demo]

They completed 28 story points or 72%. A “pointy-haired boss” might look at this team and say “you failed.”

That statement in and of itself is a failure. It jumps to the conclusion that the team experienced a performance failure. In reality (with all credit due to Mary Poppendieck), the more likely failure is that of the original hypothesis: that the team could have completed the work in the first place. There’s a major missed opportunity: the opportunity to learn something from our system and adapt.

We budgeted 20% of our time for these sorts of issues, or 24 hours. We wound up with 46 hours of unexpected requests and impediments, 22 hours “over budget.” We had 15 hours of work remaining on the two stories we didn’t complete, so it’d be pretty reasonable to say that, had it not been for those extra 22 hours of work, we would have completed this sprint (and perhaps even added a 1- or 2-point story).

Ideally, you’re keeping track of your velocity from sprint to sprint. Add another metric: keep track of the percentage of task hours each sprint that came from unexpected requests and impediments.
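Here’s a minimal sketch of that metric using this sprint’s numbers; the data layout is just an illustration:

    # Minimal sketch: track what share of each sprint's task hours came from
    # unexpected requests and impediments, and compare against the budget.

    impediment_budget_hours = 24    # 20% of the 120-hour raw capacity

    sprints = {
        "Sprint 5": {"planned": 93, "unexpected": 46},
    }

    for name, hours in sprints.items():
        share = hours["unexpected"] / (hours["planned"] + hours["unexpected"])
        overrun = hours["unexpected"] - impediment_budget_hours
        print(f"{name}: {share:.0%} of task hours were unexpected ({overrun:+d}h vs. budget)")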

So what?

Now we have transparency. Transparency allows inspection, and inspection allows adaptation. Here are some ways to use this information to inspect and adapt:

  • The team can review the impediments and suggest user stories to the product manager (often spikes or technical user stories) to help address some of the underlying technical impediments.
  • The team can use this as feedback that they may need to slow down and refactor to address technical debt. They may not want to create new user stories, but they should at least spend a little extra time on their new user stories to clean up old debt and avoid creating new debt.
  • The Product Owner can show stakeholders the cost of unexpected requests and impediments on their predictability. This gives them the evidence they need to hold off on new requests until the next sprint and spend more time building quality into the work that they are doing.
  • Engineering managers and program managers can review impediments across teams and look for impediment patterns to solve. For instance, an engineering manager may be able to quantify that the company spends 10-15% of its development time fixing broken environments. This data could justify a much-needed investment: “We lose $1M a year in productivity fixing broken environments [based on salaries multiplied by time lost]. A new VM system would reduce this cost by 50% and cost us $100K.” (A back-of-the-envelope version of this case is sketched below.)
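Here’s a minimal sketch of that investment case. Every figure is an illustrative assumption, not measured data:

    # Back-of-the-envelope sketch of the broken-environments investment case.
    # All figures below are illustrative assumptions.

    annual_productivity_loss = 1_000_000   # $/year lost to broken environments
    expected_reduction = 0.50              # fraction of that loss the new VM system removes
    investment_cost = 100_000              # cost of the new VM system

    annual_savings = annual_productivity_loss * expected_reduction
    payback_months = investment_cost / (annual_savings / 12)
    print(f"Annual savings ${annual_savings:,.0f}, payback in about {payback_months:.1f} months")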

There you have it. Regardless of the software you use (if any), you can add the Unexpected Requests and Impediments story to your sprint backlog. You can use the data it generates to gain knowledge and take corrective action.

What are your thoughts? Have you used something like this in your own team? Please share your thoughts!

Photo from Kyle Pearson on flickr


Rally’s “The Impact of Agile Quantified” White Paper

I’m not normally a huge fan of white papers, but Rally Software has done something extraordinary with this one. They’ve analyzed the process and performance data for nearly 10,000 teams using the Rally platform to extract some rather interesting findings. While there’s empirical evidence to support many of the prescribed Agile behaviors, Rally’s unique access to performance data as a SaaS process tool provides them with the ability to get an inside look across many different companies and teams.

Here’s Rally’s introduction:

Though people have made Agile recommendations for many years, we have never been able to say how accurate they actually are, or how much impact a particular recommendation might make. [Chris: I disagree as many of these recommendations have been made based on other data and evidence.]

The findings in this document were extracted by looking at non-attributable data from 9,629 teams using Rally’s Agile Application Lifecycle Management (ALM) platform. Rally is in the unique position to mine this wealth of SaaS (cloud-based) data, and uncover metrics-driven insights.

These insights give you real world numbers to make an economic case for getting the resources you need, and your people to commit to change. That’s the underlying motivation of this work.

A few highlights that I’ve copied and pasted:

[T]here is almost a 2:1 difference in throughput between teams that are 95% or more dedicated compared with teams that are 50% or less dedicated.

Stable teams result in up to:
60% better Productivity
40% better Predictability
60% better Responsiveness

Teams doing Full Scrum estimating [both story points and task hours] have 250% better Quality than teams doing no estimating

Teams that aggressively control [work in process]:
• Cut time in process in half
• Have ¼ as many defects
• But have 34% lower Productivity

Small teams (of 1-3 people) have
• 17% lower Quality
• But 17% more Productivity
Than teams of the recommended size (5-9)

While these are the summary findings, the white paper is short and well worth a read. Check it out!


A Response to Marty Cagan’s “Product Management vs. Product Marketing”

Although over 6-1/2 years old, Marty Cagan’s “Product Management vs. Product Marketing” remains the Silicon Valley Product Group’s top blog article. (It’s entirely possible, of course, that its popularity is self-reinforcing due to its prominent position on the SVPG home page…) While I generally agree with Marty’s premise and proposed solution, I believe that the article was written primarily from a Waterfall perspective and that an Agile perspective offers a better way out.