Root cause analysis using 5 Whys


My brother is a Six Sigma consultant and my nephew is at that age where he asks "Why?" a lot. I’m wondering whether he’s been trained by his father to use Six Sigma’s root cause analysis technique, the 5 Whys.




The technique is simple. Write a description of the failure on a whiteboard. This helps formalise the failure and also helps the team involved to focus. Ask the team, 5 times, why the failure occurred, each time writing the answer given on the whiteboard. Repeatedly asking the question helps to burrow through the symptoms and identify a root cause of the problem (there may be more than one root cause). 5 is a rule of thumb; you may ask the question fewer or more times before you find the root cause of the failure.
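
If you want to keep the outcome of a session alongside other retrospective notes, the whole exercise boils down to a failure statement, a chain of question/answer pairs, and whichever answers get flagged as root causes. Here is a minimal sketch of that structure in Java; the class and method names are my own invention, not part of any Six Sigma tooling.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal record of a 5 Whys session: the failure statement, each round of
    // "Why?" questioning with the answer given, and the root causes identified.
    // Illustrative only -- not taken from any particular tool.
    public class FiveWhys {

        record Why(String question, String answer, boolean rootCause) {}

        private final String failure;
        private final List<Why> whys = new ArrayList<>();

        public FiveWhys(String failure) {
            this.failure = failure;
        }

        // Ask as many times as it takes; five is only a rule of thumb.
        public void ask(String question, String answer, boolean rootCause) {
            whys.add(new Why(question, answer, rootCause));
        }

        public void print() {
            System.out.println("Failure: " + failure);
            for (int i = 0; i < whys.size(); i++) {
                Why w = whys.get(i);
                System.out.printf("%d. Why %s Because %s%s%n", i + 1,
                        w.question(), w.answer(),
                        w.rootCause() ? " [root cause]" : "");
            }
        }
    }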





5 Whys applied in a real retrospective





In a previous post, I talked about the experiences a development team were having in relation to slop and slack. One particular problem was that planned user stories were being descoped from each iteration as the last day approached. Here’s the analysis:





Failure: Consistently failing to deliver to the Product Owner all of the user stories planned during the iteration planning meeting.


  1. Why are user stories being descoped towards the end of each iteration and not being delivered to the Product Owner? Because we run out of time.

  2. Why do you run out of time? Because most of the user stories take longer than we estimated.

  3. Why do most of the user stories take longer than your estimates? Because most of our estimates are bad.

  4. Why are most of your estimates bad? Because we don’t fully understand enough of the details of a user story when we estimate. And although we triangulate to completed user stories, the task effort recorded for those completed stories differs significantly even though they have the same story points. (The tracking data showed that user stories with 5 story points had tasks with a total recorded effort between 2 and 4 ideal days).

     [2 problems identified here]

  5. Why don’t you fully understand enough of the details of a user story? Because we’re not collaborating effectively with the customer during iteration planning.

  6. Why aren’t you collaborating effectively with the customer during iteration planning? Because most of the story cards are a mess of notes, so we get the customer to read them to us.

     [Root cause identified]

  7. Why is the tolerance on recorded effort so wide for user stories with the same story point value? Because we’re not revising the story point estimates.

  8. Why aren’t you revising story point estimates? Because we focus on tracking the tasks in ideal days.

     [Another root cause identified]



To address the 2 root causes, the following fixes were applied in
the next iteration:


  • Encourage collaboration by using just a story name on the card (a technique suggested to Brian Marick by Rachel Davies). The customer rewrote the remaining story cards.
  • At the end of the iteration planning meeting, have each team member verbally state their commitment to deliver the planned user stories to the Product Owner and the other team members. This made the developers spend sufficient time with the customer beforehand, discussing the details of the user stories to ensure they understood what was required before providing estimates.
  • Start using ideal pair hours to estimate user stories and to record velocity, rather than story points (a minimal sketch of this bookkeeping follows this list). It seemed nobody really liked story points: since there was some confusion about what they really were or meant, the developers were never entirely confident about their estimates. The customer was happy to see time come back, although the concept of ideal time had to be explained.
  • Stop tracking tasks and start tracking running tested features.
  • As part of the collaboration between the customer and the developers, split the user stories being planned for the iteration so that they would take between 1 and 2 days to complete. Smaller units of work are easier to estimate.
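
To make the switch to ideal pair hours concrete: the bookkeeping amounts to summing the estimates of the stories actually delivered in an iteration and using that total to plan the next one. A minimal sketch, with invented story names and numbers:

    import java.util.List;

    // Velocity tracked in ideal pair hours rather than story points.
    // Only stories actually delivered to the Product Owner count;
    // descoped stories contribute nothing to velocity.
    public class Velocity {

        record Story(String name, double idealPairHours, boolean delivered) {}

        // Velocity for one iteration: the sum of estimates of delivered stories.
        static double forIteration(List<Story> stories) {
            return stories.stream()
                    .filter(Story::delivered)
                    .mapToDouble(Story::idealPairHours)
                    .sum();
        }

        public static void main(String[] args) {
            List<Story> iteration = List.of(
                    new Story("Register new customer", 12.0, true),
                    new Story("Print monthly statement", 8.0, true),
                    new Story("Export statement as PDF", 10.0, false)); // descoped

            // 20.0 ideal pair hours becomes the budget for the next planning meeting.
            System.out.println("Velocity: " + forIteration(iteration));
        }
    }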



It’s been a while since these fixes were applied. They made a difference almost immediately, with fewer user stories being descoped from iterations. Collaboration is increasingly effective. There’s still room to improve the estimates, but the developers’ confidence has increased and now it’s a case of practice, practice, practice.





FIT has been used, but only to produce automated acceptance tests. The next step will be to start using FIT to actually facilitate the collaboration with the customer. The aim is to produce the FIT documents before development starts on the user stories.
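
For context, a FIT document is just an HTML page of tables; each table names a fixture class that the framework runs against the code, so the customer can write the tables before a line of the story is implemented. The sketch below shows the shape of a typical column fixture. The fixture name, field, and business rule are invented for illustration; only fit.ColumnFixture comes from the FIT library itself.

    import fit.ColumnFixture;

    // A FIT column fixture: for each row of the HTML table, FIT sets the
    // public fields (input columns) and compares the values returned by
    // methods (columns ending in "()") against the expected cells.
    // The discount rule here is invented purely for illustration.
    public class DiscountFixture extends ColumnFixture {

        public double orderTotal;   // input column: orderTotal

        // computed column: discount()
        public double discount() {
            return orderTotal >= 1000.0 ? orderTotal * 0.05 : 0.0;
        }
    }

The customer’s table for a fixture like this would have a column per input and a column per computed value, with one row per example they care about, which is exactly the kind of artefact you want agreed before development starts.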




