Using Spikes

1st November 2011
Gus Power

The XP practice of ‘spiking’ has been around for a while but there isn’t a whole lot written about it.

Why Spike?

The XP site states that the purpose of a spike is “to figure out answers to tough technical or design problems”. Spikes are considered useful when a more accurate time (and cost) estimate is required for an upcoming piece of work. Unsurprisingly, the practice of spiking is frequently viewed as a method for improving planning accuracy, but it has more to offer:

An aid to design

Design sessions can be difficult to keep focused. Different dimensions – functionality, performance, available resources and system constraints – need to be considered, and everyone brings different experience and a different point of view. Spiking can provide data useful for evaluating and comparing different options, helping the team make more informed choices.

Expand knowledge

Pulling in information from the outside world, searching for similar problems and the solutions used to tackle them exposes the team to new ways of thinking, new techniques and new technologies.

Generate ideas

Often the effort involved in figuring out potential solutions generates new understanding and insight that can be applied to other areas of how we work and the systems we create.

Prevent stalling

Spiking can be used to invest time and thought into developing countermeasures to known upcoming issues so that they can be tackled in a controlled way.

Safely cover more ground

Explore the problem space and try out different possible solutions without the risk of disrupting the work of others.

Avoid expensive mistakes

“Costs are a result of the designs deployed to meet certain requirements.” Well-placed spikes can help a team avoid costly, inappropriate solutions that looked good in theory but turned out to be less than ideal in practice.


There are a number of signs that indicate when running a spike might be valuable:

  • The team finds it difficult to define clearly or agree on a suitable design or approach, and discussion drags on without consensus.
  • People are reluctant or unwilling to estimate a piece of work, inflate their estimates, or give estimates with a large range.
  • The system is about to move into a new area and/or a substantial change is forecast.
  • An issue surfaces that stems from a limitation in the current design.
  • A team member is convinced there is a better way of doing something but the wider team has not bought in.
  • Discussion is vague, generalizations abound and the conversation lacks data or concrete examples.

Ideally we would like to reserve a certain amount of time or capacity every month to run regular spikes.


A spike is a small experiment, so the Plan-Do-Check-Act (PDCA) cycle fits it naturally.


Plan

Define what you want the spike to achieve. The spike may be intended to prove or disprove the feasibility of a specific solution, or to generate some potential solutions for further discussion. We capture this information using our standard story card format, with testable acceptance criteria (i.e. quantifiable measures) written on the back. Any preparation, such as reserving space or getting hold of specific hardware or software, should be done beforehand.
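As a sketch, a spike card might look like the following. The wording and the deployment question are illustrative, not taken from a real card:

```
Front: SPIKE – Zero-downtime deploys
       Can we deploy a new release without losing active transactions?

Back:  Acceptance criteria
       [ ] Deploy performed against a running sample app under load
       [ ] Dropped transactions during the deploy counted (target: 0)
       [ ] Findings written up and demoed at the brown bag session
```

Note that the criteria are quantifiable observations, not tasks – they tell you when the spike has answered its question.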


Do

For us a spike gets prioritized and played like any other user story. The programming pair working on the spike often creates a sample project or a separate branch to work in. None of the code written is committed back to the main project.
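One lightweight way to get that isolation is a throwaway branch that is deleted once the findings have been captured. This is a sketch, not a prescription – the repository and branch names here are hypothetical:

```shell
# Set up a scratch repository for the demo; in practice you would branch
# your existing project instead.
git init -q -b main spike-demo
cd spike-demo
git config user.email "dev@example.com"   # local identity for the demo repo only
git config user.name  "Dev"
git commit --allow-empty -q -m "baseline"

# Throwaway branch for the experiment – commit freely for your own reference.
git checkout -q -b spike/zero-downtime-deploy

# ...hack on the spike here...

git checkout -q main                          # back to the mainline when done
git branch -q -D spike/zero-downtime-deploy   # discard the code; keep the findings
```

Deleting the branch makes the point of the practice explicit: the output of a spike is knowledge, not code.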


Check

The most important part of a spike is the follow-up. If the knowledge gained is lost or not transferred then we are destined to repeat the same or similar experiments again and again. A “brown bag” session works well – it helps to have a spare whiteboard and projector available so that people can see the detail. Knowledge capture is difficult – documents become stale, wikis get messy, policy gets ignored, notes on walls become invisible. Strive to make the output of the spike tangible so that others can recreate it and review it for themselves (e.g. a sample project in source control with build instructions).


Act

On the back of the spike the team should define the next actions to be taken. If the solution identified is deemed acceptable then the related work item can be defined and prioritized. Perhaps the spike has spun out other ideas worth testing, or potential improvements to be implemented.

Common Snags

As with all practices and methods there are pitfalls. Here are some things to look out for:

Avoid Big Design Up Front (BDUF)

Spiking should be used to support the emergent design of your software by helping you generate designs that fulfill your customers’ evolving needs without unnecessary complication and overhead. It’s not an opportunity to take over the world or tell your team how it should be. It’s an experimental process that needs peer review.

Don’t use the code in production

No seriously, don’t. The code produced the first time a problem is solved is never the cleanest. Take the opportunity to test-drive it into your production system and learn from doing it again, kata-style. Remember that code is read more than it is written.

Don’t invest too much

As a rule of thumb we invest less than a day in a spike. Going too wide, or not defining clearly enough what outcomes are expected, wastes time and effort. Don’t spend days trying to get a solution you hoped would work up and running. If you run into a wall, back off and try another approach.

Don’t invest too little

If the key question hasn’t been answered – for example, ‘will solution x allow us to deploy without losing active transactions?’ – then it makes sense to keep going until enough information is found. If there are multiple questions to be answered then maybe the spike can be split into smaller spikes.

Watch out for confirmation bias

We all have our favourite technologies and approaches, solutions that appeal to us and our reasoning, solutions we’ve already conjured up in our head. Be careful not to silently discard the negative aspects of your preferred solution – your team certainly won’t! It’s important that your team members are able to draw their own conclusions from the information gathered and come to consensus on the way forward.

Make sure testability and automation are part of your criteria

Is the solution proposed by the spike testable? Can we test it locally? Easily? Does it increase environmental or deployment complexity? Can we automate it? Testability is an aspect that is often overlooked, especially where third-party solutions are concerned, but it has a direct influence on quality and cycle time. Similarly, automation impacts operational overheads and other system properties such as reliability and reproducibility.

Look for suitability rather than coolness

When given an opportunity to do some ‘green hat’ thinking it is tempting to go a bit off piste and find some cool and cutting edge technologies that would look great on all our CVs. Stay focused on solving the problem at hand for your customer first. Remember that every technology introduction or change will need to be mastered by your team for you to be effective.


Spiking can be used to evaluate a specific option, evaluate a range of options, or generate new options to pursue. We’ve run spikes across a wide range of scenarios, such as determining how difficult it is to integrate with other systems (e.g. payment providers), figuring out what’s required to handle upcoming changes (e.g. Facebook Connect -> OAuth), and evaluating potential solutions (e.g. Linux containers). A spike provides an opportunity to step back from day-to-day feature work and generate options for solving technical and design issues in a controlled, experimental way.

Spike mindmap

