AGILE IN ACTION

Tag: extreme-programming

Sunday, July 8, 2012

Knowledge nuggets from Kent Beck

Posted by Simon Baker
From the @energizedwork tweet stream with the hashtag #rediscoveringkent:
Read more...

Friday, February 3, 2012

State of Agile survey for 2011 tells a familiar story

Posted by Simon Baker

One particular chart in the State of Agile survey for 2011 tells a familiar story. Have a look.

Read more...

Wednesday, November 2, 2011

Using Spikes

Posted by Gus Power
The XP practice of 'spiking' has been around for a while but there isn't a whole lot written about it.
Read more...

Saturday, September 8, 2007

Don't do Scrum without XP

Posted by Simon Baker
I've been doing XP since 2000 and Scrum since 2004. I've never done Scrum without XP and, these days, I don't think of them separately anymore. I guess over the years they've merged into one for me and matured into my own concoction of principles and practices, still largely based on the Manifesto, enhanced by lean thinking, and extended with my own bag of tricks devised through tough commercial experience.

I have to agree with Jeremy Miller: Scrum is fine, but don't leave the XP practices at home. Actually, I think Scrum is great but, to be honest, I'd feel very nervous doing Scrum without the XP practices because I care about software. In many teams, doing Scrum without the XP practices would just produce crap code more effectively. If you want to do Scrum, I strongly recommend that you do the XP practices too.

I do think Scrum's 30-day Sprint duration is too long. In my experience, I always saw Parkinson's Law and Student Syndrome set in during the 30 days. If you're new to iterative development, by all means start with monthly iterations, but make it a top priority to achieve weekly iterations (as used in XP). If you're using weekly iterations but it's not possible to 'ship' working software to your production environment every week, try using Scrum's monthly cycle as a release cycle containing four 1-week iterations. Obviously it's preferable not to queue the output of iterations, but the queue is manageable at 4 weeks' worth of working software, and releasing monthly drums out a release rhythm and allows you to establish at least some incremental flow of valuable, marketable features to customers. This is better than releasing sporadically based on marketing dates and having to use much larger queues while delivering zero value to customers for longer periods of time.

Friday, December 29, 2006

Bit by bit or all at once?

Posted by Simon Baker

From a very early age, we are taught to break apart problems. When we try to 'see the big picture', we try to reassemble the pieces in our minds, but this is like trying to reassemble the fragments of a broken mirror to see a true reflection. After a while we give up trying to see the whole altogether.

Read more...

Saturday, December 9, 2006

On average

Posted by Simon Baker
Here's the content of a presentation I recently put together based on some averages derived from how we've been working.
Read more...

Friday, February 10, 2006

Ten-minute build, continuous integration and developer rhythm

Ten-minute build

It's worth developing an automated, reliable and reproducible build for your project that builds the system and runs all the tests. It should be run many times a day, initiated by a schedule or in response to some asynchronous event such as checking source code into the repository. You should also be able to initiate the build on demand, e.g. from the command line or from within an IDE. The build needs to be fast if it's going to run frequently. Therefore, you need to invest in continual improvement and optimisation to maintain a ten-minute build cycle. Any longer than ten minutes and the build won't be used as often and won't provide as much feedback.

Continuous integration

Traditional integration is a big-bang affair, highly unpredictable and typically fraught with problems. A build that runs frequently allows developers to integrate and test their changes often, perhaps every 2 to 3 hours. There's no rule of thumb here; it depends on your code and how you've broken the functionality down into chunks to be developed, but the longer you wait to integrate, the more risky and unpredictable integration becomes. By integrating often, integration is broken down into many small integrations that are performed as part of the test-code-refactor-integrate cycle. The integration nightmare goes away and the number of integration problems is reduced.

A continuous integration tool like CruiseControl can be configured to start the build asynchronously, triggered by a check-in of source code to the repository. If problems are encountered during the build, which includes running all the tests, the developers are automatically notified by email, RSS, or a text message (also take a look at some alternative extreme feedback devices). The rule here is to react immediately and fix the build. A broken build should not be tolerated.
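The on-demand build described above can be sketched as a small script. This is a minimal illustration, not a real CI tool: the build steps are hypothetical placeholders (`echo` commands) that you would replace with your project's actual compile and test commands.

```python
import subprocess
import time

# Hypothetical stand-in steps; substitute your project's real build
# and test commands, e.g. ["ant", "compile"] and ["ant", "test"].
BUILD_STEPS = [
    ["echo", "compiling..."],
    ["echo", "running all tests"],
]

def run_build(steps=BUILD_STEPS):
    """Run each build step in order, stopping at the first failure."""
    start = time.monotonic()
    for step in steps:
        result = subprocess.run(step, capture_output=True, text=True)
        if result.returncode != 0:
            # A failed step means a broken build: react immediately.
            return {"ok": False, "failed_step": step,
                    "elapsed": time.monotonic() - start}
    elapsed = time.monotonic() - start
    # The whole cycle should stay under ten minutes to keep feedback fast.
    return {"ok": True, "elapsed": elapsed,
            "within_ten_minutes": elapsed < 600}
```

The same function can be invoked from a scheduler, from a check-in trigger, or by hand from the command line, which is the point: one reproducible build, many ways to start it.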
Developer rhythm

A ten-minute build automated with continuous integration helps developers establish a rhythm as they develop software. This rhythm reminds developers to integrate regularly. Integration should happen at least once a day, and only working, tested code should be checked in. Source code should not be checked out for days on end, and broken code should never be checked in.

The following diagram shows the steps a developer performs in the test-code-refactor-integrate cycle.

[Diagram: developer rhythm. Originally uploaded by sjb140470]

I always do a local clean build before I commit any changes to the repository. With a ten-minute build I don't have to wait long before I know whether I can proceed with the check-in. This time gives me a chance to come up for air, grab a drink, or reflect with my pair-programming partner on the work we just completed. If you don't do a local build, you'll know about any integration errors as soon as the continuous integration build runs. You can then assess the problem and decide whether a quick fix can be found or whether the changes need to be backed out of the repository, so that the build works once again.

Continuous integration tools: CruiseControl, Beetlejuice, Continuum
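The "local clean build before commit" habit amounts to a simple gate. The sketch below assumes a `local_build` callable wrapping your real build command (a hypothetical name, not part of any tool); the rule it encodes is the one above: commit only on green.

```python
def pre_commit_check(local_build):
    """Gate a check-in on a passing local clean build.

    local_build: a callable returning True on success, e.g. a wrapper
    around the ten-minute build command. Only check in when it passes.
    """
    if local_build():
        return "commit"         # safe to check in to the repository
    return "fix the build"      # never check in broken code

# Stand-in builds illustrating the two outcomes:
print(pre_commit_check(lambda: True))   # commit
print(pre_commit_check(lambda: False))  # fix the build
```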
Comments: 1

User stories part 3: Using spikes to help estimate user stories

Posted by Simon Baker
Read our latest thinking on spikes.
Read more...

Thursday, February 9, 2006

User stories part 2: Adaptive planning

Posted by Simon Baker
Planning in detail too far into the future can be wasteful because changes will inevitably happen and they can't be predicted. The horizon of predictability marks the point where predictability gives way to uncertainty. This horizon is the duration of an iteration. It's safe to plan in detail up to the horizon, but beyond it you should plan with a decreasing level of detail and precision. Adaptive planning defines a high-level plan or roadmap that contains the user stories to be developed over a distance such as a release, and creates a detailed plan for the next iteration only.

User stories are well suited to adaptive planning because they're easy to use with different levels of precision. The figure below (adapted from James Shore's Beyond Story Cards article) illustrates this effect.

[Figure: user story cone. Originally uploaded by sjb140470]

Beyond the current release, user stories are typically epics that focus on broad or vague features. During release planning, the selected user stories may be split into smaller user stories that focus on narrower features. However, the purpose of release planning is to quickly establish a sense of how big a release might be. It's not necessary to split the user stories too far; you can tolerate a larger inaccuracy in their estimates because changes will occur over the period of the release. As the user stories approach the next iteration, they should be split further to focus on progressively smaller and more specific functionality. As the user stories become smaller, they become easier to estimate and their estimates become more accurate. As a rule of thumb, by the time the user stories are planned into the next iteration you want them to take between 1 and 2 days to complete.

During an iteration, it's difficult to start developing the software for user stories when you only know their names. Recall the conversation element of a user story.
The developers and the customer need to collaborate and talk about the details of a user story at the last responsible moment, i.e. when the details become important. Typically this collaboration to reveal the fine details of the user stories begins in the iteration planning meeting and facilitates a translation of the user stories into acceptance tests. As part of the collaboration, it's important that the developers understand the benefits of, and the motivation for, each user story. Rachel Davies suggests that the developers use a simple checklist:

1. Do we understand the value to the business that this user story provides?
2. Do we know who the beneficiary of the user story is?
3. Have we defined all the acceptance tests?

Next post in this series: User stories part 3: Using spikes to help estimate user stories
Previous posts in this series: User stories part 1: What is a user story and who writes them?

References:
[1] Mike Cohn's Agile Estimating and Planning
[2] Kent Beck and Martin Fowler's Planning Extreme Programming
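The "detailed plan for the next iteration only" idea can be illustrated with a tiny planning sketch. The story names and numbers here are invented for illustration; it assumes stories are held in priority order with day estimates already split down to the 1-2 day rule of thumb, and fills the iteration greedily up to capacity.

```python
def plan_next_iteration(stories, capacity_days):
    """Fill the next iteration with the highest-priority stories that fit.

    stories: list of (name, estimate_days) tuples in priority order;
    estimates should already be split down to roughly 1-2 days each.
    capacity_days: how much work the team expects to complete.
    """
    planned = []
    remaining = capacity_days
    for name, estimate in stories:
        if estimate <= remaining:
            planned.append(name)
            remaining -= estimate
    return planned

# Hypothetical backlog, highest priority first:
backlog = [("LOGIN", 2), ("SEARCH", 1), ("EXPORT", 2), ("REPORTS", 5)]
print(plan_next_iteration(backlog, capacity_days=5))
# ['LOGIN', 'SEARCH', 'EXPORT'] -- REPORTS stays in the backlog,
# to be split or planned with less precision beyond the horizon.
```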
Comments: 2

Wednesday, February 8, 2006

User stories part 1: What is a user story and who writes them?

Posted by Simon Baker
What's a user story?

A user story is a distinct unit of customer-visible functionality, which does not have to represent value to the business but must represent progress to the customer. It's meaningful to both the customer and the developers. Ron Jeffries uses 3 Cs to describe a user story:

1. Card: The name of the user story (no more than a word or two) used to facilitate collaboration between the customer and the developers.
2. Conversation: Collaboration and discussions that drill down into the details of the desired functionality.
3. Confirmation: Acceptance tests capturing the details and used to determine when a user story is complete.

Think of user stories as representing a collaboration between the customer and the developers, as opposed to documenting customer requirements. The purpose of this collaboration is to reveal and understand the details of the user stories, which are recorded in the confirmation.

Some people write a short description of the user story on the index card, while others like a statement of the format: [Role] can [capability], so that [benefit]. I prefer Rachel Davies' suggestion, and put just the name of the user story on the index card. A verbose story card encourages people to think of it as a requirements document. A simple name, written in large capital letters, encourages collaboration that continues throughout the iteration.

William Wake advises us to INVEST in good user stories:

1. Independent: Dependencies between user stories should be avoided because they can lead to prioritisation and planning difficulties.
2. Negotiable: User stories are reminders to collaborate. Collaboration between the customer and the developers involves a negotiation to balance the desired functionality with the cost of implementing it.
3. Valuable to the customer: Whether a user story represents value to the business or simply conveys progress, it must inherently be valuable to the customer.
This enables the customer to intelligently prioritise user stories. Avoid user stories that have only technical value.
4. Estimatable: There are three reasons why user stories may not be estimatable:
- Developers lack domain knowledge. In this situation, collaborating with the customer will help them understand a user story.
- Developers do not possess the right technical knowledge. They should split the user story into a spike to gather information, and a user story to do the real work. A spike is an experiment that allows the developers to learn just enough about the technology to be able to estimate the user story. A spike must be time-boxed; this defines the maximum time that will be spent learning and fixes the estimate for the spike.
- A user story is too big. It should be split into smaller user stories during collaboration between the customer and developers. When development starts on a user story, it should take between one and two days to complete (including acceptance tests).
5. Small: If user stories are too big, they are difficult to estimate and cannot be planned into a single iteration.
6. Testable: User stories must be testable. A user story that passes all its acceptance tests (and all its unit tests) can be considered complete (subject to a final visual approval by the customer). Without this capability, developers will not know when they're done.

Who writes the user stories?

The customer writes the user stories because she's in the best position to know the desired functionality. Each user story must be written in the language of the business. This enables the customer to prioritise the user stories according to their value to the business and their cost, and select the user stories for each iteration. The developers can assist the customer, but the responsibility for writing stories must reside with the customer.
Next post in this series: User stories part 2: Adaptive planning

References:
[1] Mike Cohn's User Stories Applied For Agile Software Development
[2] Kent Beck's Extreme Programming Explained, Second Edition
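The 3 Cs and the testable/complete rule can be sketched as a small data structure. This is a hypothetical illustration, not anything from the references above: the card is just a name, the conversation records acceptance tests, and the confirmation is that every test passes.

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """Sketch of the 3 Cs: card (a short name), conversation (which
    fills in acceptance tests), confirmation (all tests passing)."""
    card: str                        # e.g. "CHECKOUT", in large capitals
    acceptance_tests: dict = field(default_factory=dict)  # name -> passed?

    def record_test(self, name, passed=False):
        # Acceptance tests are captured during the conversation.
        self.acceptance_tests[name] = passed

    def is_complete(self):
        # A story with no tests is not testable, so never complete;
        # final visual approval by the customer still applies.
        return bool(self.acceptance_tests) and all(self.acceptance_tests.values())

story = UserStory("CHECKOUT")
story.record_test("pays by card", passed=True)
story.record_test("emails receipt", passed=False)
print(story.is_complete())  # False until every acceptance test passes
```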
Comments: 2