Iteration cycles (sprints) are designed to produce working, shippable software
Agile is made up of a number of approaches for managing work. Some have been less successful than others (story points, for one), but the one that has worked really well is iterations.
An iteration is a set period of time during which work is planned, completed, tested and ready to deliver. Whether or not you actually ship during that period is immaterial - the point is that the software is ready to go.
This also works well with budget constraints. If the budget changes midway through a project, you can still deliver a working product. If you're a consulting company, this can go a long way toward building confidence in the company as a whole. While you couldn't deliver the entire solution (for reasons that weren't your fault), you were able to deliver critical pieces of it.
Now the question is: what do you work on?
In traditional waterfall, or even MSF, the first period is always spent designing and building a series of specs for the entire piece of software. But that means you don't have a deliverable product at the end of each iteration. Each iteration needs its own set of features and goals.
We played with a few ideas:
Getting the customer focused on features that can be delivered in those iterations. The challenge here is that internal customers (such as departments or groups within a larger company) may not be ready for iterative development. They expect a traditional approach of delivering the finished product. That doesn't mean they aren't receptive to the idea, but it can take time and discipline to get them fully committed.
That process can be frustrating, but over time, once you're able to prove its success, buy-in will come.
Another idea was to switch focus regularly to ensure changing customer needs could be met. This allowed us to give the business analysts, who weren't fully on board with iterations, separate deadlines for pieces of functionality. If the customer decided midway through the project that one thing was more important than another, we could easily switch streams. In this manner, we delivered one set of functionality during each iteration: while the customer waited for the complete solution, we treated development iteratively internally while maintaining the outward appearance others expected.
The above ideas deal with the customer. But customers are only one part of the equation. Getting the team committed to the approach can also take time.
How long is the iteration? Some dictate the two-week iteration; some stretch it out over a month or two; some go crazy with a one-week iteration. I find issues with each:
a) no time for testing. Say what you want about unit testing - integration testing HAS to be part of the iteration because otherwise, you don't have shippable software.
b) high stress. Expecting high turnarounds every week or two weeks can quickly create burn-out.
c) bad habits creep back in. By stretching an iteration over a longer period, it becomes easier to make excuses for tasks that take too long, or to push things out.
We settled on a three-week iteration, and the entire process became:
I used to think we could get the backlog identified in one meeting. My vision was that everyone would read up on the outstanding work and show up on day one of the iteration ready to figure out how to get it done. This never happened. Instead, the most common response was "I didn't have time."
Everything in an iteration is supposed to be time-boxed, and the usual guidance for backlog meetings is 4-6 hours.
We spend three days of the first week on the backlog, typically about two hours a day. This allows regular intervals of reading and then deciding. During this time, the user stories and underlying tasks should be fleshed out to the point that, if anyone were to leave the team, the remaining members could at least explain the goals of the iteration.
It also allows us to identify the tasks and how much time things should take. I have only one constraint:
tasks should be doable within 2 days
I've tried one-day tasks, but it never works in large organizations: there are simply too many distractions. Two days, though, gives a person at least five hours and at best fourteen (given a full seven-hour day) to get a job done.
Even with that time, each task is dedicated to a single purpose, be it research, interface work, back-end coding, whatever.
Then at the end of the last day, I ask:
"Pick the things you will commit to doing by next week..."
The next part is a bit more specific:
"...but only choose one thing per user story"
So if you have a team of three, three user stories to work on and three tasks per user story, then each person would choose one thing from each user story.
This comes with another caveat as well: only choose things that you can do in the time you have available. Everyone's schedule is different, so if someone only has five hours available, they aren't going to choose a task estimated at ten hours.
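As a rough sketch, the selection rule (at most one task per user story, and only tasks that fit your remaining hours) could be expressed in code. All story names, task names, estimates, and hours below are made up for illustration; in practice people pick for themselves rather than having a script assign them.

```python
# Hypothetical sketch of the selection rule: each person claims at most one
# task per user story, and only tasks that fit their remaining hours.
# All names and estimates are illustrative, not from a real iteration.

stories = {
    "login": [("research auth options", 6), ("build login form", 10)],
    "reports": [("design report layout", 4), ("wire up report data", 12)],
    "export": [("CSV export", 5), ("PDF export", 14)],
}

team = {"alice": 14, "bob": 10, "carol": 5}  # hours available this period

claimed = set()
assignments = {person: [] for person in team}

for person, hours in team.items():
    remaining = hours
    for story, tasks in stories.items():
        for task, estimate in tasks:
            if task not in claimed and estimate <= remaining:
                assignments[person].append((story, task, estimate))
                claimed.add(task)
                remaining -= estimate
                break  # at most one task per user story

for person, picks in assignments.items():
    print(person, picks)
```

This is a greedy first-fit pass, not an optimal allocation; the point is only that no task is claimed twice, no one signs up for more hours than they have, and no one takes two tasks from the same story.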
This breaks the three-week iteration into a few distinct periods:
a) Three day backlog (Monday to Wednesday)
b) Work on stuff (Thursday to Wednesday)
c) Review (Wednesday)
d) Work on stuff (Thursday to Wednesday)
e) Review (Wednesday)
f) Exclusive test period and bug fixes (final week)
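To make the cadence concrete, here is a small sketch that maps the breakdown above onto calendar dates. It assumes the iteration opens on a Monday; the phase labels are mine, and (as in the breakdown above) the second work block runs into the start of the test week.

```python
from datetime import date, timedelta

def iteration_phases(start: date) -> dict:
    """Map the three-week breakdown onto concrete dates.

    Assumes `start` is the Monday that opens the iteration.
    Each phase maps to an inclusive (first_day, last_day) pair.
    """
    day = lambda n: start + timedelta(days=n)
    return {
        "backlog": (day(0), day(2)),               # Mon-Wed, week 1
        "work block 1": (day(3), day(9)),          # Thu week 1 to Wed week 2
        "review 1": (day(9), day(9)),              # Wed, week 2
        "work block 2": (day(10), day(16)),        # Thu week 2 to Wed week 3
        "review 2": (day(16), day(16)),            # Wed, week 3
        "test and bug fixes": (day(14), day(18)),  # final week, Mon-Fri
    }

phases = iteration_phases(date(2024, 1, 1))  # 1 Jan 2024 is a Monday
for name, (first, last) in phases.items():
    print(f"{name}: {first} to {last}")
```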
Since each person is working on a single task within a user story, it also lets us identify things that simply grow too big and move them off to another iteration.
Next post, I'll talk about the scope of each user story.