I finished reading Mike Cohn's book "Agile Estimating and Planning". A good book, though not an exceptional one. I took some good points out of it.
- Project size and project duration are two different concepts and should be measured in different units. Project duration is the project size divided by the project velocity.
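That relationship is simple enough to sketch in a few lines (the function name and the rounding-up choice are mine, not Cohn's):

```python
import math

def project_duration(size_sp: float, velocity: float) -> int:
    """Duration in iterations: project size (story points) divided by
    velocity (story points per iteration), rounded up to whole iterations."""
    return math.ceil(size_sp / velocity)

print(project_duration(90, 30))   # 90 SP at 30 SP/iteration -> 3 iterations
print(project_duration(100, 30))  # 100 SP at 30 SP/iteration -> 4 iterations
```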
- There are two main kinds of projects: time-fixed and functionality-fixed. The former have a fixed deadline; in this case the uncertainty is in the amount of functionality that will be delivered. The latter require a fixed set of functionality; in this case the uncertainty lies in the delivery date.
- Estimate uncertainty. Uncertainty could and should be estimated in a project. While more pre-emptive approaches tend to nail down requirements, forcing new requirements through a painful approval process (gosh, I now realise that I worked on some really non-Agile projects!), Agile projects are open to change and provide the infrastructure to accommodate new requirements in a (relatively) hassle-free way. I liked the 50–90% idea: for stories where the team feels only 50/50 sure about completion, it should estimate the story twice, once optimistically (the 50% case, just as likely to be enough as not) and once for the worst case (the 90% case, almost certainly enough). Then with a simple standard-deviation-style formula (square root of the sum of (90% points − 50% points)²) it is actually possible to estimate uncertainty. What's cool about this idea is that estimating uncertainty is different from what Mike calls padding, i.e. the habit of over-estimating just because "one never knows". It's a more empirical approach. Applying the 50–90% estimation, you sum all the 50% values and add the uncertainty buffer at the end. This gives two different estimates: the 50% figure I would use to track progress with the development team; the buffered estimate to track and publish progress with management.
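The buffer calculation above can be sketched in a few lines. The story estimates here are made-up numbers purely for illustration:

```python
import math

def buffered_estimate(stories):
    """stories: list of (p50, p90) story-point estimate pairs.
    Returns (sum of 50% estimates, uncertainty buffer, buffered total)."""
    base = sum(p50 for p50, _ in stories)
    # Buffer = sqrt of the sum of squared (90% - 50%) gaps
    buffer = math.sqrt(sum((p90 - p50) ** 2 for p50, p90 in stories))
    return base, buffer, base + buffer

# Hypothetical backlog: three stories, each with a 50% and a 90% estimate
stories = [(3, 5), (5, 8), (8, 13)]
base, buffer, total = buffered_estimate(stories)
print(base, round(buffer, 2), round(total, 2))  # 16 6.16 22.16
```

Note that the buffer (about 6 SP) is smaller than simply summing all the 90% estimates (26 SP) — which is exactly what distinguishes this approach from padding.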
- Prioritise the project features. This is actually the product owner's job, but it's a key point in Agile projects: involve the key stakeholders (those who pay the bills) throughout the project lifecycle, from release planning to feature testing, from approval to requirements prioritisation.
- There is no "me"; there is only "the team". Either the team delivers or it doesn't. Individual-centred metrics are disruptive because they break the project both from the inside and from the outside.
- Replan frequently. Start with a rough project estimate, then replan at the end of every iteration (or every few iterations) and adjust your estimates accordingly.
In Mike's book there are also a few ideas I don't share. For instance, Mike suggests that measuring the actual duration of a story against its estimate is not a good thing, because it risks introducing "estimation apprehension". My experience is different. Measuring how long a task actually took compared to how long it was estimated to take gives a good measure of the accuracy of our estimates, and therefore of whether the project is still on track. In my experience, discussing this openly during a retrospective also helps with the next iteration's estimates; a bit of fear is useful at times: it pushes us to our limits and to deliver more accurate results.
Another point I don't share with Mike is about estimating velocity at the outset of a project. One of the ways he suggests is to use the first three iterations (when possible) as the basis for estimating the velocity of subsequent iterations. My experience suggests that the outset of a project is very different from its later phases. At the beginning, a team usually has no knowledge of the product it is going to build; if the technology is new, velocity will be far slower than in later iterations. It's true that Mike says these factors should be taken into account in future releases, but basing velocity on the first three iterations seems to me a bit too coarse.
The point on velocity actually helped me develop an idea: what if, instead of a single velocity, we used different velocities for different task types? In my experience a project typically contains different types of tasks:
- Infrastructural tasks: interfacing with an external system, continuous integration, setting up the running environment, automatic deployments, nightly builds, database work, etc.
- Development tasks: These are the usual development activities that we all know and love (or hate)
- Analysis tasks: in Agile these are sometimes called spikes; they are tasks aimed at clarifying requirements.
Why not keep a history of these different velocities and use that data to calculate the next iteration's velocity more accurately? For example, say that over the past three iterations the total number of story points brought to completion ("Done" in Agile parlance) was 90. This gives an average velocity of 30 SP/iteration. We also observed the following:
- There were 9 Infrastructural tasks which required 50 SP (50/3 = ~17 SP/iteration on average)
- There were 21 Development tasks which required 30 SP (30/3 = 10 SP/iteration on average)
- There were 3 Analysis tasks which required 10 SP (10/3 = ~3 SP/iteration on average)
Calculating the average SP per task for each task type over the past three iterations gives the following figures:
- 1 infrastructural task costs on average 50 / 9 = 5.56 SP (2dp)
- 1 development task costs on average 30 / 21 = 1.43 SP (2dp)
- 1 analysis task costs on average 10 / 3 = 3.33 SP (2dp)
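These figures fall straight out of the observed history; a minimal sketch (computing each type's unitary cost directly as total SP over total tasks, which avoids the rounding introduced by the intermediate per-iteration averages):

```python
# Observed history over the past three iterations (numbers from the example above)
history = {
    "infrastructural": {"tasks": 9,  "sp": 50},
    "development":     {"tasks": 21, "sp": 30},
    "analysis":        {"tasks": 3,  "sp": 10},
}

# Average story-point cost of one task of each type
sp_per_task = {t: d["sp"] / d["tasks"] for t, d in history.items()}

for task_type, avg in sp_per_task.items():
    print(f"{task_type}: {avg:.2f} SP per task")
# infrastructural: 5.56 SP per task
# development: 1.43 SP per task
# analysis: 3.33 SP per task
```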
What does this tell us? First, it appears clear that infrastructural tasks have a much greater impact on an iteration than analysis tasks, which in turn have a much greater impact than development tasks. The figures also tell us that developing features is relatively cheap and that the greatest risks probably come from infrastructural tasks (hmm, what a coincidence... does that ring any bells?). When planning the next iteration, we examine the task types and find the estimated velocity by multiplying the number of tasks of each type by its unitary velocity. As the project progresses, we then adjust the per-type velocity factors based on the velocity of previous iterations. In my experience this kind of observation is far more accurate than a single measure that mixes apples and oranges, because different task types usually have different velocities.
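The forecasting step is then a simple weighted sum. The planned task counts below are hypothetical, invented only to show the mechanics:

```python
# Unitary costs observed over the past three iterations (SP per task)
sp_per_task = {"infrastructural": 50 / 9, "development": 30 / 21, "analysis": 10 / 3}

# Hypothetical plan for the next iteration: how many tasks of each type
planned = {"infrastructural": 2, "development": 10, "analysis": 1}

# Expected SP for the iteration: count of each type times its unitary cost
forecast = sum(planned[t] * sp_per_task[t] for t in planned)
print(round(forecast, 1))  # 28.7
```

If the iteration then completes a different amount, the observed totals feed back into `sp_per_task` and the per-type factors self-correct over time.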
Happy technology to everyone!
Marco