Automated estimates
It's time to make estimating the machine's problem.
Instead of asking people to guesstimate how long something will take to finish, Socratic automatically generates a personalized average duration for every task, based on historical actuals.
For each task in Socratic, you just choose whether you think its effort is Average, Less, or More. Think of this as simply indicating how this task compares to others: what is it most like?
With your effort set, Socratic shows a total projected time to complete the task.
When assigning a projected time, Socratic uses the richest historical actuals available (see the sketch after this list). So:

1. We look first to the historical actuals for the assignee in that workstream. If the assignee hasn't done previous work in the workstream, we fall back to...
2. Historical actuals for the workstream. If it's a new workstream with no historical average, we fall back to...
3. The assignee's historical actuals across other workstreams. If there are no personal historical actuals, we fall back to...
4. System defaults for each phase. These are:
   - Less: 0.5 days
   - Average: 1 day
   - More: 2 days
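To make the cascade concrete, here's a minimal Python sketch. Everything in it is illustrative: the function and variable names are hypothetical, and the idea that Less and More scale a historical average by the same 0.5× and 2× ratios as the defaults is our assumption, not documented behavior.

```python
from statistics import mean

# System defaults (days) used when no historical actuals exist.
DEFAULTS = {"Less": 0.5, "Average": 1.0, "More": 2.0}

# Assumed ratios relative to the historical average, mirroring the defaults.
EFFORT_MULTIPLIER = {"Less": 0.5, "Average": 1.0, "More": 2.0}

MIN_SAMPLE = 10  # minimum completed tasks needed for a usable average


def historical_average(durations: list[float]) -> float | None:
    """Return the mean duration, or None if the sample is too small."""
    return mean(durations) if len(durations) >= MIN_SAMPLE else None


def projected_time(effort: str,
                   assignee_in_workstream: list[float],
                   workstream: list[float],
                   assignee_elsewhere: list[float]) -> float:
    """Walk the fallback chain from richest to most generic data."""
    for durations in (assignee_in_workstream,   # 1. assignee in this workstream
                      workstream,               # 2. the workstream overall
                      assignee_elsewhere):      # 3. assignee across other workstreams
        avg = historical_average(durations)
        if avg is not None:
            return avg * EFFORT_MULTIPLIER[effort]
    return DEFAULTS[effort]                     # 4. system defaults
```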
If a task's assignee changes as the task moves across phases (say, from a developer in a "Doing" phase to a tester in an "In Review" phase), we apply the time in each phase to the respective assignee.
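Here's one way that per-phase attribution could look in code; the record shape and field names are hypothetical.

```python
from collections import defaultdict

# Illustrative phase history for one task: each phase carries its own
# assignee and elapsed time.
phases = [
    {"phase": "Doing",     "assignee": "dev@example.com",    "days": 3.0},
    {"phase": "In Review", "assignee": "tester@example.com", "days": 1.5},
]

actuals_by_assignee: dict[str, list[float]] = defaultdict(list)
for p in phases:
    # Each phase's elapsed time feeds that assignee's historical actuals.
    actuals_by_assignee[p["assignee"]].append(p["days"])
```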
Note: when a task is blocked, its blocked time is excluded from the historical actuals used to create personalized efforts.
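A tiny illustration of that exclusion, with made-up numbers:

```python
# Subtract blocked intervals before a task's elapsed time enters the
# historical actuals. All durations are in days; values are illustrative.
elapsed_days = 7.0
blocked_intervals = [1.0, 0.5]  # e.g. waiting on another team, twice

effective_days = elapsed_days - sum(blocked_intervals)
assert effective_days == 5.5  # only this value feeds the averages
```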
To have enough data for a historical average, we look for a minimum of ten (10) tasks to have moved from start to finish. Of course, the more tasks, the richer the historical average, and the better the projections.

But with tasks varying so much in size and complexity, doesn't averaging them produce a meaningless number?

Not at all! Why? The law of large numbers (LLN). Over the course of enough tasks, the inevitable differences in size/complexity among them basically come out in the wash.
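To see why, here's a quick, self-contained simulation: draw task durations from a deliberately lumpy, made-up distribution and watch the running mean settle as the task count grows.

```python
import random

random.seed(42)

# Task durations from a made-up distribution: lots of small tasks,
# occasional big ones.
durations = [random.choice([0.5, 1, 1, 2, 2, 3, 8]) for _ in range(1000)]

running_total = 0.0
for n, d in enumerate(durations, start=1):
    running_total += d
    if n in (10, 100, 1000):
        # The running mean swings early, then stabilizes as n grows.
        print(f"after {n:4d} tasks: mean = {running_total / n:.2f} days")
```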
In our current model, we’re concerned with total elapsed calendar time. That is, if work on a task starts on a Monday, and completes the following Monday, the total effort (i.e. elapsed duration) was seven days.
But assuming no work was performed over the weekend, isn’t this “inflating” the effort by two days?
Maybe. Our initial theory was that here again, over enough tasks, any weekend work would come out in the wash. Meaning, sometimes weekend work happens, other times it doesn't; by looking at total elapsed time you keep things simple. But it may turn out that weekend work is rare enough (fingers crossed) that we should exclude weekend days from historical actuals.
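To make the two counting rules concrete, here's a small sketch comparing total elapsed calendar days with weekday-only counting for a Monday-to-Monday task; the dates are illustrative.

```python
from datetime import date, timedelta


def elapsed_days(start: date, end: date) -> int:
    """Total elapsed calendar time: the current model."""
    return (end - start).days


def business_days(start: date, end: date) -> int:
    """The alternative: count only weekdays between start and end."""
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
        d += timedelta(days=1)
    return days


start, end = date(2023, 5, 1), date(2023, 5, 8)  # a Monday to the next Monday
print(elapsed_days(start, end))   # 7
print(business_days(start, end))  # 5
```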
Aside from the time saved by having estimates created automatically, there’s another benefit to this model: it’s responsive as people and work change.
Consider the example of a new hire. In most cases, the average time it takes that person to complete a task in Month 1 is going to be different than in Month 10. They’ve become more familiar with the people, processes, technologies, and so forth: they’re an integrated team member. These kinds of improvements or maturations simply can’t be accounted for manually, using story points.
Of course, no amount of data science removes the real hard work in estimating—breaking a business request into its logical technical parts. Figuring out all the moving pieces. But this is where you want your engineering brainpower spent. Not in story point debates.