Performance stories – Writing two stories instead of one

TL;DR

Estimating performance stories leads to total bike-shed conversations. Instead:

  1. Write a figure-out-what-to-do story which generates the knowledge to estimate #2 (with repeatable benchmarks)
  2. Now estimate and execute the actual story around doing the optimization (and compare benchmarks)
  3. Repeat until your slowness is gone

Estimating Optimization is hard

Don’t Pre-optimize

We are always told never to pre-optimize our code. This is generally a good practice; it tends to keep you focused on writing readable, correct code. It also keeps you from making the common mistake of optimizing things that you think are expensive but are nowhere near the actual hot spot.

Okay, okay, but something really needs it

Given enough time, though, some part of the software you are writing will eventually become slow enough that it actually affects the usefulness of the code. A product manager should eventually see these performance-related issues and request stories around fixing them.

Avoid the Bike-Shed Conversation

Unfortunately, on many occasions I’ve sat in a planning meeting being asked to fix the aforementioned performance issues, and I’ve nearly always seen a huge bike-shed conversation snowball, taking up lots of time and rarely generating actual value, such as deciding what to optimize and how. Even worse is attempting to estimate the difficulty of optimizing: the effort can vary wildly. Sometimes simple caching can make a huge impact in a couple of lines of code; sometimes extravagant new algorithms are required that take weeks to write and test.

Estimating Two Stories instead of One

Optimizing might be hard, but avoiding wasting copious time estimating stories in Tracker is easy: write two stories.

The Find-Something-Juicy-and-Benchmark-It Story

The first story is simple: analyze your code. Instrument it as best you can; it can be hard, but make sure you get real numbers, and record your efforts. Look for low-hanging fruit: code that is both taking lots of time and that you have a couple of ideas about how to make faster. The primary output of this story is your benchmark results and an estimate of the work it will take to optimize. It may be useful to timebox this work; that will make your product manager happy, because they know you won’t be running off into the distance chasing things that aren’t going to deliver value.
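As a sketch of what “real numbers” can look like, here is a minimal repeatable benchmark harness in Python. The function names and the workload are hypothetical, not from the original post; the point is just warmup runs plus a median over several samples, so the number you record in the story is reproducible.

```python
import statistics
import time

def benchmark(fn, warmup=3, runs=10):
    """Time fn() over several runs and return the median seconds.

    Warmup runs reduce noise from cold caches; the median is less
    sensitive to outliers than the mean.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical hot spot found while instrumenting.
def build_report():
    return sum(i * i for i in range(100_000))

baseline = benchmark(build_report)
print(f"build_report median: {baseline:.4f}s")  # record this in the story
```

Checking the script into the repo means the second story can rerun it unchanged, which is what makes the before/after comparison trustworthy.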

The Actually-Fix-It, Estimatable Story

The next story is also simple: do the optimization, then run the benchmark again to confirm you got results. Going off anything but repeatable benchmarks is just silly. Having simple metrics that prove you actually sped things up is also very important, since most of the time it is the only way a product manager can truly accept your story. Telling them some batch process ran 30 percent faster is not fun for a product manager; letting them see a graph that proves it is.
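The acceptance step can be just as mechanical as the measurement. A small sketch, assuming you recorded the first story’s benchmark number; the 30 percent threshold and the sample timings here are illustrative, not from the original:

```python
def speedup(before_seconds, after_seconds):
    """Fractional improvement over baseline: 0.3 means 30% faster."""
    return (before_seconds - after_seconds) / before_seconds

# Hypothetical numbers from running the same benchmark before and after.
before = 12.0
after = 8.4
improvement = speedup(before, after)
print(f"{improvement:.0%} faster")  # -> 30% faster

# The story's acceptance criterion becomes a repeatable check, not a debate.
assert improvement >= 0.30
```

Dropping numbers like these into a simple chart gives the product manager the graph that makes the story easy to accept.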

Save Your Breath and Overestimates

Now your PM doesn’t have to choke on an hour-long conversation, or on a story grossly over-estimated to cover the risk of actually delivering an optimization. All you had to do was stop spending time estimating things that are really hard to predict.

Comments
  1. Jim Kingdon says:

    I’ve had good luck with the two-story approach described here. In our case the output of the first story was a test script which could be run against the app and output a performance number (in our case it was in requests per second, or number of concurrent users, or number of users in the database, or something like that). This script served as the acceptance test for the second story.

  2. Glenn Jahnke says:

    Writing tests around performance is always great, especially so you don’t regress the performance you can sometimes spend a lot of time trying to get. Unfortunately, in my experience it’s usually hard to write tests that you keep in your build, because you usually want to test a whole system, not just some small component; timing small loops has large variations between your dev box and your CI box. It’s always cool when you can do a lot of work and then come back to your perf script and use it again to see how you’re doing.
