
OpenView conference


Ian McFarland and I attended a two-day Development Forum hosted by OpenView Venture Partners last week in Boston; it’s the second year Pivotal Labs has participated in the event. OpenView has a portfolio of companies from all over (Europe, Australia, the US), each of which has been working on implementing Scrum over the past year, and engineering staff from 10 portfolio companies attended. Jeff Sutherland (among other things, the co-creator of Scrum) is a Senior Adviser to OpenView; he provides advice and guidance to the portfolio companies as they progress through their Scrum adoption, and he gave a talk at the Forum. Pivotal Labs was invited to speak and lead discussions on two core topics: Developer Testing and the Principles of Build.

Status & Issues

First, we heard from each company about how its adoption of Scrum was going, and about the following two questions on the subject of technical developer practices:

  1. The goal of every sprint is to have fully tested quality software product ready to go live with customers. Where are you now with relation to this and what stands in your way of getting there?
  2. The goal of every development team should be to have one piece of software code globally, and all work and testing is done on this one piece of code, with multiple builds with built-in testing done in a given day. Where is your team in relation to this goal and what stands between you and getting there?

The 10 portfolio companies come from a disparate set of industries and technical domains, so Ian and I were very interested to hear each company’s history with Scrum and the issues it currently faces with respect to these two questions on developer practices. Some were at a fairly advanced stage – they had good test coverage and a stable CI setup – and some were just getting started. The most frequently stated barriers to achieving fully tested, ready-to-release software were:

  • A perception that the team wasn’t big enough to allow for writing tests
  • An assumption that ongoing developer testing could potentially slow the team down
  • An assumption that adding tests to a code base means first covering all the existing untested code, and therefore stopping development for a long period to retrofit tests
  • A question of whether clients even wanted frequent releases (e.g. once per week) of the software
  • A perception that the team wasn’t big enough to allow for pairing
  • A lingering assumption that QA and not Developers should be responsible for generating all test coverage

Pivotal Labs’ talks

Developer testing

Pivotal’s first talk was on developer testing. There were two main points we wanted to make:

  • The best way we know to get a big jump in quality and to be able to frequently release new versions of software is to rethink who is responsible for testing.

Rather than the traditional model of a QA team being almost solely responsible, consider a shift towards the whole team being responsible, and in particular a much greater emphasis on developers owning quality. For many developers it’s a radical shift.

  • The most effective way we’ve seen for developers to own quality is through the disciplined and sustained practice of TDD.

During the talk I gave a demo of a simple example of strict TDD (a sketch in the same spirit appears below), which gave rise to some useful conversations; as expected, the reactions varied from “yes, that’s what we do” to “that makes no sense!”. Having coded solely with strict TDD for almost 9 years now, and being around Pivots who also test-drive exclusively, it’s always interesting to hear the reactions of people coming to TDD for the first time. The idea that tests are the center of the development effort, and that code is to some extent expendable, is a radical shift in thinking. We also touched on the benefits TDD brings beyond reducing regressions. I find it useful to ask what TDD stands for: “Test Driven Development” or “Test Driven Design”? The notion that TDD helps in designing your object model brought up some interesting discussions (mock objects came up, for example).
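For a flavor of what the demo looked like, here’s a minimal sketch of one strict-TDD cycle in Python – this is not the actual demo code, and the ShoppingCart example and its method names are invented purely for illustration:

```python
# A minimal illustration of one strict-TDD cycle (not the demo from the
# talk; ShoppingCart and its methods are hypothetical).
import unittest


# Step 2: the production code, written only after the test below existed
# and failed ("red"), and kept as minimal as the test allows ("green").
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add_item(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)


# Step 1: the test comes first and drives the design -- the cart's public
# interface (add_item, total) was decided here, not in the class.
class ShoppingCartTest(unittest.TestCase):
    def test_new_cart_has_zero_total(self):
        self.assertEqual(0, ShoppingCart().total())

    def test_total_is_sum_of_item_prices(self):
        cart = ShoppingCart()
        cart.add_item(300)
        cart.add_item(450)
        self.assertEqual(750, cart.total())


if __name__ == "__main__":
    unittest.main()
```

The ordering is the point: the test exists and fails first, the class is written only to make it pass, and the cart’s public interface is decided in the test rather than up front – which is where the “Test Driven Design” reading comes from.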

We also tried to address the barriers to adoption that had been brought up:

  • When introducing tests to a legacy code base, rather than trying to cover everything right away, we suggested (see the sketch after this list):
      • “Stop the rot” – from now on, attempt to test-drive everything, including bug fixes
      • Spend time each iteration/sprint adding some coarse-grained, high-level tests around the legacy code
      • Once there’s at least a basic safety net in place, spend time refactoring towards unit-testability
      • Gradually introduce unit tests over time, with the goal of high test coverage at the unit level
  • Certainly with a small team there may not be bandwidth for an explicit traditional full-time QA role. We pointed out that with developer testing there’s no tester-developer separation: every developer is a tester, so testing isn’t gated by team size at all.
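To make the “coarse-grained, high-level tests” step concrete, here’s a sketch of a characterization test around a legacy entry point. The billing module, the generate_invoice function, and the golden file are all hypothetical stand-ins:

```python
# A coarse-grained "safety net" test for legacy code: pin down whatever
# the system does today, end to end, before refactoring. The billing
# module and generate_invoice are hypothetical stand-ins for some
# untested legacy entry point.
import unittest

from billing import generate_invoice  # hypothetical legacy module


class InvoiceCharacterizationTest(unittest.TestCase):
    def test_invoice_output_matches_current_behavior(self):
        # The expected output was captured by running the legacy code
        # once and saving what it produced -- we assert the behavior we
        # have, not the behavior we might want. Finer-grained unit
        # tests come later, once this safety net is in place.
        with open("golden/invoice_42.txt") as f:
            expected = f.read()
        self.assertEqual(expected, generate_invoice(order_id=42))
```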

We made two points about speed when doing TDD:

  • It can certainly take some time to get proficient and achieve the same short-term speed you had before. However, in our experience, if a team has a member who’s done it before, or whose role it is to advocate for constant TDD, the team members start to get the hang of it much faster than you might expect – 3 weeks or less in many cases.
  • Independent of whether coding a certain feature is faster or slower, the medium- to long-term benefits (catching regressions instantly) are invaluable.
There was an interesting discussion of why we push for frequent releasability even on projects whose clients are known not to want frequent releases. The consensus was that even if there’s no actual need or desire to release frequently, working towards that point brings plenty of benefits: there are fewer surprises when a real release comes due, the “last-mile” problem is often reduced, and people integrate their changes with the main trunk of development more frequently, so there’s less merge hell.


Build

Our second talk was on build. As an ideal to shoot for, we promoted what Pivotal Labs does:

  • Check-ins prompt a build (this makes it clear which changes broke what; see the sketch below)
  • A broken build is an anomaly – teams should immediately stop and fix a broken build
  • Keeping the build fast is critical, so that it stays relevant
  • The build must be easily visible to the whole team. At Pivotal we have 2 large TVs on the wall that clearly show the build status of all our projects
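As a deliberately naive illustration of the first point, here’s what “check-ins prompt a build” boils down to, written as a polling loop. A real team would use a CI server rather than a script like this, and the run_tests.sh command is an assumed stand-in for whatever runs your suite:

```python
# A toy sketch of "check-ins prompt a build": poll for new commits and
# run the suite against each one. Assumes a git checkout and an
# illustrative ./run_tests.sh script; both are assumptions, not any
# particular tool's API.
import subprocess
import time


def head_revision():
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()


last_built = None
while True:
    subprocess.run(["git", "pull", "--ff-only"], check=True)
    rev = head_revision()
    if rev != last_built:  # a new check-in: kick off a build
        result = subprocess.run(["./run_tests.sh"])
        status = "GREEN" if result.returncode == 0 else "RED"
        print(f"build {rev[:8]}: {status}")  # what the wall TVs would show
        last_built = rev
    time.sleep(60)  # keep the cycle short so the build status stays relevant
```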

There was a good deal of discussion over how to get started with a build. Some ideas that were brought up were:

  • You don’t need tests to have a build. Compiling and packaging your code on each check-in, and making sure everything works, still yields very useful information
  • If your build is flaky (i.e. some tests fail seemingly at random), immediately detach the flaky tests to keep the build stable, and work on them in isolation; the team needs to trust the build’s status (see the sketch after this list)
  • If your build is slow, break out a slower suite of tests and run it in a separate process that runs less frequently. Work towards high coverage with fast unit tests so that the status of your fast build is meaningful.

Hopefully our talks were useful. Certainly they sparked plenty of discussion!

Thanks to Jeff Sutherland, Igor Altman and Steve Rabin of OpenView for inviting Pivotal Labs to speak.


  1. Best write-up on TDD, period. My experiences on TDD adoption and practices mirror this 100%.

    Though I would like to add that the world is not _completely_ as black and white as you describe. In terms of QA vs developer testing, I’ve seen tremendous productivity gains and overall balance achieved by combining the two. That was with a total of 4 Rails engineers (50/50 Java and PHP backgrounds), 2 product managers, and 1 QA tester.

  2. Nivi says:

    You know I love you guys so let me jump right into the blunt questions. =)

    1. Do you think the 2 large TVs are useful? I never saw anyone at Pivotal use them for anything. People spend all day looking at Tracker. That’s where the action is. I’m guessing you don’t put Tracker on the TV because clients want their privacy.

    2. My (only?) frustration at Pivotal was the idea that every developer is a tester. I think the idea needs elaboration.

    See page 199 of Poppendieck and Poppendieck’s new(er) book for a finer analysis of testing:

    a. Unit tests prove that the code does what the developers want it to do.

    b. Acceptance tests determine that the code does what the customer wants it to do.

    c. Exploratory tests (traditional QA) determine that the code doesn’t do what the customer and engineers don’t want it to do.

    At Pivotal, there isn’t anybody really doing the exploratory tests. Unit testing is not sufficient. It only proves that the code does what the developers want it to do. It doesn’t prove that the code doesn’t do what the developers and customers don’t want it to do.

  3. Nivi says:

    Strike “unit tests” from my previous comment and replace it with “all the tests that the developers write to specify the design”.

    P.S. I like your slide on “The Old Way”. That is point-based product development. TDD is set-based; it communicates the constraints.

  4. Parker Thompson says:


    re 1) I notice that when a TV goes down for a day at least half the builds go red, suggesting they matter more than you might think. Think of it as a mild form of public shaming, or just as a classic Information Radiator with all the associated benefits.

    Tracker is where the magic happens, but CI makes sure your wand still works. For Pivots — who often don’t read email all day — the TV makes sure we quickly see/fix defects so you can focus on features.

    re 2) I buy into the Elisabeth Hendrickson school of thought, which I’d sum up as: Testing is a way of thinking and we should (and *can*) all do it.

    It’s fine to have someone with the title/exclusive role of “Tester”, but often teams do this to deal with high defect rates that are a result of poor development practices. In other words, testing itself is essential, but you can generally ship quality software if everyone occasionally just tries to think like a tester (or as Elisabeth puts it: put your evil hat on).

    Would you want a dedicated tester if/when working with Pivotal?

  5. Nivi says:


    Re: “Would you want a dedicated tester if/when working with Pivotal?”

    I don’t know the answer to that, but more exploratory testing by somebody would be good. Otherwise the customer has to do it, and that leads the customer to think that the devs don’t care enough about quality, which would be the wrong conclusion. The right conclusion is that there isn’t enough exploratory testing going on.

  6. Nivi says:


    Regarding the TV: now I get it! If we don’t have a TV we need one (builds go red too long). If we do have a TV it seems useless (because builds don’t go red too long). Cool.
