Big companies with well-established products and business practices often differ from smaller, younger startups in the cash flow available to prove business ideas. Startups often talk of their “runway” and “burn rate”, meaning they have a limited amount of cash to keep the lights on. Big companies have budgets, committees and risk departments. Those companies may take a risk on a new product, but if it fails the lights will stay on and people will more than likely still have a job. Established companies will also invest in existing products to strengthen their market position, which may result in growth targets rather than a focus on validating those targets.
New products should always look for market validation, regardless of the size of the company. Just look at how long the HP TouchPad (48 days) and the Microsoft Kin phone (49 days) lasted in the marketplace before being withdrawn. HP reportedly went on to lose up to $300m after cutting the TouchPad and the associated webOS product line, along with job cuts. So much for market validation and their risk department. Another huge loss was RIM’s PlayBook, reported to have lost the company $1.5bn at a time when it desperately needed a boost in the marketplace amid falling BlackBerry sales.
Not since the dot-com boom have startups had anywhere near that sort of money to take a product to market. While these examples are hardware products, with the associated research and development costs, there is a lesson about validation in them regardless of the size of the company. Software companies seem to fail in one of three ways: massive dot-com blowouts (e.g. Webvan and Pets.com); being squashed by a competitor (Friendster to MySpace to Facebook); and the ones you’ve never heard of because they failed before they made it.
That’s not to say that companies using metrics and data to drive their decisions never push the process too far; Google’s famous test of 41 shades of blue is probably the best-known example. However, a little information can go a long way.
How should a company with limited cash flow proceed? It needs to know that every feature built is working towards a larger goal, is required, and is ultimately validated by the users. How does a company know when a feature has been validated?
Each new feature should have validation criteria, e.g. “Removing password confirmation should result in fewer users getting stuck and leaving at the registration stage”. This then needs to be measured, which ideally means having a baseline measurement in place before the feature is built.
There are a number of tools available to start gathering metrics. Google Analytics is good for general visitor information, while tools like KissMetrics have gained support for more data-driven demands. There are also open source alternatives such as statsd and Cube if the project has the time to invest in its own tooling. If the hypotheses are more visually driven, tools such as CrazyEgg have proved useful for validating calls to action and discoverability within an application.
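These tools are cheap to feed. As a sketch in Python (assuming a statsd daemon listening on its default port 8125; the metric names are hypothetical), a few lines are enough to start counting funnel events using statsd's plain-text line protocol:

```python
import socket

# Assumed: a statsd daemon on localhost:8125 (the default).
STATSD_HOST, STATSD_PORT = "localhost", 8125
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def counter_payload(bucket, value=1):
    # statsd counters use the line protocol "name:value|c".
    return f"{bucket}:{value}|c".encode("ascii")

def increment(bucket):
    # Fire-and-forget over UDP, so instrumenting the app adds
    # no latency and can't fail a user-facing request.
    sock.sendto(counter_payload(bucket), (STATSD_HOST, STATSD_PORT))

# Hypothetical funnel events: count each stage so drop-off
# between stages can be compared later.
increment("registration.started")
increment("registration.completed")
```

Counting the start and completion of each funnel stage like this gives the baseline numbers that a hypothesis can later be measured against.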
As an example, something as simple as CrazyEgg can show where users of an application get confused. On a recent project we deployed CrazyEgg on the homepage and almost immediately we could see that users were clicking on headings like they were links. You could almost sense their frustration through the heatmap when a click didn’t change the page.
On another project we could see we were dropping a high number of users at a certain point in the application funnel, so we did some user testing focused on that area. We found that the application, which showed a “real-time” price for the service being quoted, built in some assumptions based on defaults the user had selected earlier. Because users weren’t told this had happened, they were put off by the high quote. We could then hypothesise about what might increase the throughput of the funnel, make those changes and see which one worked best.
So you have the tools, you have the talent. Now what? You need to know where the application currently stands on the metrics that drive the business. When a new feature is specified it should include a hypothesis about which metrics will change when the feature goes live. The feature gets built and goes live. How do you follow what happens next? Most tools and processes stop here: the feature has been accepted and is live, so what more do we need? Those features should be monitored while the product team measures the effectiveness of the change. Did the feature make the hypothesised impact? If not, should it be re-evaluated or thrown out?
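That last step, checking whether the change moved the metric by more than chance, can be as simple as a two-proportion z-test on the before and after counts. A minimal sketch in Python (the registration counts below are invented for illustration):

```python
from math import sqrt, erf

def conversion_changed(success_a, total_a, success_b, total_b):
    """Two-proportion z-test: returns (difference, two-sided p-value)."""
    p_a, p_b = success_a / total_a, success_b / total_b
    # Pooled proportion under the null hypothesis of no change.
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Hypothetical: registrations completed out of started,
# before and after removing password confirmation.
diff, p = conversion_changed(420, 1000, 480, 1000)
print(f"conversion changed by {diff:+.1%} (p = {p:.3f})")
```

A small p-value says the shift is unlikely to be noise, which is the evidence needed to keep the feature; otherwise it goes back for re-evaluation.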
Step one to a successful business: know the product, understand the users and always be validating.