To make the right decision, use the right data.
When it’s time for a tough decision, it’s time to use data. The idea is that data removes biases and opinions so the decision is grounded in the fundamentals. But using the right data the right way takes a lot of discipline and care.
The most straightforward decision is a decision between two things – an either-or – and here’s how it goes.
The first step is to agree on the test protocols and measurement systems used to create the data. To eliminate biases, this is done before any testing. The test protocols are the actual procedural steps to run the tests and are revision-controlled documents. The measurement systems are also fully defined. This includes the make and model of the machine/hardware, full definition of the fixtures and supporting equipment, and a measurement protocol (the steps to do the measurements).
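As a rough sketch, the agreed protocol and measurement system can be captured as small, revision-controlled records before any testing begins. The field names and document numbers below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the agreed protocol shouldn't change mid-test
class TestProtocol:
    doc_number: str   # hypothetical, e.g. "TP-107"
    revision: str     # e.g. "C" -- bumped only through formal revision control
    steps: tuple      # the actual procedural steps, in order

@dataclass(frozen=True)
class MeasurementSystem:
    machine_make_model: str              # make and model of the machine/hardware
    fixtures: tuple                      # fixtures and supporting equipment
    measurement_protocol: TestProtocol   # the steps to do the measurements

# Agreed before any testing, then referenced by every chart and data file.
protocol = TestProtocol("TP-107", "C",
                        ("Mount sample", "Apply load", "Record deflection"))
```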
The next step is to create the charts and graphs used to present the data. (Again, this is done before any testing.) The simplest and best is the bar chart – with one bar for A and one bar for B. But for all formats, the axes are labeled (including units), the test protocol is referenced (with its document number and revision letter), and the title is created. The title defines the type of test, important shared elements of the tested configurations and important input conditions. The title helps make sure the tested configurations are the same in the ways they should be. And to be doubly sure they’re the same, once the graph is populated with the actual test data, a small image of the tested configurations can be added next to each bar.
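Here’s a minimal sketch of that A/B bar chart in Python with matplotlib – labeled axes (including units) and the protocol’s document number and revision letter in the title. All names and values are placeholders:

```python
import matplotlib.pyplot as plt

# Placeholder results for configurations A and B (populate after testing)
configs = ["Concept A (rev 2)", "Concept B (rev 5)"]
results = [42.0, 51.0]  # hypothetical values

fig, ax = plt.subplots()
ax.bar(configs, results)
ax.set_xlabel("Tested configuration")
ax.set_ylabel("Cycles to failure (thousands)")  # axis labeled, with units
# Title: type of test, shared elements, input conditions, protocol doc + rev
ax.set_title("Fatigue test, common housing, 80 N load – per TP-107 rev C")
plt.savefig("ab_comparison.png")
```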
The configurations under test change over time, and it’s important to maintain linkage between the test data and the tested configuration. This can be accomplished with descriptive titles and formal revision numbers for the test configurations. When you choose design concept A over concept B but unknowingly use data from the wrong revisions, it’s still a data-driven decision – it’s just the wrong one.
But the most important problem to guard against is a mismatch between the tested configuration and the configuration used to create the cost estimate. To increase profit, test results want to increase and costs want to decrease, and this natural pressure can create divergence between the tested and costed configurations. Test results predict how the configuration under test will perform in the field. The cost estimate predicts how much the costed configuration will cost. Though there’s a strong desire to have the performance of one configuration and the cost of another, things don’t work that way. When you launch, you’ll get the performance AND cost of the configuration you launched. You might as well choose the configuration to launch using performance data and cost data as a matched pair.
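One way to enforce the matched pair is a hard check that the test data and the cost estimate reference the same configuration revision before any comparison happens. A sketch, with invented record fields:

```python
def matched_pair(test_result: dict, cost_estimate: dict) -> tuple:
    """Return (performance, cost) only if both records describe
    the exact same configuration and revision."""
    if (test_result["config"], test_result["revision"]) != \
       (cost_estimate["config"], cost_estimate["revision"]):
        raise ValueError(
            f"Mismatch: tested {test_result['config']} rev {test_result['revision']}, "
            f"costed {cost_estimate['config']} rev {cost_estimate['revision']}"
        )
    return test_result["performance"], cost_estimate["cost"]

# Hypothetical records: the test was run on rev 3, the cost estimate on rev 4
test = {"config": "A", "revision": 3, "performance": 51.0}
cost = {"config": "A", "revision": 4, "cost": 12.80}

try:
    matched_pair(test, cost)
except ValueError as e:
    print(e)  # the pair isn't matched, so no decision yet
```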
All this detail may feel like overkill, but it’s not, because the consequences of getting it wrong can decimate profitability. Here’s why:
Profit = (price – cost) x volume.
Test results predict goodness, and goodness defines what the customer will pay (price) and how many they’ll buy (volume). And cost is cost. And when it comes to profit, if you make the right decision with the wrong data, the wheels fall off.
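To see how hard the wheels fall off, run the formula with made-up numbers: if data from the wrong configuration overstates goodness, both price and volume are overstated, and the errors multiply:

```python
def profit(price, cost, volume):
    return (price - cost) * volume

# Hypothetical: data from the wrong revision predicted premium performance...
predicted = profit(price=100, cost=60, volume=10_000)  # (100-60)*10,000 = $400,000
# ...but the launched configuration performs worse, so customers
# pay less and buy fewer
actual = profit(price=90, cost=60, volume=7_000)       # (90-60)*7,000 = $210,000
print(predicted, actual)  # nearly half the profit gone from one data mismatch
```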
Image credit – alabaster crow photographic