I’ve been hearing an excuse lately for avoiding experiments and “getting out of the building.” It boils down to this: “if the results don’t have clarity and repeatability, then why test in the first place?” Or put another way, “if you can’t perfectly design the experiment and isolate a single variable, and if you can’t have absolute confidence in your results, then what is the point?”
Here’s a truth about startups and new products: understanding test results and root causes is often really hard. Yet it rarely makes sense to spend the time and money to get statistical significance or perfect clarity. We need to exercise judgment and intuition to interpret results, but that does not invalidate the usefulness of getting outside of our own heads. Sticking one’s head in the sand is not a valid approach.
When I discussed this challenge with my project-teammate Jon Berger, he said, “We test to uncover clues, not facts.”
I thought that was an excellent phrase that honestly acknowledges the purpose and limits of lightweight testing. You are getting facts within your test, but only clues for the world beyond. You might be getting metrics, but still have to understand the “why” behind them. After all, we’re talking about human beings here.
Avoiding experiments altogether is as foolish as running over-designed, over-resourced tests in pursuit of perfect clarity.*
You won’t hear me argue with the premise that you want to structure your experiments intelligently, but I don’t consider the muddiness of results data to be a reason to avoid the process. You want and need both vision and validation. You want both intuition and data-driven iteration.
* When it comes to tests, the descriptors I think you want to shoot for are nimble, lightweight, creative, prioritized, and iterative. I didn’t say “frequent” because in the “think, make, check” or “build, measure, learn” cycle, there are times when it is OK not to be testing (when you are implementing the learnings of previous tests and thus teeing up new ones).