Last week, I helped organize a small roundtable of product and design leaders across several successful NYC startups. One question we discussed was “how do you know when to ship?”
There was clear consensus that you should separate the question of when to *market* from the question of when to start letting users touch a working piece of software. The rule of thumb for starting to do marketing is “once things are working well.” The rule of thumb for getting the product into real customers’ hands is “as soon as possible.”
I found myself in an interesting situation recently as we tried to validate a new innovation idea for a large enterprise client. We had a compelling idea, albeit in a very complex space. Here is a mini “trip report”:
In week #1, we boned up on the space, talked to target customers to validate that the problem was real, and paper tested a few ideas. We planned to spend the second week building a focused Ruby on Rails web app, and to get it into customers’ hands after that.
By Thursday of week #2, we knew that we had something to ship that would be a decent experiment. We had also been continuing our qualitative research with customers during this week, and unfortunately realized that our solution did not make sense. We had a decision to make — should we put the MVP in front of users anyway?
In this case, we were not working on a consumer app. We did not have a near-unlimited pool of users to test with. I didn’t want to go to the well too many times with an obviously wrong product, even though we no doubt would have learned things.
Instead, we decided to hold off “shipping”. We pivoted our solution, and spent week #3 creating a new baseline MVP. Then the same thing happened. Continued qualitative discussions brought us to the conclusion that this pivot was not going to work either.
So at the start of week #4, we executed another product pivot. This time our design passed initial sniff tests, and we put the MVP in the hands of customers to examine usage, conversion rates, and sharing rates. It felt great to finally get solid quantitative data to balance the qualitative.
I was responsible for deciding to pivot before putting live code in the market. I remain conflicted about whether it was the right thing to do.
In this case, my gut said that once you are convinced you are on the wrong track, then you should not waste time studying that wrong answer but instead focus on what you hope is the right answer. However, I also know that you learn a tremendous amount by being in the market, even if in a controlled, limited way.
I am always trying to do lean better, faster, more focused, more actionable…while staying flexible to a project’s specific context. I am constantly running into these questions:
- How long should an experiment run?
- Is failure based on the idea or the implementation?
- When should you switch from focused experiments to a true MVP?
- When is an “MVP” both minimal and viable?
- How should you interpret mixed data?
It is a fascinating and endless struggle.