We’re always thinking about “minimum viable process”: the least process that gets the job done. The answer changes as your team scales, but I think it’s always worthwhile to fight a running battle for no more prescriptive process than you need.
This affects everything from how you treat agile to how you approach research, how you weave “lean startup” concepts into your work, and how you communicate with the rest of the organization. Rather than create a rigid process, we created a simple checklist to keep in front of ourselves as a product team, and I thought I would share it here:
Prioritization Phase (asking if we should work on it)
- Do we have a clear understanding of the goal we’re trying to achieve or the problem we’re trying to fix?
- Is the work aligned with a strategic initiative / imperative?
- Have we done at least a basic weighting of benefit / effort / risk? We ask ourselves about both execution risk (can we build it well?) and impact risk (how confident are we in the ROI?).
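For teams that want to make the weighting explicit, here’s a minimal sketch of what a benefit / effort / risk comparison might look like. The formula, the 1–5 scales, and the example feature names are all assumptions for illustration; the checklist itself doesn’t prescribe any particular scoring model.

```python
# Illustrative only: a toy weighted score for comparing candidate features.
# Scales (1-5) and the formula are assumptions, not part of the checklist.

def priority_score(benefit: float, effort: float, risk: float,
                   confidence: float = 1.0) -> float:
    """Higher is better: benefit discounted by impact-risk confidence,
    divided by total cost (effort plus execution risk)."""
    return (benefit * confidence) / (effort + risk)

# Hypothetical candidates for a prioritization discussion
candidates = {
    "inline search":  priority_score(benefit=5, effort=2, risk=1, confidence=0.8),
    "full redesign":  priority_score(benefit=5, effort=5, risk=4, confidence=0.5),
}

for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

The point isn’t the math; it’s forcing the conversation about how confident you really are in the benefit before committing the effort.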
Design Phase (figuring out the details of what “it” would be)
- What is the goal of the feature?
- How can/should we measure that goal?
- Are we listening to input from the rest of the organization?
- Are we listening to input from customers?
- Are we gathering creative input from engineering?
- Are we designing the minimum viable version?
- What risks do we foresee?
De-risk Phase (building our confidence)
- Are we testing the major risks in as lightweight a manner as possible while still gathering believable intel?
- Are we communicating results to interested parties (internal or external) from the design phase?
Build Phase (making it real)
- Are we communicating status (especially changes to design or timing) to the rest of the org in an ongoing and useful way? (i.e. no surprises)
- Are we developing and executing a release plan (both internal and external) with member success and marketing involved?
Release (putting it in customers’ hands)
- Are we releasing this in a low risk way (i.e. effective QA, controlled rollouts, well-coordinated communication, and relevant PMs and engineers on standby)?
- Are we ready with our qualitative or quantitative instrumentation?
Post-release (following up)
- Are we following up by measuring results against the goals we set?
- Are we communicating the results to the org, and especially to interested parties from previous phases?
- Do we have any immediate iterations/fixes that need to be prioritized?
- Can we learn anything from how we released/marketed the feature to improve our next effort?