A Guide to Lean Product Management

Giff Constable

In my previous post, I said that teams need to stop shipping features and focus on creating value. I promised a tactical view of how an existing organization (i.e. not a ground-floor startup) could bring about that change. I will start with a little tl;dr summary:

10 Characteristics of a Lean* product team:

  1. you are composed of small, goal-driven, cross-functional teams
  2. each team is tasked with improving a critical business metric (KPI)**
  3. features begin as hypotheses to be tested before heavy investment
  4. features come not just with acceptance criteria but success criteria
  5. a feature starts as a minimum valuable feature, and then iterates
  6. proof carries more weight than opinion
  7. the team talks to real customers on a regular basis, including in person
  8. the team works in agile sprints, with close collaboration across all roles
  9. the team communicates regularly with the rest of the organization, and is transparent about priorities and work-in-process
  10. each team has regular checkpoints where it decides to stop, change, or double down on pursuing the KPI

* the Eric Ries version of lean, aka “lean startup”
** with the exception of a critical infrastructure investment that is meant to prevent a metric from falling apart, as opposed to significantly improving it

If you aspire to that list, how do you get there?

The first prerequisite is a strong leader who has a vision for the business and also buys into lean concepts, at least in theory. If you have a leader who will try to support this new structure, then let’s move on!

Team Structure

It is time to toss out the feature roadmap, shatter the functional silos, and leave the industrial revolution behind. You want to split your product organization into cross-functional, goal-focused teams. Give each team a problem to solve, a goal to reach, and the freedom to:

1. experiment to reach that goal
2. say no to things that are distractions from that goal

My ideal team starts with a base of: a product owner (a clear team leader), a UXer (who ideally can do some UI and front-end work), and 2-4 developers.

The product owner needs to have a vision for the product, the strength to keep things focused and moving, the humility to empower their teammates, and the ability to communicate well both up and across the organization. This person can be, but does not need to be, a formal product manager. They should lead the charge for experiments and “getting out of the building,” and own analysis of the results. They should be able to pitch in on execution and do whatever it takes to help the team reach their goals.

Across your teams, you also want solid leadership in place that can keep a holistic view of the entire product and make sure that the teams are communicating and not conflicting with each other (or other important parts of the business). I am also a fan of the player/coach model that my Proof partner Jeff Gothelf implemented at The Ladders. He managed and mentored the UX staff, but he also was the primary UX person on one of the goal-focused teams.

OK, this sounds dandy in theory, but how do you keep teams focused?

You need a solid foundation with senior management, who have the power to support or disrupt focus. You want senior management to feel ownership of how this new effort is applied. In other words, they need to prioritize the “key performance indicators” (KPIs) of the business. By prioritize, I literally mean force-ranking KPIs.

Step 1. KPI Prioritization (1 hour)

Here is a simple exercise you can run:

Tools: a whiteboard and two different-colored markers, OR index cards, a sharpie, and M&Ms.
Attitude: welcome participation, but run the meeting efficiently and keep it focused.

The meeting:
1. Get key organizational stakeholders into a room.

2. Write down all the metrics that really matter to the business.

Tips:

  • this can be on a whiteboard or on index cards spread on the table
  • group the metrics into categories such as Financial Targets, Engagement, or Growth
  • if you need a framework to loosen this up, look at Dave McClure’s Startup Metrics for Pirates
  • think about non-digital metrics (for example, customer support calls).
  • think about metrics that touch upon technical debt, such as uptime and performance.
  • it is ok to have metrics which affect each other, but try to minimize direct duplication
  • participants will inevitably start talking about features. Cut this off and keep people focused on results – i.e. things to measure.

3. Include a category for investments you need to make, such as new technical infrastructure components. Sometimes necessary work isn’t focused on improving a metric but rather preventing a metric from falling apart.

4. Give everyone 3 votes. Each person can choose 3 specific things to vote for (no category headers). I allow people to put more than one vote on an item.

Tips:

  • voting is not a time for discussion or debate. Keep the meeting focused.
  • First, ask people to privately choose their votes in order to prevent groupthink.
  • Then ask people to “dot-vote”, which means placing a dot next to the metric. You can do this on the whiteboard with a marker (ideally a different color from the text), or with M&Ms placed on index cards.

If you have the resources for 2 teams, then choose 2 KPIs. If you have 3 teams, choose 3 KPIs. The priorities might be obvious based on the votes, but if not, you can now allow discussion and debate.
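To make the tallying concrete, here is a minimal sketch of counting dot-votes and picking one KPI per team. The metric names and vote counts are invented for illustration; the mechanics are just a counter:

```python
from collections import Counter

def top_kpis(votes, num_teams):
    """Tally dot-votes and return the top-N metrics (one KPI per team).

    `votes` is a flat list of metric names, one entry per dot placed;
    a participant who puts two votes on one metric appears twice.
    """
    tally = Counter(votes)
    return [metric for metric, _ in tally.most_common(num_teams)]

# Example: 4 stakeholders, 3 votes each, 2 teams available
votes = [
    "activation rate", "activation rate", "churn",
    "churn", "revenue per user", "activation rate",
    "uptime", "churn", "churn",
    "revenue per user", "activation rate", "uptime",
]
print(top_kpis(votes, 2))  # → ['activation rate', 'churn']
```

Even if the votes live on a whiteboard and never touch a computer, thinking of the step this way keeps it mechanical: count dots, take the top N, and save the debate for genuine ties.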

Try to focus on things that feel *actionable*. For example, NPS might be critical to the business, but it is hard to tackle from a product perspective.

The exercise and voting are very useful, but ultimately this is not a democracy. You need a strong leader to make the final decisions. You *want* the CEO to decide that these are the most important goals of the business right now, so that each team is empowered to stay focused on its mission. [Addition: this isn’t about dictating details. The CEO should set the vision, give a team a problem, and let them figure out how to solve it.]

I would also add that you need to take relevant expertise into account. For example, if your VP of Engineering says that a piece of infrastructure will crater in months given the growth trajectory, then you probably need to get on that, no matter how many votes it gets.

OK, that was a long description, but the meeting can actually be fun, fast and painless.

Step 2. Hypothesis Creation (1 hour)

You should now have a KPI and a team to attack it. So what are your ideas for how to reach that goal?

Gather the team and any critical stakeholders and do another brainstorm exercise. Lay out all the ideas you have that could significantly improve the KPI. Brainstorm practical ideas, low-hanging fruit, but also more creative and out-of-the-box ideas. You can write them down as features, but everyone needs to view these as hypotheses: if we do “X”, it will have a meaningful impact on our KPI “Y”.

Put a star next to the ideas that people feel are most important. Meeting adjourned!

Step 3. Weighting (1 hour)

Shrink the meeting down to just the goal-focused team (or even further to just the product owner, UX person, and one of the devs).

Keep the hypotheses that were given a star, and add anything critical that was missed. Put the most important ones up on a whiteboard. Dump the rest of the ideas into an icebox. Add a column for work effort involved, a column for guessed impact, and a column for risk level.

The team lead should go through the list and, with team input, assign Low / Medium / High in each column for each idea.
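If you want a rough sort of the list afterwards, the Low/Medium/High columns can be collapsed into a score. The weighting below is my own illustrative assumption, not a formula from this process, and it should inform team judgment rather than replace it:

```python
# Map L/M/H to numbers and rank: favor high impact, penalize effort and risk.
# The specific weights are an illustrative assumption, not a prescribed formula.
SCORE = {"L": 1, "M": 2, "H": 3}

def rank_hypotheses(hypotheses):
    """Sort hypotheses by guessed impact minus effort and risk, best first."""
    def score(h):
        return SCORE[h["impact"]] - SCORE[h["effort"]] - SCORE[h["risk"]]
    return sorted(hypotheses, key=score, reverse=True)

# Hypothetical ideas from the brainstorm
ideas = [
    {"name": "simplify signup flow", "effort": "M", "impact": "H", "risk": "L"},
    {"name": "rebuild search",       "effort": "H", "impact": "H", "risk": "H"},
    {"name": "email reminders",      "effort": "L", "impact": "M", "risk": "L"},
]
for h in rank_hypotheses(ideas):
    print(h["name"])
```

A spreadsheet does the same job; the point is simply to make the effort/impact/risk trade-off explicit before anyone starts arguing about favorites.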

Now, this is where a lot of people stop. They take the list and effort/impact/risk weighting, and proceed to prioritize the backlog, write user stories, and execute.

Not so in our land of “lean startup” / agile 2.0!

Step 4. Experiment Creation

For each feature, examine how you can test it before building it. For bigger things, you may want to attack the idea in more ways than one. Establish what you want to measure and who you want to run the test with, set a target goal for your experiment, and execute.

Experiments come in many shapes and sizes. You can have a fake button to see if anyone clicks (ideally with a second level of data collection such as a survey question to ensure the click wasn’t purely curiosity). You can paper-test or make clickable prototypes. You can try to pre-sell a solution. You can try a feature out on 10% of your user base. A common approach is to try your idea in a manual way, and automate upon success. The right experiment(s) depends on your context.

Every hypothesis can be tested with potential customers in some way or another, and you need to use your judgement to decide how. Some experiments require more dev work than others, but try to keep it lightweight. If you don’t run lean experiments already, then you will be amazed at how much you can learn from even just a little test.
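As a sketch of what “set a target goal for your experiment” can look like in practice, here is a hypothetical fake-button test with its success criterion declared up front. The feature, numbers, and names are all invented for illustration:

```python
# A minimal sketch: declare the success criterion BEFORE the experiment
# runs, then judge the results against it rather than against opinion.
def evaluate_experiment(shown, clicked, target_rate):
    """Return (observed_rate, passed) for a fake-button style test."""
    observed = clicked / shown if shown else 0.0
    return observed, observed >= target_rate

# Hypothesis: if we add an "export to PDF" button, >= 5% of viewers click it.
rate, validated = evaluate_experiment(shown=400, clicked=30, target_rate=0.05)
print(f"{rate:.1%} clicked -> {'build it' if validated else 'icebox it'}")
```

Writing the target down first is the discipline that matters; the arithmetic is trivial, but a pre-committed threshold keeps the team from rationalizing a weak result after the fact.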

Step 5. Backlog Creation

You are now armed with the information to fill out your user-story backlog and the top of your icebox. For each major idea you are trying out, make sure you have decided what success looks like, and track your results against that goal.

In the backlog, experiments come first. The stories for an actual feature should remain in the icebox until the experiment has validated the idea.

Don’t get bogged down in debates over feature priority. Let the data from the experiments do the talking, not egos. Furthermore, don’t get lost in pipe dreams. Start with a “minimum valuable feature” and iterate from there if it is successful (don’t be afraid to iterate or kill an idea that doesn’t work out).

Step 6. Execution

When it comes to experiments, be willing to fail, and get uncomfortable! If you prove an idea is great, that’s a win. If you prove that an idea is a dud, and catch it early thus saving yourself wasted time and money, remember that this is also a huge win.

For the software development process, I am a fan of weekly sprints with user stories estimated at 1, 2, and 4 based on complexity. If anything is an 8, then it is too large and needs to be broken down. The product owner is responsible for writing the stories and ensuring a healthy backlog. Developers, in most cases, should take the top, most-important story on the backlog.

Things are a little more complicated if you are the UX team member. You need to look at the backlog, and then further out to ideas that aren’t yet user stories. You need to evaluate and balance what you need to do each week. Some of it will be supporting the sprint in real time, and some will be getting ahead of things. You need to ensure that 1. the sprint does not get bottlenecked on UX, and 2. you are able to do the research, testing, and thought necessary to do good work. Lean UX is a big topic unto itself, and Jeff Gothelf is writing a book about it. Watch Jeff’s 5-minute Ignite talk for starters.

Analytics and A/B test infrastructure are critical but don’t forget to talk to people in person. I like the online testing tools, but there’s nothing like direct human contact for insights. Ideally, you can create a shared user testing process for all of the teams. Bring in 3-4 people a week and try out prototypes, live software, or even competitive products on them.
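When you roll a variant out to a slice of users, you also need to know whether the difference you observe is real or noise. A standard statistical check for this, not specific to this post, is a two-proportion z-test; the sketch below uses invented numbers:

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates
    (a standard two-proportion z-test; not a technique from the post)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical 10%-rollout numbers: control vs. variant
p_value = ab_significance(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(f"p = {p_value:.3f}")  # conventionally "significant" if p < 0.05
```

A p-value is no substitute for talking to people, as noted above, but it keeps a promising-looking lift from being declared a win on too small a sample.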

Externalize!

Each team should publicly share what they are doing in a way that non-product folks can understand. For example, give each team a wall to show:

  • their KPI and the target goal they want to hit
  • a high-level view of the priority list (constantly updated)
  • the high-level stuff in process
  • the stuff complete this sprint (cleared each week)
  • wireframes and mockups of work in progress
  • results of experiments run (or running)

Foamboards, index cards, sharpies, thumbtacks, and print-outs are your friends!

Step 7. Stop/Change/Continue Checkpoints

At a pre-planned interval, look at the work done so far and the results achieved. Decide if you want the team to stop, change things up, or double down. Each team can have a different time period for its stop/change/continue checkpoints. For example, you might need to give a team focused on your churn rate several months to evaluate if they can truly make progress. Elsewhere, you might time-bound a team focused on site performance to a week or three.

I suspect that it will be worthwhile to periodically align the checkpoints so that you can change up the teams. It’s always good to get fresh eyes onto a problem, and mix things up so that people feel ownership of the entire product, not a particular silo.

Final note

I have tried to jam as much as I can into this mega-post. Please let me know if you have any questions. Of course, I am learning and growing every day, and so I reserve the right to evolve my thinking on everything here!

Thank you to Andres Glusman, Josh Seiden, and Anil Podduturi for comments.