Product Management is really only about two things:
- Understanding the space in which your product exists: DISCOVERY
- Building your product to deliver the most value in that space: DELIVERY
These two are fundamentally linked. If you don’t understand the problems and opportunities in your space, you’ll never address them. If you don’t know your users’ needs, you’ll never fulfil them. Similarly, if you’re not conscious of the product you have and the direction you’re growing in, you won’t ask the right questions to understand your space.
Discovery is about understanding your Product’s Space
What are you trying to achieve? Who are you trying to help? What problems are you trying to solve? Knowing the answers to these questions is critical to the success of your product. You might accidentally build something that finds success in a market you didn’t expect, but that success won’t last unless you recognise the opportunity and shift to take advantage of it.
Discovery is the act of mapping that space. It starts off as a black void. You know nothing, Jon Snow. Worse, you probably hold some pretty firm assumptions that could be holding you back. Working with your crack team, you’re going to shine a light into that void and start to understand:
- The space in which you operate: your audience, their needs, their current challenges, the cultural limitations and your competitors
- Your place in that space: how you’re perceived, what people like or hate about you, who your users are and how they are using (or not using) you
Your Discovery Team
The Product Manager should steer the discovery team’s efforts towards the most important areas of interest. The ideal discovery team consists of:
- User experience + researcher: Understand users’ perception of the space. What are they trying to achieve in the space? What holds them back? How are they using your product now? What other products are they using?
- Biz Dev + Data analyst: Understand the space from a quantitative perspective. How big is the opportunity in the space? Who are the big players and how much have they captured? How are people using your product now? Where are they dropping off in your pipeline?
- Product Management + Biz dev: What does the future of the space hold? How does it align with leadership’s vision? Where could you pivot to? How does it overlap with the other products in your organisation?
Discovery in action
The purpose of Discovery is to first Identify and then Understand:
- The space you want to work in, proving it’s a viable part of your biz plan
- The current and potential users in that space
- The tools and processes they use
- The challenges, problems and opportunities they face
We achieve this through statistical analysis, polls, surveys, workshops, shadowing their current processes and extensive user interviews. As we identify new items from the list above, we can assign them two scores:
- Impact – how important is this to the user or to us? How big of an issue is it? You could use modelling to assign a cash value or user polls to understand their importance, but often I’ve found it’s an estimated amalgam of all of those data points. T-shirt sizing (small, medium, large) can help, as the most important thing is the size of the Impact relative to the other items.
- Clarity – how well do we feel we understand this issue? Early on we’ll be learning a lot about a lot of things. As we become more expert our discovery can become more narrowed. Our aim is to get to a point where we’re not learning anything new with each discovery item. At that stage we’ve internalised the space.
We can use these scores to focus and prioritise our Discovery work. We should work to identify and clarify large-impact items quickly. We’ll undoubtedly surface new items as we clarify existing ones, but it’s worth taking some time every month to push into new areas you haven’t scratched the surface of, as these can uncover rich solutions that can be applied elsewhere.
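As a sketch of how those two scores can drive prioritisation, the snippet below ranks discovery items so that large-impact, low-clarity items surface first. The item names, the impact weights and the 1–5 clarity scale are all illustrative assumptions, not a prescribed model.

```python
from dataclasses import dataclass

# Hypothetical weights for T-shirt impact sizes; tune to your own space
IMPACT = {"small": 1, "medium": 3, "large": 8}

@dataclass
class DiscoveryItem:
    name: str
    impact: str   # "small" | "medium" | "large"
    clarity: int  # 1 (barely understood) .. 5 (fully internalised)

def priority(item: DiscoveryItem) -> int:
    # High impact and low clarity means "investigate this next"
    return IMPACT[item.impact] * (5 - item.clarity)

backlog = [
    DiscoveryItem("Upload speed complaints", "large", 2),
    DiscoveryItem("Competitor pricing model", "medium", 1),
    DiscoveryItem("Onboarding drop-off", "large", 4),
]

for item in sorted(backlog, key=priority, reverse=True):
    print(f"{item.name}: priority {priority(item)}")
```

The exact formula matters less than the habit: score everything the same way so the ranking is relative, and re-score as clarity improves.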
Delivery is about bringing value to your space
Now that you understand your space you can focus your efforts on delivering real value. It’s easiest to talk about this in terms of solving problems, but that’s an oversimplification that can limit us. Yes, a lot of work, especially early on, can be done removing known, easily identifiable roadblocks in your users’ processes. Social logins, faster uploads and larger storage space are all good examples of this. But you could also be delivering on an opportunity to allow users to do something they hadn’t considered before. To misquote Henry Ford:
“If I’d asked people what they wanted I would have built a faster Horse”
If you just focus on user problems, you’re going to be reactive to their needs rather than anticipating them. You’ll also solve for the existing problem, which will lead to local maxima. Looking for opportunities keeps you open to brand new ways of providing value that will really disrupt the market and revolutionise the way your users get their needs met.
Learn what to build with Experiments
As you increase the clarity of your space, it should become easier to tell whether you’ve provided value to it. But that doesn’t mean you know which features will provide that value. That’s where experiments come in. By running lightweight experiments you can quickly understand how to bring value to your space without risking lots of development time. When you actually build the thing you’ll have to ensure it can scale, conform to internal guidelines, and have a marketing plan and rigorous testing in place. With experiments, though, you can move much faster.
The aim of the experiment is to prove/disprove your main assumptions with as little effort as possible. How can you do something quickly to give you confidence that you’re headed in the right direction before you invest too much effort? Can you manually send an email to prove that your automated marketing solution will deliver results? Can you add a link that goes to a ‘coming soon’ landing page to see if people are even interested in your new functionality?
Early on this isn’t about testing that your user flow is accurate or that your design looks great, it’s about testing whether your high level mental model of the solution is in the right area. Once you’ve gained confidence there you can experiment with how that should be manifested.
Designing experiments can be hard
It can be hard to design experiments, for some pretty universal reasons:
- We feel we know (or management feels they know) the right solution, so why waste time experimenting?
- We often feel rushed into building something and getting it out the door
- We work deep rather than broad, designing a user flow from start to finish
- We are afraid that experiments mean putting a poor user experience in front of users
We know the solution
If you truly believe you know the right solution then implement it. If your product is in the simple space then it’s likely that you do know the right solution, in which case move straight to delivery. But be sure you’re confident in all aspects (mental model, user experience, user interface). You might know you need a way for users to communicate, but are you sure that’s via chat rather than a message board?
We need it now!
We’ve all felt that rush to deliver value today, whether it’s us pushing ourselves, management demanding change or a combination of both. It’s important to feel a sense of urgency, but it’s critical that we move deliberately. Once you’ve reality-checked your understanding of the space and the clarity of the issue, get a feel for how long it might take. If the fastest solution is going to take 4 weeks to build and the company has survived without it for the last 4 years, then adding another week to do a design sprint won’t kill anyone. If you need the solution soon, it’s even more important that you know what the solution is first.
Consider all of the options
Often people use experiments to prove that they’re right or wrong. It’s great to be invested in your product, but be wary of being invested in your solution. If you find yourself testing a mental model by building a deep user flow (multiple clicks/pages) then something might be off. Don’t try and build the first 10 steps for the one way you believe is right. Don’t prove that your option is right by building it and showing it increases the key metric, because you can’t compare it to the other options out there. They might move the metric much more!
Instead, try to build the first step of 10 different ways of implementing your model. You want to test these against each other. I’m a big fan of pushing to cover all the options. Even if you rule one out rapidly due to a well-proven truth, documenting that option should only take a few seconds. So consider experiments that cover all of the options, not just your first instinct: they might tell you something useful and they should be cheap to build!
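One cheap way to test options against each other, rather than in isolation, is to route users deterministically to one of several first-step variants. A minimal sketch, assuming hypothetical variant names and a simple modulo bucketing scheme:

```python
# Hypothetical first-step variants for "users need a way to communicate"
VARIANTS = ["chat", "message_board", "email_digest"]

def assign_variant(user_id: int) -> str:
    # Deterministic bucketing: the same user always sees the same variant,
    # so the options can be compared fairly against each other
    return VARIANTS[user_id % len(VARIANTS)]

# Simulate 300 users landing on the first step
counts = {v: 0 for v in VARIANTS}
for uid in range(300):
    counts[assign_variant(uid)] += 1
print(counts)  # an even split across the options
```

Real assignment usually hashes a stable user identifier rather than relying on sequential IDs, but the principle is the same: every option gets exposure, and you compare their metrics head to head.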
Working deep vs broad
Considering all of your options could seem exhausting – and it would be, if you work deep. The first experiments we want to run test the high-level mental model that we’re considering. These can be tested with paper prototypes, surveys or a lightweight code release to a few users. It’s important that we’re testing the first step here, the biggest risk. We want to validate that or rule it out before we proceed further, to avoid wasting time. Once we’ve validated it and identified the direction we want to move in, we can work deeper:
- Test our mental model
- Test our user flow
- Test our user interface
Experiments don’t mean poor quality
A common concern with experiments is that they will lead to terrible experiences for users, broken code and awful testing. The truth is the opposite – or it can be. By clearly defining your definition of done (DOD) for both experiments and full features, you give space for something quicker. If you only have a DOD for features, people will repeatedly cut corners, eroding your commitment to the DOD and degrading quality across the board. By being explicit here you get buy-in and raise quality where it matters.
For experiments to work best, I’ve found the following to be important parts of the definition of done:
- A clear end date. This stops experiments becoming tech debt and allows you to build faster. Use your data volume to guide you: you want just enough to inform you. In one example, I heard of a company that required the end date to be coded into the experiment so that it gracefully disabled itself on schedule.
- A primary metric that you’re trying to change. Often these will have an opposing metric in tension e.g. speed to register vs data captured, emails sent vs open rate, etc. Compare and prioritise your experiments by looking at the percent improvement of the metric vs the time required to build and deliver the full feature. The best version of this will link to the dashboard before the work is done.
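Those two parts of the definition of done can be sketched in code. The example below is a toy illustration, not a real experimentation framework: it gives an experiment a hard end date so it gracefully disables itself on schedule, and records the primary metric it is trying to move.

```python
from datetime import date

class Experiment:
    """A lightweight experiment with a hard end date and a primary metric.
    The class, names and fields are illustrative assumptions."""

    def __init__(self, name: str, ends_on: date, primary_metric: str):
        self.name = name
        self.ends_on = ends_on
        self.primary_metric = primary_metric

    def is_active(self, today: date) -> bool:
        # Gracefully disable once the end date passes, so the
        # experiment never lingers in the codebase as tech debt
        return today <= self.ends_on

exp = Experiment("coming-soon-link", date(2024, 3, 1), "clicks_per_visit")

if exp.is_active(date(2024, 2, 15)):
    print("show experimental link")    # still collecting data
else:
    print("show default experience")   # fell back automatically
```

Wiring the expiry check into the code path itself, rather than relying on someone remembering to remove the experiment, is what makes the end date enforceable.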
Bring it all together
Once you’ve run various experiments you’ll be armed with an array of options for what you want to deliver. It’s likely that you’ll merge aspects of different experiments. Working to your definition of done, you can then deliver the real value to users. This work is a greater investment, but by now you’ve tackled most of the riskiest assumptions, and your work on the experiments can better inform your timelines, so you can make much more accurate predictions about when things should be complete.
So what does this look like?