
Tyner Blain - Scott Sehlhorst
Software product success.

Minimum Valuable Problem

Fri, 07/22/2016 - 13:38

redacted use case dependency thumbnail

Defining and building a good minimum viable product is much harder than it sounds.  Finding that “one thing” you can do, which people want, is really about a lot more than picking one thing.  It is a combination of solving the minimum valuable problem and all of the other things that go with it.  Solving for both the outside-in needs and the inside-out goals is critical.

Starting with Icebergs

image of iceberg showing the massive hidden parts

Rich Mironov’s great article, the DIY Illusion, talks about the importance of focusing your team on building what is important to build (and not building something more easily acquired in other ways).  Imagine your team is building a mobile app.  Now imagine your team is building – from scratch – a CRM system to allow you to track all of the users who install the app.  Or imagine they are building a ticketing system – from scratch – to allow you to track development team progress on feature requests and bug fixes.

context of framing

I introduced Andy Polaine’s concept of designing in different contexts in an article about roadmaps and feature-lists last year.  The same pattern / concept applies here.

Rich’s article describes a micro-version of the classic buy, build, partner decision. When it is your team making decisions about dev-ops or other infrastructure that they need, this is exactly what it feels like and looks like.

Pop up to the broader organization-level context, and now it is the classic MBA version – do we build a new product to complete our portfolio?  Or do we partner with someone else to include their product?  Or maybe acquiring that partner (or just the product) makes the most sense.

Both of those decisions are firmly in the inside-out side of thinking about product.  What about the outside-in framing?  Your customers are making buy, build, partner decisions about your product.  How do you make sure the right answer for them is “buy?”

another iceberg - emphasizing what is hidden

An important point in Rich’s article is that the work you need to do (to roll your own <insert system here>) is much larger than a shallow analysis would lead you to believe.  The same is true about defining a minimum viable product.  Your customers will need to solve more than the single problem on which you begin your focus.

Minimum Valuable Problem

I’m going to spend the next couple weeks talking only about minimum valuable problems, and not minimum viable products, as an experiment to see if it accelerates a change in thinking with my team.  [I dropped the term first in a meeting with executives yesterday (as of when I’m typing) explaining that our product is focused on completely addressing the minimum valuable problem, and got some head nods but no direct commentary.]  If you want to know the results, ask in the comments on this post.

In my mind, I remember reading Steve Johnson quoting Saeed Khan as saying that a minimum viable product is, literally, “the least you could do.”  I hope it’s true – I love that quote.  I don’t know if that’s actually where I heard it, but let’s make it quotable, and see if some tweets cause the original author to magically appear.  An MVP is literally the least you could do with your #product.

US quarter featuring the state of Texas

Why make the distinction between product and problem?  Two reasons – one philosophical and one practical.

Philosophical Soapbox

One thing my clients regularly value is that I join their team with a “fresh set of eyes,” bringing an external perspective on what they are doing and plan to do.  It affords me an opportunity to help shift the perspective of the team from inside-out to outside-in – in other words, to being driven by the needs of the market.  At the product level of context, this usually means being driven by the problems a particular set of users are trying to solve.  Introducing problem as a totem in many conversations helps reinforce and subtly shift conversations over time.  The longer I work with a particular team, the more I see the results of this.

When people talk about the product, they are usually talking about “this thing we (will) build.”  That’s not nearly as valuable to me, in assuring product-market fit, as when people are talking about the problem we are solving – particularly when I’m on a team in the early discovery and definition phases.

We get more value from conversations about why someone will use the product than discussions around how the product will work.  We get more value from conversations around how the product will be used than from discussions around how much it costs to make the product.

Practical Thinking

A huge challenge in communication is one best described by a sketch of Jeff Patton’s from his book User Story Mapping.

three people discovering they don’t ACTUALLY agree [for a larger version, buy Jeff’s book]

When people talk about “the product,” in my experience, everyone in the room will happily carry the conversation forward, each referring to “the product” with no one clarifying precisely what they mean.

When people talk about “the problem” we intend the product to be used to help solve, it is common for the conversation to reiterate, refine, or at least reference which problem we’re speaking about.

I don’t know why these play out in different ways, but they seem to do so.  Perhaps we’ve got a cognitive psychologist in the audience who can shed some light?

Regardless, the minimum valuable problem seems to be something people are comfortable clarifying in conversation.

Solving the Problem

I get to stand on the shoulders of another giant, Gojko Adzic, and his book, Impact Mapping, as my personal product management jiu jitsu.  Gojko’s approach helps me very quickly define what it means to my user to solve his or her problem.

By focusing on the outcomes (there are, in fact, many ways to get to this – I just happen to find Gojko’s to be compelling), you discover that solving the problem you originally intended to solve may not be sufficient.
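To make that outcome-orientation concrete, here is a minimal sketch (my own illustration, not from Gojko’s book) of an impact map held as a nested structure – goal, actors, impacts, deliverables – with a helper that traces every candidate deliverable back to the outcome it is meant to serve.  All of the goal, actor, impact, and deliverable names are hypothetical.

```python
# A minimal, hypothetical impact map: goal -> actors -> impacts -> deliverables.
# The content is illustrative only; a real map comes out of the mapping exercise itself.
impact_map = {
    "goal": "Reduce time-to-resolution for support issues by 30%",
    "actors": {
        "support engineer": {
            "resolves related issues together": [
                "group issues by root cause",
                "bulk-update issue status",
            ],
            "handles urgent issues first": [
                "sort queue by urgency",
                "urgency-aware assignment",
            ],
        },
        "support manager": {
            "spots bottlenecks earlier": [
                "queue-aging dashboard",
            ],
        },
    },
}

def trace_deliverables(impact_map):
    """Yield (deliverable, impact, actor, goal) so every backlog item
    can be traced back to the outcome it is meant to produce."""
    goal = impact_map["goal"]
    for actor, impacts in impact_map["actors"].items():
        for impact, deliverables in impacts.items():
            for deliverable in deliverables:
                yield deliverable, impact, actor, goal

for deliverable, impact, actor, goal in trace_deliverables(impact_map):
    print(f"{deliverable!r} -> helps {actor} {impact} -> {goal}")
```

Seeing several deliverables hang off a single impact – or several actors hang off a single goal – is often the first hint that the problem is bigger than the one item you planned to build.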

Your minimum viable product may be solving half of a problem.  Solving half of a problem is creating half of a product.  There may be cases where this makes sense – splitting stories, incremental delivery, etc.  But it doesn’t make sense for very long.

How often are you interested in purchasing half a solution to a problem you’re facing?  When the brake lights on your car go out, would you ask the mechanic to just fix one of them right now, and schedule a follow-up visit next month to repair the other one?

Defining the minimum valuable problem is defining the minimum viable product.

The minimum valuable problem is one you completely solve.  You may only solve it in a very narrow context or scope.  You may only solve it for a small group of users or a subset of your ultimate market.  You may only solve it adequately, without creating a truly competitive solution.  But for that one customer, in that one situation, you have completely solved it.

Remember – you grow revenue one customer at a time.  This sounds like a platitude, but reverse the perspective.  That one customer is considering multiple vendors for that one sale.  Will the customer pick the vendor who is mediocre (and also mediocre for other customers), or will the customer pick the vendor who is perfect for them (even if imperfect for other customers)?

The Problems Behind the Problem

dependency map of user stories [larger]

The above diagram is a real view of the dependencies of an ecosystem for a product.  It is blurred out because it is real.  What it shows, in the upper left corner, is the “target problem” to be solved.  This target is a candidate for minimum valuable problem.

Each connection in red says “requires” because, for a given user, solving the problem in their blurred-out box requires assistance from another user.  That other user then has to solve the problem of “help the first user.”  Or it could be that there is an operational context like “monitor performance of the [first user group] solving their problem, so we can fine-tune the solution.”  When you’re doing service work, or designing whole products, you see (or should see) this on every engagement.

In the ecosystem of a complex problem-space, we discover that there are multiple parties associated with adequately solving the user’s problem.  Each color reflects a different user involved in solving the focus problem for the focus user.  This web of interdependent problems is the rest of the iceberg.
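Because the real diagram is redacted, here is a minimal sketch with hypothetical users and problems showing how those “requires” connections form a graph, and how walking the graph from the target problem surfaces the rest of the iceberg – every dependent problem that also has to be solved (or consciously scoped out).

```python
from collections import deque

# Hypothetical "requires" edges: solving the key's problem requires the
# problems listed in the value to be solved by (usually) other users.
requires = {
    "analyst: produce monthly report": [
        "admin: grant data access",
        "ops: keep data pipeline healthy",
    ],
    "admin: grant data access": ["ops: provision accounts"],
    "ops: keep data pipeline healthy": ["ops: monitor pipeline performance"],
}

def iceberg(target, requires):
    """Breadth-first walk of the dependency graph from the target problem,
    returning every other problem that must also be solved."""
    seen, queue = set(), deque([target])
    while queue:
        problem = queue.popleft()
        for dependency in requires.get(problem, []):
            if dependency not in seen:
                seen.add(dependency)
                queue.append(dependency)
    return seen

# The target is the candidate minimum valuable problem; the returned set is
# the hidden part of the iceberg that has to be addressed or scoped out.
print(iceberg("analyst: produce monthly report", requires))
```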

onion diagram

An onion diagram for this same problem space allows us to also very quickly see (even with this redacted version) that there are three systems (or system interfaces) through which different users directly or indirectly use our product to solve their problems.

Bridging the Process Gap

These views of the problem space help us assure that we are solving a valuable problem – which is my preferred definition of a viable product.  As a bonus, they help bridge the gap between the abstract thinking of a product management team and the concrete thinking of the engineering team who will create the solution and the executive team who wants to “know what it is.”


Professional Services and Improving Your Product

Fri, 06/03/2016 - 14:14

Prioritization at whiteboard

How do you work with professional services, consulting, field engineers, etc. to make your product better? Do you just treat their inputs as yet another channel for feature requests, or do you engage them as an incredibly potent market-sensing capability?

Conversation Starter

I received an excellent and insightful question from one of my former students in DIT’s product management degree program (enrollment for the next cohort closes in a month).  This student is now a VP of product, and kicked off a conversation with me about best practices for establishing a workflow for product managers to collaborate with professional services teams to improve the product.  I’ve seen several companies try different ways to make this work, with one consistent attribute describing all of the approaches – they were not-visibly-expensive.

Two nights ago I was chatting with another colleague about how his team has been tasked with delivering a set of features, and not a solution to the underlying problem.  As a result, he’s concerned about potential mis-investment of resources and the possibility of not genuinely solving the problem once the team is done with their tasks.

Combining the two conversations, I realized that there’s a common theme.  Looking at how I’ve engaged with professional services folks, I find I’ve had success with a particular approach (one that would also help my colleague).

First, let’s unpack a couple typical ways I’ve seen companies engage “the field” to get market data, and think through why a different approach could be better.

Just Ingest

tickets for a short order cook

One team I worked with managed their product creation process (discover, design, develop) within Atlassian’s Confluence (wiki) and JIRA (ticketing) systems.  Product managers and owners would manage the backlog items as JIRA tickets.  Bugs were submitted as JIRA tickets, and triaged alongside feature requests.  There was a place where anyone (deployment engineers, for example) could submit feature requests based on what they were seeing on-site at customers.  Product managers would then “go fishing” within that pool of tickets looking for the next big idea.  This process did not have a lot of visible overhead, but suffered from a “throw it over the wall” dynamic, a lack of collaboration, and a well-established pattern (not just a risk) of good ideas lying fallow in the “pool” waiting to be discovered, evaluated, and implemented.

stack of tickets that all look the same

From the product team’s perspective, going fishing was like looking for needles in a haystack.  The cognitive effort required to parse through low-value tickets and duplicates makes it hard to apply critical thinking to any given idea.  So, in addition to good ideas that were never discovered, many were touched but passed over.

This is certainly better than “no information from the field” but it emphasizes data and minimizes insight.
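For a sense of how thin this channel is, the “go fishing” step often amounts to little more than a query against the ticket pool.  Here is a minimal sketch against the JIRA REST search API, assuming a hypothetical JIRA Cloud instance, project key, “field-request” label, and API token – none of these are from the team described above.

```python
import requests

# Hypothetical JIRA Cloud instance and credentials.
JIRA_URL = "https://example.atlassian.net"
AUTH = ("pm@example.com", "api-token")  # basic auth with an API token

# JQL for the "pool" of field-submitted requests, newest first (hypothetical
# project key and label).
jql = 'project = PROD AND labels = "field-request" ORDER BY created DESC'

response = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": jql, "maxResults": 50, "fields": "summary,labels,created"},
    auth=AUTH,
)
response.raise_for_status()

for issue in response.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])

# Every ticket still has to be read, de-duplicated, and interpreted by hand -
# which is exactly the needle-in-a-haystack problem described above.
```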

High Fidelity Connections

cook with ticket

One team I worked with had a product owner who formerly worked as a field support engineer.  This product owner reached out to her colleagues in the field regularly both socially (cultivating her network, and maintaining genuine connections with friends) and professionally – asking about trends, keeping her experience “current by proxy” as she realized her direct experience would grow stale with time.

This narrow-aperture channel was very high fidelity, but low in volume and limited in breadth of coverage.

Each idea that came in received thoughtful consideration, and the good ones informed product decisions.  The weakness of this approach was lack of scale; it suffered from the danger of extrapolating “market sensing” from a narrow view of a subset of the market.  Because this “just happened” within the way the product owner did her work, it appeared to accounting to be “free.”  Many good ideas were missed, presumably, because they didn’t happen to come to the attention of this product owner’s network.

I put this in the bucket of good (and better than just ingesting), but still falling short of the objective of a product manager.

A product manager’s goal is to develop market insights, not collect market data.
  • The first approach, while easy to institutionalize, had so much noise that you couldn’t find the signals.
  • The second approach had a great (data) signal-to-noise ratio, but the signal was constrained by limited bandwidth, and only worked because of the product manager’s unique background, approach, and interpersonal skills.
Manifestation Shows Its Face Again

Another truism in product management is that people tell you about how problems manifest, and ask you to address those manifestations.  They very rarely tell you which problem needs to be solved – because they don’t think about it that way.  Product people think about underlying problems.

woman blowing her nose

When your nose is runny, you reach for a tissue to clean up the mess.  You’re treating the lowest-level symptom – a manifestation of the problem.  Some people will also reach for a decongestant, to stop their nose from running.  This too is treating the manifestation of the problem.  The underlying problem is illness, or allergies, or "something medical."

Software problems are experienced the same way.

"I need to be able to see more issues on the screen at one time, because it is time-consuming to move through page after page of issues, and go back and forth to reference other related issues."

This is the software version of asking for a tissue.  If you dig into the problem, you will discover "the user needs to address groups of related issues simultaneously, and the UI does not help to collect and process them together."  Suddenly, you have different items in your backlog.

"I need a way to see which problems are urgent so that I can address them first – please add an icon to the display of each urgent issue in the issue list.  Then, when I scan through the pages of issues, I can find the most urgent ones and address them first."

Another tissue issue.  When you delve into the problem and find "the user needs to be able to address the urgent issues first, even though other non-urgent issues are treated first-in, first-out," you have an opportunity to re-sort the list to put the urgent issues first.  You also have the opportunity to understand whether there is a team of people working against a queue of issues – and to incorporate urgency into how those issues are assigned to individual users.

When inputs are coming from the field, in my experience, a large portion of them are passed on "verbatim" as customer requests, without parsing by the services professionals who captured them.  Most of the remainder are augmented by well-meaning team members who incorporate proposed solutions into the feedback.  Which is great – except they consistently ask for tissues, and helpfully suggest exactly how the tissue might best be implemented in our product.  Problem solving is a character trait that makes great professional services people great.  Problem discovery and abstraction is not often a hiring criterion for folks in the field.

Collaborative Workshops

collaborating to understand problems

There is another approach I’ve used with a few teams to effectively generate product insights from the real-world experience of professional services team members.  The challenge with this approach is that the expense is visible – you’re pulling people out of the field, taking them off their accounts for a day or two.  On one sufficiently high-profile project, a workshop date was scheduled (~5 weeks in advance) and people were “told” to come.  They planned travel, managed customer commitments, etc.  We booked a large room for two days and rolled up our sleeves.  On another project, we opportunistically scheduled a half-day session the day after an all-hands quarterly meeting that brought everyone into the office anyway.  The cost of the “extra day” was a lot lower than the cost of a standalone event.

I’ve run two types of workshops that were very effective for this.  The first one frames problems in a broader context, and the second one explores alternatives and opportunities in a more targeted exercise.  Ironically, the tighter targeting leverages divergent thinking as well as convergent thinking, while the broader framing is purely convergent.

The first workshop is a co-opted customer journey mapping exercise.  I say co-opted because, while I go through very many of the same steps, I am not attempting to improve the experience – I’m attempting to understand the nature of, and relative importance of solving, the problems a customer faces through the course of doing what they do while interacting with our product.  Without going into the specifics of running the workshop, the high level looks like the following:
  • Start out with a straw-man of what you believe the customer’s journey looks like – a storyboard is a good tool for making a visceral, engaging touchpoint for each step in the journey.  Review with the team and update the steps (add missing steps, re-order as appropriate, remove irrelevant steps, and tag optional steps).
  • [Might not be needed, but worked when I did it] Start out with key personas identified, representing the customers for whom we are building product.  Workshop participants will be capturing their perspectives on the relative importance of problems from the point of view of those personas.
  • Within each step, elicit from the field all of the problems a customer faces at that step.
  • Have the participants in the workshop prioritize the relative importance of each problem within each step (the 20/20 innovation game works great for this).
  • Have the participants prioritize the relative importance of improving any particular step relative to improving any other step (Fibonacci story-pointing works well for this – a scoring sketch follows this list).
  • Record / take notes of the conversations – particularly the discussions where the participants are arguing about relative priority / relative importance.  Those conversations will uncover significant learnings that influence your thinking, and establish focused questions to which you will want answers later.  Before the workshop, you didn’t know which questions you needed to ask.
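As referenced in the prioritization steps above, here is a minimal sketch of one way to roll the participants’ scores up into a ranked list of journey steps.  The step names, participants, and point values are hypothetical; the real value is in the arguments behind the numbers, not the arithmetic.

```python
from collections import defaultdict

# Hypothetical Fibonacci-style scores per participant per journey step.
votes = {
    "participant A": {"discover": 8, "purchase": 3, "onboard": 13, "support": 5},
    "participant B": {"discover": 5, "purchase": 2, "onboard": 13, "support": 8},
    "participant C": {"discover": 8, "purchase": 3, "onboard": 8,  "support": 8},
}

# Sum each step's points across participants.
totals = defaultdict(int)
for scores in votes.values():
    for step, points in scores.items():
        totals[step] += points

# Rank steps by total points; ties and near-ties are exactly the
# conversations worth capturing in the workshop notes.
for step, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{step:10s} {total}")
```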
The heavy lifting comes later, in processing all of this information into multiple market hypotheses.  What is important is that you are gathering insights about the problems from the best-informed people, not simply processing a stack of tickets (or tissues).

The second workshop is an impact mapping workshop.  It focuses on a specific task that users are performing, and really dives into why they are doing that task.  This activity applies both convergent and divergent thinking exercises to understand not only what people do (when using your product), but why they are doing it and how they measure success at their task.  From there you can discover alternative ways to solve the same problem, define measures of success for your product, and determine how to instrument your product and what to measure about it.  If you haven’t already bought Gojko Adzic’s book on Impact Mapping, just do it now.

Conclusion

Professional services folks have massive amounts of customer data and insight – they only lack the (product management) skills to transform that insight into something usable by a product team.

The best way I’ve found to get value from that insight, in a repeatable way across teams and individuals, is to run workshops that force teams to articulate what the customers are doing with the product (what their goals and challenges are).

When asking the questions this way, you get the answers you need.  By doing it in a collaborative workshop, you get more and better contributions from each of the team members than you would get through a series of interviews.
