Product Feature Prioritization

Alex Borisenko
9 min read · Nov 24, 2022


Product prioritization is more than stacking features in a particular order; it requires balancing the numerous inputs and viewpoints of stakeholders. Whittling down the pile of requirements and feature requests for a sprint or a product roadmap is one of the most difficult aspects of a product manager’s work.

A strong product prioritization framework should let you muffle the voice of the loudest person in the room by relying on quantitative rankings, charts, and matrices whose values are closely tied to your customer input and product strategy.

Over the years I have spent creating products and gathering feedback from both business and consumer clients, I have learned how important it is to consistently define the right prioritization process. As a result, I always begin by determining the prioritization framework that fits the operational and business strategy of our organization.

Here’s a short guide to some of the popular frameworks (which happen to be the ones I have used the most), their pros and cons, and when you should choose each:

  • Value vs. effort
  • RICE
  • Kano
  • Story mapping
  • The MoSCoW method

1. Value vs. Effort

This straightforward prioritization method asks you to take your list of features and activities and quantify each of them with a value score and an effort score.

Keep in mind that the final ratings are only an approximation when using this method. There is a great deal of guesswork and opinion (backed by as much pertinent information as feasible) involved in quantifying the big questions that prioritization aims to answer: “Will implementing this feature/update promote our goals and metrics? With the resources at our disposal, is it practically possible to build it?”

Scoring techniques like value vs. effort give product teams a simple way to see a quantified list of priorities at a glance.
This approach to prioritization also invites constructive debate among stakeholders about what they consider value and effort to mean, which in turn helps product managers identify and fill any gaps in strategic alignment.
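To make the mechanics concrete, here is a minimal Python sketch of value vs. effort scoring. The feature names and numbers are invented for illustration, and ranking by the value-to-effort ratio is just one common way to collapse the two scores into a single order.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    value: int   # agreed-upon value score, e.g. 1-10
    effort: int  # agreed-upon effort score, e.g. 1-10

    @property
    def priority(self) -> float:
        # Higher value and lower effort push a feature up the list
        return self.value / self.effort

# Illustrative backlog (names and scores are made up)
backlog = [
    Feature("Social login", value=8, effort=3),
    Feature("Dark mode", value=4, effort=2),
    Feature("Offline sync", value=9, effort=8),
]

# Quick wins (high value, low effort) float to the top
for f in sorted(backlog, key=lambda f: f.priority, reverse=True):
    print(f"{f.name}: value={f.value}, effort={f.effort}, score={f.priority:.2f}")
```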

Pros of using value vs. effort

  • What constitutes value or effort is flexible. For some firms, effort may refer only to development work, while others also include implementation costs. Any type of organization, sector, or product can leverage such a flexible priority system.
  • It works well for alignment. By pushing teams to quantify and rate features numerically, it lets product teams decide which efforts matter more than others, and it keeps hazy assumptions and guesswork out of priority conversations.
  • In organizations with highly constrained resources, something as straightforward as a value vs. effort analysis enables teams to concentrate exclusively on the issues that will have the most influence on their business and product goals.
  • Because it doesn’t employ any complicated formulae or models, it is simple to use. All that is needed is an agreed-upon figure that is added to a single overall total.

Cons of using value vs. effort

  • It’s a game of estimates and guesswork, like any other prioritization exercise. This gives the individuals doing the estimation a lot of room for cognitive bias, so the final score for each feature may end up overstated or understated.
  • Disagreements might be difficult to settle when it comes time for product and development to vote on how high or low the value/effort scores should be.
  • It might be challenging to utilize for big teams that manage several product lines, components, and product teams.

2. RICE

RICE enables product teams to focus on the efforts most likely to have an impact on any given goal.

Each initiative or feature is scored according to the four factors: Reach, Impact, Confidence, and Effort (RICE). Here is an explanation of each factor and how it should be measured:

Reach
How many individuals will be impacted by this feature in a specific time frame? Reach is an important factor in order to avoid bias towards features you’d use yourself.
Example: customers per quarter, interactions per week

Impact
How much will this impact individual users? While impact can’t be measured precisely and is somewhat unscientific, the alternative is a mess of guesses and gut feelings. At Intercom, where this method originates, it is customary to choose from a multiple-choice scale: 3 - “massive impact”, 2 - “high”, 1 - “medium”, 0.5 - “low”, 0.25 - “minimal”.

Example: each customer who interacts with the feature will see a massive impact, so the score is 3.

Confidence
Based on available data and feedback, how confident are we about the impact and reach scores? Confidence is expressed as a percentage, and a multiple-choice scale is again used to keep decisions simple: 100% is “high confidence”, 80% is “medium”, 50% is “low”. “Anything below that is total moonshot.”

Effort
The total amount of time required from the product, design, and engineering teams. There are different ways to measure effort; at Intercom they stick to person-months (the amount of work one team member can complete in a month).

Then a formula is used to combine all of those individual numbers into a single score: Reach, Impact, and Confidence are multiplied together and the result is divided by Effort. With the help of this formula, product teams can generate a standardized number and apply it across any type of initiative that has to be included in the roadmap.

You will receive a final RICE score after applying this formula to each feature. The order in which you’ll address each idea, initiative, or feature can then be determined using the final score.
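For reference, here is a small Python sketch of how such a score could be computed. The calculation, (Reach × Impact × Confidence) / Effort, is the standard RICE formula; the initiative names and numbers below are purely illustrative.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE calculation: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical initiatives: reach per quarter, impact on the 0.25-3 scale,
# confidence as a fraction, effort in person-months
initiatives = {
    "Email digests":   dict(reach=500,  impact=3, confidence=0.8, effort=4),
    "Onboarding tour": dict(reach=2000, impact=1, confidence=1.0, effort=2),
    "API webhooks":    dict(reach=150,  impact=2, confidence=0.5, effort=6),
}

# Highest RICE score first: this is the suggested order of work
for name, scores in sorted(initiatives.items(),
                           key=lambda kv: rice_score(**kv[1]), reverse=True):
    print(f"{name}: RICE = {rice_score(**scores):.0f}")
```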

Pros of using the RICE method

  • The effect of innate biases on prioritization is lessened. By including a confidence dimension in the calculations, the focus shifts from attempting to predict success to quantifying how confident each team member is in the scores they assign. The discussion of a feature’s relative importance moves from “Here is how much this feature is worth” to “Here is how we are quantifying our level of confidence in each of these qualitative, speculative scores.”
  • It forces product teams to make their product metrics SMART (specific, measurable, achievable, relevant, and time-based) before quantifying them, as can be seen in the time-frame requirement of the Reach score.

Cons of using the RICE method

  • Dependencies are not considered in RICE scores. Product teams should view the RICE score as a tool rather than the final say in what should be built next because there are times when an initiative with a high RICE score needs to be deprioritized in favor of something else.
  • Estimates are never entirely accurate. RICE prioritization is merely a method for quantifying features while taking into account the level of assurance that teams have in their estimations.

3. Kano Model

The Kano model plots features on a two-axis chart. On the horizontal axis, you have the implementation values (to what degree a customer need is met). The features themselves fall into three buckets:

  • Must-haves or basic features
    Customers won’t even think to look at your product as a potential solution to their issue if it lacks these features.
  • Performance features
    Customer satisfaction will increase as investment in these increases.
  • Delighters or excitement features:
    These features are pleasant surprises that customers don’t anticipate, but which, when offered, produce a delighted response.

On the vertical axis, you have the level of customer satisfaction (the satisfaction values), ranging from complete dissatisfaction at the bottom to full satisfaction at the top. The implementation values on the horizontal axis range from needs that aren’t being met at all on the left to needs that are being fully met on the right.

By creating a Kano questionnaire and asking your customers how they would feel with or without a specific feature, you can obtain this customer insight.

The core principle of the Kano model is that customer satisfaction will increase as you invest more resources (time, money, and effort) in developing, innovating, and improving the features in each of those buckets.
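As an illustration, here is a deliberately simplified Python sketch of how questionnaire answers could be mapped to the buckets above. In the standard Kano questionnaire each respondent answers a functional question (“How would you feel if the product had this feature?”) and a dysfunctional one (“How would you feel if it did not?”); the full evaluation table has more answer combinations and more categories, so the collapsed mapping and example answers below are my own simplification.

```python
# Answers to both questions come from the same scale:
# "like", "expect", "neutral", "tolerate", "dislike"

def classify(functional: str, dysfunctional: str) -> str:
    """Collapse a pair of Kano answers into one bucket (simplified mapping)."""
    if functional == "like" and dysfunctional == "dislike":
        return "performance"   # satisfaction scales with investment
    if functional == "like":
        return "delighter"     # unexpected, but pleasing when present
    if dysfunctional == "dislike":
        return "must-have"     # absence causes dissatisfaction
    return "indifferent"       # customers don't care either way

print(classify("expect", "dislike"))  # must-have
print(classify("like", "neutral"))    # delighter
print(classify("like", "dislike"))    # performance
```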

Pros of using this Kano model:

  • Teams can learn from the Kano model questionnaire not to overestimate exciting features and to stop underestimating necessities.
  • It can assist teams in making better product decisions and better predictions about how the market will receive particular features and what their target audience needs.

Cons of using this Kano model:

  • It can take a while to complete the Kano questionnaire. You need to conduct a number of surveys that are proportionate to the number of customers you have in order to get an accurate representation of all of your customers.
  • It’s possible that customers might not fully understand the features you’re surveying them about.

4. Story Mapping

This product prioritization framework’s simplicity is one of its greatest strengths. It also shifts the emphasis from internal team and stakeholder viewpoints to the user’s experience.

You develop a series of successive buckets or categories that represent each stage of the user’s journey through your product along a horizontal axis. This enables you to consider how users interact with your product, from registration to building up a profile to using particular features.

Under each step, you then arrange the individual tasks or user stories in a vertical line, from top to bottom, in order of importance. This lets you order the features you’ll work on according to priority; the bottom section of the axis may be labeled “Backlog items” for the things you choose to put on hold.

Finally, you draw a line across the map to separate all of these stories into releases and sprints.
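If it helps to picture the map as data, here is a minimal Python sketch: journey steps run left to right, the stories under each step are ordered by priority, and a cut line slices the top rows into the first release. The step names, stories, and cut depth are invented for illustration.

```python
# Journey steps (left to right) mapped to stories ordered by priority (top to bottom)
story_map = {
    "Sign up":       ["Email registration", "Social sign-in", "Invite teammates"],
    "Build profile": ["Add avatar", "Import contacts", "Customize theme"],
    "Core feature":  ["Create a project", "Share a project", "Export to PDF"],
}

# "Drawing a line across the map": everything above the cut goes into the first
# release (a walking skeleton / MVP); the rest stays in the backlog for later.
CUT = 1  # number of top-priority stories per step in the first release

release_1 = {step: stories[:CUT] for step, stories in story_map.items()}
backlog   = {step: stories[CUT:] for step, stories in story_map.items()}

print("Release 1:", release_1)
print("Backlog:", backlog)
```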

Pros of using Story Mapping

  • It helps identify your MVP very quickly.
  • The experiences of the users are at the core.
  • Story mapping is a group activity that involves the whole team.

Cons of using this framework

  • It ignores prioritization factors outside the user journey, such as technical complexity and business value.

5. The MoSCoW method

By grouping features into four priority buckets, the MoSCoW method enables you to determine what matters most to your stakeholders and customers. The acronym MoSCoW, which stands for “Must-Have, Should-Have, Could-Have, and Won’t-Have features,” has nothing to do with the city.

Must-Have: These attributes are necessary for the product to function in any way. They are necessary and unavoidable. The product cannot be launched if any of these conditions or features are not met, making this bucket the one with the most pressing deadlines.

  • To access their account, users MUST log in, for instance.

Should-Have: These requirements are important and should still be delivered, even though they are not as time-sensitive.

  • Users SHOULD have the option to reset their password, for instance.

Could-Have: These features could be delivered within the time frame but are neither necessary nor critical. They are bonuses that, if included, would significantly increase customer satisfaction but have little effect otherwise.

  • Users COULD save their work directly to the cloud, for instance.

Won’t-Have: These are the least important requirements, features, or tasks (and the first to go when there are resource constraints). They will be considered for future releases instead.

The MoSCoW model is flexible and accommodates shifting priorities. Accordingly, depending on the kind of product, a feature that was once deemed a “Won’t-Have” might one day become a necessity.
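A tiny Python sketch of what a MoSCoW-tagged backlog could look like, reusing the example requirements above; the item list and bucket assignments are illustrative only.

```python
# Backlog items tagged with their MoSCoW bucket (illustrative examples)
backlog = [
    ("Log in to access account",        "Must-Have"),
    ("Reset password",                  "Should-Have"),
    ("Save work directly to the cloud", "Could-Have"),
    ("Offline desktop client",          "Won't-Have"),
]

# Walk the buckets in priority order and group the items under each one
for bucket in ["Must-Have", "Should-Have", "Could-Have", "Won't-Have"]:
    items = [name for name, b in backlog if b == bucket]
    print(f"{bucket}: {items}")
```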

Pros of using this prioritization framework:

  • It’s great for involving stakeholders with non-technical backgrounds in the process of prioritizing products.
  • A quick, simple, and intuitive way to let the team and customers know what the priorities are.
  • Helps with resource allocation thanks to the categorization of the features.

Cons of using this prioritization framework:

  • Tends to cause teams to overstate the number of Must-Have features.
  • It’s more of a release criteria exercise than a prioritization technique.

Note: I have primarily used this method when prioritizing the early backlog for my current project, Fiction: Reading Tracker. You can learn more about it at www.getfiction.app or, better yet, by downloading the Fiction for iOS app.

To summarize

There are more techniques and frameworks than the ones covered in this quick overview, since I tried to concentrate on those I have used most extensively. Overall, it’s not a good idea to completely replace human input in decision-making with product prioritization techniques; their purpose is to serve as a blueprint or structure for thinking about priorities. A strong framework can help everyone understand the bigger picture and the objectives for the final output, and it gives you a way to weigh the various options that can be considered for a particular feature or concept.


Alex Borisenko

Creating tools to improve readers’ efficiency and student productivity. Writing about reading, technology, and product management.