Introducing DRICE: a modern prioritization framework
Boost your experiment win rate and help your team drive more impact
👋 Hey, I’m Lenny and welcome to a 🔒 subscriber-only edition 🔒 of my weekly newsletter. Each week I tackle reader questions about building product, driving growth, and accelerating your career.
If you boil down a product manager’s job to just one task, it’s making sure everyone on your team knows what to do next—and that it’s the most impactful work they can be doing.
In other words, prioritizing.
If you can get better at prioritizing, you can grow your leverage as a product leader and increase the impact you and your team drive. And while you can’t double the speed at which engineers build or designers design, by picking projects even a little bit better, you can double your team’s impact.
Below, Darius Contractor and Alexey Komissarouk share a brand-new framework that will make you significantly better at prioritizing. They also include plug-and-play templates, real-life case studies, and great memes. I wish I had this years ago.
Darius Contractor is the former head of growth at Dropbox, Facebook, Airtable, and Vendr and is currently Chief Growth Officer at Otter.ai. You can find him on X, LinkedIn, and Darius.com. Alexey Komissarouk, a former growth engineering lead at MasterClass and Opendoor, is currently writing a book and advising teams on growth engineering. You can find him on X, LinkedIn, and alexeymk.com. Alexey will be speaking about growth engineering at Reforge this Thursday.
With limited time and resources, what is the highest-value thing a product manager can do for their team?
Prioritization. Ruthless, rigorous prioritization.
The guide below contains everything you need to run a reliable prioritization process. It distills techniques from the authors’ combined 35-plus years of driving growth at Dropbox, Facebook, Airtable, Opendoor, and MasterClass—techniques that, in our experience, doubled our teams’ impact.
The frameworks are primarily intended for growth teams, but the practices apply to any team focused on moving a business metric. This prioritization process should be run every planning cycle.
Below, we’ll walk you through:
Running the industry-standard prioritization process, RICE (Reach, Impact, Confidence, Effort)
Refining the top ideas using “DRICE” (Detailed RICE)
Making the most of these two frameworks
Throughout this guide, we’ll assume you’ve already collected tons of potential ideas—through brainstorming, sifting through data, looking at sales/CX calls, your past backlogs, etc. As a rule of thumb, you’ll want to have at least 5x as many ideas as you could reasonably build in the next time period (typically a quarter) before starting to prioritize.
Part 1: RICE
Definition
In short, RICE-ing is the process of T-shirt scoring (i.e. S/M/L) your ideas according to their:
Reach: How many of your customers would experience the new idea
Impact: If the idea pans out, how much it would affect conversion
Confidence: How likely it is to work
Effort: How much time it would take to validate (from the engineering team)
The process is best illustrated by example.
Example: “Checkout with PayPal” vs. “free-trial promo codes”
Say your team is choosing between two potential ideas:
Checkout with PayPal. Several potential customers have emailed complaining that they will only buy with PayPal.
Free-trial promo code. Your podcast marketing manager wants to be able to give away promo codes for an “X-day free trial.”
We can compare the ideas based on their qualities:
Checkout with PayPal:
Reach: all. Potential customers must go through checkout and are exposed to the option.
Impact: small. Most customers are comfortable with credit cards.
Confidence: likely. Competitors offer PayPal; customers have asked for it before.
Effort: medium. Will need to integrate PayPal through the Braintree API.
On the other hand, a free-trial promo code is:
Low-reach: Only about 15% of your customers come in through podcasts.
High-impact: The company has had free trials convert well in past tests.
Medium-confidence: Free-trial promos are common but not ubiquitous.
High-effort: The free-trial code is messy; the team will need a month to clean it up.
Eyeballing the comparison suggests we’re better off with Checkout with PayPal. Even though the potential impact isn’t as high, the change is likelier to work, faster to implement, and will reach more potential customers.
Processes
The process above works well with two ideas, but what if we have tens of ideas to score and rank? A spreadsheet keeps things organized. Here’s a minimalistic template from Alexey, and a batteries-included Airtable template (called EVELYN) from Darius.
Sizing heuristics
It’s not always obvious exactly what “high impact” versus “medium impact” means.
At this stage, these estimates are meant to be low-fidelity, directional SWAGs (scientific wild-ass guesses) anyway. Rather than nailing the categories, compare between items—do all the M’s truly feel smaller than the L’s?
That said, here are some heuristics for evaluating an idea at this stage.
Reach is relatively straightforward to estimate: either your users will experience the change or they won’t. Effort estimates come with experience on the engineering side. Always strive to run the laziest test possible, asking “What is the least amount of engineering work required to test this idea?” Impact and Confidence have more subtlety, however.
Impact
An idea should be higher-impact if:
It addresses a sensitive-to-change portion of the customer journey: above the fold on the landing page, pricing, etc.
The change is particularly significant—a “big swing”
Confidence
An idea should be higher-confidence if:
You’ve already had the same concept work elsewhere: “Emphasizing the money-back guarantee was a winner on the homepage; let’s try it during checkout.”
One or more of your competitors already employ this tactic (they’ve likely tested into it): “Our competitors offer a monthly pricing plan; we should try the same.”
Your customers are explicitly and repeatedly complaining about it: “Why do I have to create a new account? I wish I could log in with Google.”
The product manager should have the context to fill in Impact, Confidence, and Reach. The Effort column can sometimes be filled in by an experienced product manager, but is more commonly owned by a tech lead.
Once a T-shirt-size prioritization is complete, you can sort the list based on the implied score (with T-shirt sizes translated to numeric values to make the math work):
Score = (Reach % * Impact % * Confidence likelihood %) / Weeks of effort
Sort all of the available ideas using their scores. Take about twice as many ideas as you’d have time for during the period you’re planning for (typically a quarter). This is your preliminary shortlist.
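To make the scoring step concrete, here’s a minimal sketch in Python. The T-shirt-to-number mappings are illustrative assumptions, not canonical RICE values; tune them to your business (the spreadsheet templates above do the same translation).

```python
# Illustrative T-shirt-to-number mappings (assumptions, not canonical values).
REACH = {"S": 0.15, "M": 0.50, "L": 1.00}       # fraction of customers exposed
IMPACT = {"S": 0.02, "M": 0.05, "L": 0.15}      # conversion lift if the idea works
CONFIDENCE = {"S": 0.25, "M": 0.50, "L": 0.80}  # likelihood the idea works
EFFORT = {"S": 1, "M": 3, "L": 6}               # weeks of engineering

def rice_score(idea: dict) -> float:
    """Score = (Reach % * Impact % * Confidence %) / Weeks of effort."""
    return (REACH[idea["reach"]] * IMPACT[idea["impact"]]
            * CONFIDENCE[idea["confidence"]]) / EFFORT[idea["effort"]]

ideas = [
    {"name": "Checkout with PayPal",  "reach": "L", "impact": "S", "confidence": "L", "effort": "M"},
    {"name": "Free-trial promo code", "reach": "S", "impact": "L", "confidence": "M", "effort": "L"},
]

# Sort descending by score, then keep roughly 2x what you could build this quarter.
for idea in sorted(ideas, key=rice_score, reverse=True):
    print(f"{idea['name']}: {rice_score(idea):.4f}")
```

With these mappings, Checkout with PayPal scores about 0.0053 versus 0.0019 for the promo code, matching the eyeball comparison above.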
Validating your shortlist
Some shortlisted ideas will surprise you. RICE isn’t gospel; double-check and adjust the suspicious scores until you feel comfortable. Next, loop in your team, who will appreciate a chance to weigh in and offer helpful adjustments.
Even after the refinements, some ideas you had thought of as clunkers will end up above your favorites. This is the process working as intended. By establishing objective evaluation criteria, you have separated the wheat from the chaff.
Part 2: DRICE
Once your RICEd shortlist is ready, it’s time to invest in a higher-fidelity evaluation of your ideas. This assessment method is known as DRICE, a “Detailed RICE” estimate.
During DRICE, we will go from:
A 30-second estimate to a 30-minute estimate
A relative score (S/M/L) to a dollar estimate of expected annualized revenue
“Wouldn’t it be cool if” to “We are shovel-ready”
“Wait, 30 minutes per idea?” says the growth PM reading this guide. “We don’t have time for this. Our list is solid. Let’s be biased toward action and get started!”
Not so fast.
Teams that adopt DRICE notice a difference in the projects they end up prioritizing. Many “promising” projects fall apart under a half-hour investigation, while “nice to have” features that would otherwise have been left on the cutting-room floor turn out to be high-ROI winners.
Dropbox case study
A particularly clear example comes from Darius’s time at Dropbox, working on activating new Dropbox Business users. One proposed idea was a migration tool for Basic users, streamlining the process of getting started with a Business account. Initially, we didn’t think this would be a huge win, since the target users were only a subset of all sign-ups.
However, the DRICE investigation showed:
A large percentage of Business team creators were previously Basic users
A good chunk of teams had multiple users
Many teams had private files but not team files
A “choose folders to share with your team” modal could boost sharing significantly
The DRICE investigation significantly bumped the idea’s expected ROI. We built it. The Dropbox Business Migration Tool became the biggest activation win of the quarter. Without DRICE, we likely wouldn’t have prioritized it at all.
Overall, we found that Dropbox teams that adopted DRICE were able to move their key metric by twice as much as teams that stuck to a simpler prioritization process.
Like RICE, the DRICE process is best illustrated by example.
Example DRICE: Adding PayPal
We’ve already RICEd the PayPal project:
Reach: all. Potential customers must go through checkout and are exposed to the option.
Impact: small. Most customers are comfortable with credit cards.
Confidence: likely. Competitors offer PayPal; customers have asked for it before.
Effort: medium. Will need to integrate PayPal through the Braintree API.
Not bad. Here’s what a DRICE would look like:
Hypothesis
A potential customer segment prefers to pay with PayPal instead of credit cards and has been emailing to let us know. By adding PayPal as an option in the checkout flow, we will improve conversion by 2.7%, or an incremental $540k/year.
Impact estimate
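To make the arithmetic concrete, here’s a rough sketch of how the hypothesis pencils out. Only the 2.7% lift and the $540k figure come from the hypothesis above; the baseline revenue is a hypothetical number chosen for illustration.

```python
# Assumed baseline: ~$20M/year of checkout revenue (illustrative assumption,
# chosen so the numbers line up with the hypothesis above).
baseline_annual_revenue = 20_000_000

# Hypothesis: adding PayPal lifts checkout conversion (and thus revenue) by 2.7%.
conversion_lift = 0.027

incremental_revenue = baseline_annual_revenue * conversion_lift
print(f"${incremental_revenue:,.0f}/year")  # $540,000/year
```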
Engineering estimate
Effort: 7 days
We originally thought we’d have to integrate the Braintree API, but these days you can integrate PayPal directly from Stripe (see the sketch after this breakdown). This eliminates the bulk of the expected back-end work, leaving us with:
[1 day] Integrate PayPal button on front end according to designs
[1 day] Update receipt emails to support PayPal
[1 day] Migrate e-commerce back end to support a new payment type from Stripe
[2 days] Integration tests for key flows (purchase, refund, cancellation, chargeback)
[2 days] Buffer time
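For a sense of what “integrate PayPal directly from Stripe” can look like, here’s a minimal sketch using Stripe’s PaymentIntents API. The key and amount are placeholders, and PayPal through Stripe is only available for accounts in certain regions and currencies, so treat this as illustrative rather than drop-in code:

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder secret key

# Create a PaymentIntent that offers PayPal alongside cards.
# (PayPal via Stripe requires a supported account region and currency.)
intent = stripe.PaymentIntent.create(
    amount=4900,      # 49.00 in the smallest currency unit
    currency="eur",
    payment_method_types=["card", "paypal"],
)

# The client secret goes to the front end, where Stripe's Payment Element
# renders the PayPal button and handles the redirect flow.
print(intent.client_secret)
```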
Return on eng investment
$540k in annual revenue against an eng cost of 7 days (~1.5 weeks) yields an ROI of $540k ÷ 1.5 ≈ $360k per eng-week.
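The same division generalizes when you’re comparing many DRICEd ideas. A tiny sketch (the helper name is ours, not part of the template):

```python
def roi_per_eng_week(annual_revenue: float, eng_weeks: float) -> float:
    """Expected annualized revenue per week of engineering effort."""
    return annual_revenue / eng_weeks

# The PayPal example: $540k/year against ~1.5 eng-weeks (7 days, padded).
print(f"${roi_per_eng_week(540_000, 1.5):,.0f} per eng-week")  # $360,000 per eng-week
```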
Through a DRICE estimate, we reduced the engineering estimate and increased the success likelihood for our “Checkout with PayPal” idea. With 30 minutes each from the PM and engineering, we’ve taken the idea from marginal to “let’s definitely do it this quarter.” Note that we didn’t eliminate guesswork as part of this process; our guesses simply became educated.
Here’s a DRICE template to use when making your own estimates.
Components of a DRICE estimate
Now that we’ve walked through an example, let’s talk about what a DRICE involves and how to perform one. You’ll want:
Hypothesis
A clear, brief explanation of the idea that anybody without context will be able to understand, and a short justification for why you believe the idea will be effective.
Impact estimate