Why You Should Aim For a 20-40% Win Rate

As more businesses harness the power of experimentation to enhance user experience, increase revenue, and outpace the competition, clients frequently ask, “What’s the optimal success rate for an experimentation program?”

Success rate, commonly referred to as “win rate,” is the percentage of experiments that reach statistical significance with a positive ROI, out of all the experiments conducted during a specified period (e.g., per quarter).
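The definition above can be expressed as a short calculation. This is a minimal sketch, assuming a hypothetical list of experiment records with `significant` and `roi` fields; it is not tied to any particular testing platform.

```python
def win_rate(experiments):
    """Percentage of experiments that were both statistically
    significant and ROI-positive, out of all experiments run."""
    if not experiments:
        return 0.0
    wins = sum(
        1 for e in experiments
        if e["significant"] and e["roi"] > 0  # a "win" needs both
    )
    return 100 * wins / len(experiments)

# Hypothetical quarter of five experiments:
quarter = [
    {"significant": True,  "roi": 0.12},   # win
    {"significant": True,  "roi": -0.05},  # significant, but negative ROI
    {"significant": False, "roi": 0.08},   # inconclusive
    {"significant": True,  "roi": 0.03},   # win
    {"significant": False, "roi": -0.02},  # inconclusive
]

print(f"{win_rate(quarter):.0f}%")  # 2 wins out of 5 -> 40%
```

Note that a statistically significant result with a negative ROI counts against the win rate, even though (as discussed below) it still delivers value by stopping a bad experience from shipping.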

Bookend success rates (i.e., 0% or 100%) are indicators of either not researching ideas enough or researching them too much. A cursory search of articles published on the topic provides little clarity about the optimal success rate (suggesting wide ranges of 10-60%). Since each experimentation program has individual strengths and limitations, how do you determine the right target for your program?

Most companies try to achieve a success rate of somewhere between 20-40%. However, achieving this optimal rate is a balancing act among different types of experiments.

How Do You Determine the Right Mix of Experiment Types?

Experiments, in this context, are bucketed into four main types: Naive, Moderate, Surefire, and Bold. Naive experiments are under-researched and unlikely to succeed. Moderate experiments are a safe balance between research and risk. Surefire experiments, while most likely to succeed, are over-researched and more time-intensive to move from idea stage to launch. Bold experiments are more creative, data-driven disruptors with an unknown likelihood of success.

Balancing Surefire, Moderate, and Bold experiment types is the key to achieving your program’s optimal success rate. We don’t recommend pursuing Naive ideas.

Running a mix of more Bold, hypothesis-backed experiments, fewer Surefire experiments, and some Moderate experiments sprinkled in will likely put your success rate closer to the 20-30% range. If you really want to reach the 30-40% range, prioritize Surefire experiments and cut back on those Bold testing ideas.

It’s important to remember that when experiments show conclusive negative results, they save you from rolling out bad experiences and harming your bottom line. Most importantly, view the whole process of determining the optimal success rate as a learning experience about your customers! Learn more in our previous blog post “How to Analyze Inconclusive Test Results.”

Conducting the right amount of research and finding the right mix of experiment types for your program are both very “Goldilocks”—too much one way doesn’t cut it and going too far in the other can also leave you unsatisfied.

So, now that you understand benchmark success rates, how do you logically increase your wins? It’s important to choose a success metric that makes sense for your test. For example, if you test a website’s “Contact Us” form but pick revenue as your main KPI, you probably won’t get a “successful” test. Need help? We have a blog post on that, too!

One Metric is Not Enough

When analyzing your experimentation program, don’t measure success rates in a vacuum. Instead, combine them with several other metrics (e.g., testing velocity, development time, and time to production) for insights into the whole program.

[NOTE: Stay tuned for our forthcoming blog post that sheds more light on this topic: “The Whole is Greater than the Sum of its Parts.”]

Achieving Your Optimal Success Rate

Deciding which test to run can be tricky, as it requires you to navigate stakeholder requirements, budgeting constraints, and team dynamics. Our Evolytics testing and experimentation strategy specialists are here to help you find the right testing balance and optimal success rates to meet your business needs. Get in touch!

Before You Go

If your testing program is ready to evolve, or you’re unsure where to turn with the sunset of Google Optimize later this year, get in touch with us. Evolytics has a team of A/B Testing and Experimentation experts who can guide you through the process of migrating your testing program to a new testing platform that fits your needs and stack requirements. With our expert guidance, these recommended testing platforms can help you achieve your target success rate by objectively ranking potential tests with a prioritization score. Tests are ranked on multiple custom parameters such as confidence, importance, and ease, letting you set the standard for your business.
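To make the ranking idea concrete, here is a minimal sketch of a prioritization score built on the confidence, importance, and ease parameters mentioned above. The equal-weighted average and the 1-10 rating scale are illustrative assumptions, not any specific platform’s formula, and the test ideas are hypothetical.

```python
def priority_score(confidence, importance, ease):
    """Average of 1-10 ratings; a higher score means run the test sooner.
    Equal weighting is an assumption; real programs may weight parameters."""
    for rating in (confidence, importance, ease):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return (confidence + importance + ease) / 3

# Hypothetical backlog of test ideas, scored and ranked:
backlog = {
    "new CTA copy":      priority_score(confidence=8, importance=6, ease=9),
    "checkout redesign": priority_score(confidence=5, importance=9, ease=2),
}
ranked = sorted(backlog, key=backlog.get, reverse=True)
print(ranked)  # highest-priority test idea first
```

In this sketch the easy, high-confidence copy change outranks the important but risky redesign, which mirrors how a Surefire-leaning program would sequence its backlog.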

Contact Us

Written By


Tracy Burns-Yocum

Tracy Burns-Yocum is an Analyst I on our Experimentation & Strategy team. She conducts analyses and identifies trends that inform strategic business decisions for clients. She is Google Analytics and Amplitude certified, with additional training in Tableau, SQL, and Python. Tracy also has a background in research on human behavior.