A field guide to experimentation in spite of coronavirus and finding the new normal

When the effects of the coronavirus pandemic began showing up in my datasets, I struggled to find new answers for my clients. Each brand had its own unique challenges:

  • How do I test with any statistical validity if traffic plummets? 
  • Will I trust those results when we declare coronavirus over? 
  • If orders suddenly doubled, how do I ramp up the optimization program overnight to capitalize? 

These questions kept me up at night, but six months in, I have a new perspective and even a couple of answers. For starters, I think COVID-19 is an amazing opportunity for optimization programs if they can execute well.

How do you test with any statistical validity during COVID-19?

Chances are, your traffic isn’t what it used to be. Depending on your brand, this may mean an influx of new customers who aren’t familiar with your product offerings, a sharp decline in visitors, or even a change in demand that has your conversion rates taking wild swings.

Although I’m usually quite risk-averse, I’ve started testing with 90% confidence instead of 95% confidence during COVID-19. Even writing that feels controversial, but the decision was deliberate.

First, many websites are dealing with a decline in traffic or conversions, and lowering the bar for statistical validity means testing remains possible. Moving to 90% confidence does mean doubling the likelihood of a false positive result, from a one in 20 chance to a one in 10 chance. However, setting the bar impossibly high means we’ll never clear it, and we’ll fall victim to constantly making directional calls with far lower validity than 90% simply because 95% felt too far away. 
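To make that trade-off concrete, here’s a minimal sketch of the traffic math using statsmodels’ power analysis. The 3% baseline conversion rate, 10% relative lift, and 80% power are placeholder assumptions for illustration, not numbers from this article:

```python
# How much traffic does a two-sided test need at 95% vs. 90% confidence?
# Baseline rate, lift, and power below are hypothetical placeholders.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.030   # hypothetical control conversion rate
lift = 0.10        # hypothetical 10% relative lift we hope to detect
effect = proportion_effectsize(baseline, baseline * (1 + lift))

analysis = NormalIndPower()
for alpha, label in [(0.05, "95% confidence"), (0.10, "90% confidence")]:
    n = analysis.solve_power(effect_size=effect, alpha=alpha, power=0.80,
                             ratio=1.0, alternative="two-sided")
    print(f"{label}: ~{n:,.0f} visitors per variation")
```

With these placeholders, the 90% threshold needs roughly 20% fewer visitors per variation, which can be the difference between a test that finishes and one that never does.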

Second, these aren’t normal times, when consumer preferences stay fairly static and UX best practices change slowly. Besides the constant influx of news impacting consumer confidence and needs, the world went online overnight, even the technology Luddites who strongly prefer brick-and-mortar shopping. Testing with 90% confidence rather than 95% means more results more quickly, and a lower likelihood that time and changing preferences will add murkiness to results.

There’s an entire group of consumers who, historically, haven’t shopped online enough, either in totality or for specific products and services, to form preferences, but they’ll form them quickly. Using 90% confidence while opinions are changing so rapidly allows you to get real results and roll out the new standard as quickly as possible, meaning your optimizations keep pace with customer needs.

Will you trust those results when we declare coronavirus over?

This question is more difficult. In short, I think optimization programs would do well to focus on conversion rate optimization, with support from a CRO agency, in the short term, even if they have historically leaned toward test-and-learn. I’m not advising a complete 180, just leaning a little more toward one side of the spectrum. Experts tend to agree that once people feel confident going outside again, pre-pandemic consumer habits will also return.

However, it’s ludicrous to expect everything to rebound completely. Old habits will be replaced with new or modified habits.

I personally will be hesitant to trust messaging or coronavirus-specific learnings after COVID-19 ends. It’s probably safe to assume UX best practices are truly here to stay. However, I’m keeping a close eye on device makeup for coronavirus UX learnings. Data shows that desktop usage is up, while mobile usage is down. This means that if I find an experiment that does particularly well on desktop while staying flat on mobile, I’ll gain revenue in the short term by rolling it out, but I shouldn’t count on annualizing those gains or seeing any sort of consistent performance as the population migrates back to mobile-heavy browsing.
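One way to keep that device caveat honest is to read results out by segment before projecting them forward. Below is a minimal sketch using statsmodels’ two-proportion z-test; every count in it is invented for illustration:

```python
# Sketch: check an experiment's lift by device before trusting it long term.
# All visitor and conversion counts are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

segments = {
    # device: (control conversions, control visitors, treatment conversions, treatment visitors)
    "desktop": (540, 18_000, 640, 18_100),
    "mobile":  (610, 21_000, 615, 20_900),
}

for device, (c_conv, c_n, t_conv, t_n) in segments.items():
    _, p_value = proportions_ztest([t_conv, c_conv], [t_n, c_n])
    lift = (t_conv / t_n) / (c_conv / c_n) - 1
    print(f"{device}: lift {lift:+.1%}, p = {p_value:.3f}")
```

If the entire win lives in the desktop row, treat any annualized projection with suspicion until mobile traffic returns to its usual share.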

If you can, avoid an A/B test that changes messaging and UX at the same time unless you’re able to execute a multivariate experiment. This is a best practice all of the time, but especially during COVID-19, when moods and messaging are changing more frequently than usual.
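For concreteness, here’s one way a small multivariate setup can be structured: a 2x2 factorial design in which messaging and UX vary independently, so each factor’s effect can be separated during analysis. The variant names and hash-based bucketing below are illustrative sketches, not any particular testing tool’s API:

```python
# Sketch of a 2x2 factorial (multivariate) assignment: messaging and UX vary
# independently, so their effects can be untangled in analysis.
# Variant names and the hashing scheme are hypothetical.
import hashlib
from itertools import product

MESSAGING = ["reassuring", "urgent"]       # hypothetical messaging variants
LAYOUT = ["current_ux", "new_ux"]          # hypothetical UX variants
CELLS = list(product(MESSAGING, LAYOUT))   # four cells: every combination

def assign(visitor_id: str) -> tuple:
    """Deterministically bucket a visitor into one of the four cells."""
    digest = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16)
    return CELLS[digest % len(CELLS)]

print(assign("visitor-12345"))  # e.g. ('urgent', 'current_ux')
```

Because every combination gets traffic, you can estimate the messaging effect and the UX effect separately instead of shipping a tangled winner you can’t explain later.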

The learnings aren’t wasted just because they may not be lasting: conversion rate optimization still increases your bottom line, but you should certainly consider a robust back-testing plan post-COVID-19. This goes for both COVID-19-era learnings and pre-coronavirus learnings. The only constant right now is that there is no constant; what was true yesterday may not be true tomorrow. 

How do you ramp up your optimization program overnight?

To set expectations, it probably won’t be overnight. Evolytics A/B Testing Starter Packages typically last a quarter, and we’ve found that’s usually the right amount of time to get started and kick the tires on a couple of experiments.

Start with the basics: 

  1. Do you have a testing tool? Is it consistent across the organization? 
  2. How will you source data-driven ideas and prioritize them? 
  3. Who will develop the analytic, technical, and creative requirements? 
  4. Do you have tools or processes in place, such as IT sprints and Jira workflows? 
  5. How will you review results? Who should see what, and who ultimately makes the decisions? 
  6. What will you do once you’ve decided a treatment has won? 
  7. Do you have a RACI Chart that your team understands and is able to follow? 

A/B testing programs have a lot of moving parts, and they tend to move quickly. If you’re truly new to testing, start with an A/A experiment to test your tools and process in a very low-risk situation. 
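A quick simulation shows what a healthy A/A setup should look like: both arms share the same true conversion rate, so any “significant” result is a false positive, and at 90% confidence you should see roughly one in ten. The conversion rate and sample sizes below are made up:

```python
# Simulate many A/A "experiments": both arms share the same true conversion
# rate, so every significant result is a false positive. At alpha = 0.10 we
# expect roughly 10% of runs to flag a difference. Rates/sizes are made up.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(7)
true_rate, n_per_arm, runs, alpha = 0.03, 10_000, 2_000, 0.10

false_positives = 0
for _ in range(runs):
    conv_a = rng.binomial(n_per_arm, true_rate)
    conv_b = rng.binomial(n_per_arm, true_rate)
    _, p = proportions_ztest([conv_a, conv_b], [n_per_arm, n_per_arm])
    false_positives += p < alpha

print(f"A/A false positive rate: {false_positives / runs:.1%}")  # expect ~10%
```

If your real A/A test declares a winner far more often than your alpha predicts, fix the tooling or the process before you trust a single A/B result.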

COVID-19 Experimentation Field Guide Summary

Coronavirus may be a difficult time to be testing, but it’s also the perfect time. Harvard Business Review research shows that businesses that invested intelligently rather than stagnating recovered more quickly and gained market share after the 2008 recession.

It may feel like everything is a question right now, but ignoring the coronavirus won’t make it disappear. It’s possible to test with statistical validity, or at least reach data-informed results. You may not trust your results when COVID-19 ends, but you should also question your pre-coronavirus knowledge given how much COVID-19 will have changed consumer behavior. While ramping up quickly is both difficult and intimidating, it’s not impossible.

If you have any how-to questions keeping you up at night, reach out! The Evolytics team of A/B testing and decision science experts would love to help you make lemonade out of lemons.

Written By

Krissy Tripp

Krissy Tripp, Director of Decision Science, strives to empower her clients to make use of their data, drawing from a variety of disciplines: experimentation, data science, consumer psychology, and behavioral economics. She has supported analytic initiatives for brands such as Sephora, Intuit, and Vail Resorts.