
Here’s a common scenario – one that is especially relevant to companies that are just beginning to dive into the world of optimization testing.

You launch an optimization test for the first time. Maybe you’re testing the call to action on a landing page, or a banner on the Home Page. Maybe you get ambitious and execute a multivariate test. And this first test is successful: it delivers some degree of benefit to your organization, such as boosting your conversion rates by 20%.

This success breeds further interest in optimization testing across your organization, and before you know it, people from various teams are coming to you with ideas for further tests. The excitement builds, and soon you’re running (or being asked to run) multiple tests at the same time.

These multiple requests often result in increased complexity, which can have some adverse effects:

  • Unclear or invalid results
  • Need for more resources to launch and manage tests
  • Slower time to market with new test ideas
  • Reduced motivation to identify and generate new test ideas

These outcomes aren’t likely what you had in mind when you set out to grow the business value of your online efforts through testing! The key is to keep it simple and adhere to a set of best practices that will help keep your optimization testing program on track. Here are some strategies for identifying and working with simultaneous tests.

Testing Collisions or Conflicts

Running multiple optimization tests at the same time can result in “collisions” or conflicts – the outcomes of one test affect the results of another, ultimately undermining the validity of both.

There are two types:

(1)    Same Page Collisions

Same Page Collisions are easy to identify. They occur when two or more tests are running (or planned to be run) on the same page, but address different elements. For example, you may want to conduct two separate split tests on the same page. One team member may want to test the calls to action ‘Learn More’ and ‘Buy Now’ while another team member may want to test two different hero images. 

In this case, it is possible that the performance of each call to action could depend on which hero image a Visitor sees.    

(2)    Downstream Collisions

Downstream Collisions are somewhat more difficult to identify. These occur when a test that is running on one page impacts the results of a test on a subsequent page.

For example, consider the diagram below. You may be running a simple split test on the Home Page of your website – one that involves experimenting with different calls to action that are designed to drive more Visitors to a page featuring a form where a Visitor can download an E-Book.

On the E-Book page, you are running a split test as well – one that tests different benefit statements, such as what a reader will gain from using the E-Book.

It is possible that the calls to action on the Home Page set different expectations, which in turn, influence which benefit statement people respond to on the E-Book page. If that occurs, you have a collision. 
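As a rough sketch of why this happens (variant names invented for illustration), consider what each page’s testing tool sees when both tests randomize independently: every downstream cohort contains a mix of upstream experiences, so a benefit statement’s result blends visitors primed by both calls to action.

```python
import random

random.seed(0)

# Hypothetical funnel: Home Page test upstream, E-Book page test downstream.
ctas = ["CTA 1", "CTA 2"]
benefits = ["Benefit A", "Benefit B"]

# Each tool randomizes independently, so we tally which combination
# of upstream and downstream variants each simulated visitor saw.
exposure = {(c, b): 0 for c in ctas for b in benefits}
for _ in range(10_000):
    seen = (random.choice(ctas), random.choice(benefits))
    exposure[seen] += 1

for combo, count in sorted(exposure.items()):
    print(combo, count)
```

All four combinations receive substantial traffic, which is exactly the situation in which one test’s variants can silently shape the other’s results.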

While that example is relatively simple, these are not always easy to identify. For instance, will an optimization test conducted on a campaign landing page (one that can only be accessed from an external marketing channel) impact a test on a product page?

Solutions for Same-Page Collisions

You basically have two options when facing a Same-Page Collision:

(1)    Run Subsequent Tests

Run one test; once the results are in, launch the next.

(2)    Run a Multivariate Test

Integrate the multiple tests together so that you are testing the combinations of different elements simultaneously.
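As a sketch of what this folding-together looks like (using the earlier example’s elements, with names invented for illustration), a full-factorial multivariate test treats every combination of the two elements as its own variant:

```python
from itertools import product

# Hypothetical elements from the two proposed same-page split tests.
ctas = ["Learn More", "Buy Now"]
heroes = ["Hero A", "Hero B"]

# A multivariate test serves each combination as a distinct variant,
# so any interaction between CTA and hero image is measured directly
# rather than contaminating two separate tests.
variants = [
    {"cta": cta, "hero": hero}
    for cta, hero in product(ctas, heroes)
]

for i, v in enumerate(variants, start=1):
    print(f"Variant {i}: {v['cta']} + {v['hero']}")
```

Note that the variant count multiplies quickly (two elements with two options each already means four variants), which is why the resource question below matters.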

The option you choose can be guided by the following questions:

  • External Factors – will a subsequent test be impacted by events outside your organization (e.g. seasonality)? If the answer is no, then running subsequent tests is a reasonable option.
  • Dependency of Test Elements – is there a reasonable belief that proposed test elements interact with each other? If the answer is yes, then a multivariate test is likely a better option.
  • Resource Constraints – are you facing significant design, technology and analysis constraints for running a test? If the answer is yes, subsequent testing is likely a better option.  

Solutions for Downstream Collisions

As with same-page collisions, you have two options when addressing downstream test collisions.

(1)    Run Subsequent Tests

This is the same process as if you were dealing with a Same-Page Collision. Run the test that you believe will have the most significant impact, determine the winning variant and then launch the next test.  

(2)    Measure Interaction Effects

An interaction effect exists when the performance of one variable is dependent on the presence of another variable. For example, consider the following Table, which features output from our Home Page and E-Book Landing Page tests discussed above. 

In this particular example, the performance of each call to action is dependent on which benefit statement a Visitor is exposed to on the E-Book Landing Page. 

This is not an uncommon outcome in the testing world when two simultaneous tests are running. But measuring statistical significance with interaction effects is not necessarily easy and will require the use of statistical software – the topic could fill its own blog post (or several).
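As a rough first check before reaching for statistical software (all numbers here are invented for illustration), a simple difference-in-differences comparison can flag a possible interaction: if the CTA’s effect on conversion rate is roughly the same under both benefit statements, the difference of differences is near zero; a large value suggests the two tests are interacting.

```python
# Hypothetical conversion counts for the four CTA x benefit-statement cells,
# stored as (conversions, visitors). Numbers are invented for illustration.
cells = {
    ("Learn More", "Benefit A"): (120, 1000),
    ("Learn More", "Benefit B"): (80, 1000),
    ("Buy Now", "Benefit A"): (90, 1000),
    ("Buy Now", "Benefit B"): (140, 1000),
}

def rate(cell):
    """Conversion rate for one CTA/benefit combination."""
    conversions, visitors = cells[cell]
    return conversions / visitors

# The CTA's effect (Learn More minus Buy Now) under each benefit statement.
effect_under_a = rate(("Learn More", "Benefit A")) - rate(("Buy Now", "Benefit A"))
effect_under_b = rate(("Learn More", "Benefit B")) - rate(("Buy Now", "Benefit B"))

# Difference-in-differences: near zero means no evidence of interaction.
interaction = effect_under_a - effect_under_b

print(f"CTA effect under Benefit A: {effect_under_a:+.3f}")
print(f"CTA effect under Benefit B: {effect_under_b:+.3f}")
print(f"Interaction (diff-in-diff): {interaction:+.3f}")
```

In this made-up data the CTA that wins under one benefit statement loses under the other, so the interaction term is large. A formal significance test for that interaction still requires proper statistical tooling; this sketch only tells you whether it is worth investigating.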

The takeaway from this is that if you are going to run multiple tests simultaneously, explore your data to see if interaction effects exist – that is, if the results of one test are dependent on the results of another. We will share a simple way to do this in a later blog post. 

One More Tool to Simplify Your Testing Process

You should now have a good sense of how to manage requests for simultaneous tests – something that you will likely encounter as your company delves more into testing.