Last week we examined some popular excuses for not doing usability testing. This week, we’ll take a look at conversion testing (aka A/B and Multivariate Testing).

1. “I don’t have the budget to purchase testing software.”

Visual Website Optimizer starts at $49 per month. Google Website Optimizer (GWO) is free. Budget is no excuse.

2. “I don’t have the technical expertise to set up a test.”

Though it’s true that technical implementations can get complex, that doesn’t have to be the case. Setting up a simple A/B test on a tool like GWO is easy and can take only minutes. So take the initial plunge with a simple A/B test. For example, test some revised headlines.

3. “I don’t know how to run a statistical analysis on the results.”

There’s no need, as basic statistical analyses are built into all the major testing tools. Sure, you can run more sophisticated analyses on the results and learn more from the data, but you can leave that for later tests.
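If you’re curious what those built-in analyses amount to, the most common one is a simple two-proportion z-test comparing the conversion rates of the original and the variation. Here’s a rough sketch in Python (the function name and the example numbers are purely illustrative, not from any particular tool):

```python
import math

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Return the z-statistic for the difference between two conversion rates."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    # Standard error of the difference between the two proportions
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Example: 40/500 conversions on the original vs. 62/500 on the variation
z = two_proportion_z_test(40, 500, 62, 500)
print(round(z, 2))  # → 2.3; |z| > 1.96 is significant at the 95% level
```

A |z| above about 1.96 corresponds to the “95% confidence” figure most testing dashboards report, which is exactly why you can leave the math to the tool.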

4. “I fear that running tests could lower my conversion rate.”

Yes, it’s possible that a poor-performing variation could temporarily lower conversion rates. But all testing platforms allow you to monitor your tests on an ongoing basis and to either stop the test or disable poor-performing variations. You can also reduce risk by specifying that only a certain percentage of your traffic takes part in the test. The greater risk lies in NOT running tests.

5. “There’s no guarantee I’ll get a positive result.”

True. But the flipside of that argument is that if you do nothing, you’re guaranteed not to get any. Stick with testing, and you’re bound to get positive results soon enough.

6. “I don’t have a UX designer to come up with the alternate variations.”

Then either hire one on contract, or (as mentioned above) start by testing simple text elements: headlines, subheads, bullet points, calls to action, and so on.

7. “I don’t need to test, I know what works best for my customers.”

Nobody gets it right every time. I’ve seen lots of tests where everyone’s prediction was wrong and the winner was a total surprise. So never assume that your page is perfect; you might be surprised at the changes that lead to better performance.

8. “I prefer the simplicity of sequential testing.”

Sequential testing (i.e., running one version for a while, then running a revised version and seeing whether it performs better) doesn’t have the accuracy of a true split A/B test: you’ll never know whether external factors influenced the results. The whole point of testing is to base decisions on data rather than hunches, and that data must be reliable.
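The key difference is that a split test assigns each visitor to a variant during the same time period, so seasonality, traffic sources, and other external factors hit both versions equally. A minimal sketch of the idea, using a stable hash so returning visitors always see the same variant (the function name and hashing choice are illustrative assumptions, not how any specific tool does it):

```python
import hashlib

def assign_variant(visitor_id, variants=("A", "B")):
    """Deterministically bucket a visitor so all variants run concurrently."""
    # Hashing the ID keeps the assignment stable across visits
    # while splitting traffic roughly evenly between variants.
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("visitor-123"))  # same visitor always gets the same variant
```

Because both buckets are filled from the same traffic at the same time, any difference in conversion rate can be attributed to the variation itself rather than to whatever else changed between two sequential runs.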

9. “My Website doesn’t get enough traffic.”

For very low traffic websites, this might actually be a valid excuse. However, testing just one variation against the original does not take all that much traffic, especially if the two versions are dramatically different. It’s not uncommon to achieve statistically significant results with a total of fewer than 1,000 visitors to the page.
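You can sanity-check that claim yourself with a standard sample-size formula for comparing two proportions (this is a back-of-the-envelope sketch at 95% confidence and 80% power; the function name and example rates are my own assumptions):

```python
import math

def visitors_per_variation(p_base, p_variant, z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per variation at 95% confidence, 80% power."""
    # Combined variance of the two conversion rates being compared
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_base - p_variant) ** 2)

# A dramatic difference (5% -> 10% conversion) needs relatively few visitors:
print(visitors_per_variation(0.05, 0.10))  # → 432 per variation, ~864 total
```

Small expected differences drive the required sample size up fast (the denominator is the squared difference), which is why dramatically different variations are the right place to start on a lower-traffic site.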

10. “It’s too early for us.”

If you have a live website with significant traffic, then it’s time to start testing. The sooner the better. New businesses can actually benefit the most from conversion testing, as they’re likely to have a less complete understanding of their customers and what makes them tick.

  • Barry

    Google wants to promote AdWords down to the small business level. But smaller companies (and please don’t call them “Mom and Pops” like Avinash does) have unique situations that do not scale down from bigger companies. I’ve done many sites where the total traffic per week is 25 – 50 persons. 

    In order to analyze what is occurring, as managers for these businesses we need to interpret very limited data without all the tools. Testing is important. But it has to be done quickly and easily. 

    Truthfully I don’t think Cardinal Path or Google is familiar with how/what can be done at the smaller business level. Someone should tackle a series (about 50 small businesses) and then develop some much better guidelines. That person will be very surprised about what is expected and how it can be performed.

    • Michael Straker

      Hi Barry,

As I mentioned in my post (point #9), low traffic may indeed be a valid excuse for not running A/B tests. I hate to think how long it would take to gather statistically significant results on a site that only gets 25-50 visits per week!

      Smaller businesses definitely have to get creative. Luckily there are lots of free or very cheap tools. For example, a small company probably can’t afford to do eyetracking studies. But they could perhaps afford to use tools that *emulate* eyetracking studies, like EyeQuant or Attention Wizard.

      And as I wrote last week (and on many other occasions) ALL companies should be doing some form of usability testing. This can be done quickly and on the cheap. 

      Thanks for your input.