7 Mistakes Most Beginners Make When A/B Testing

Unless you’ve been living under a rock, you know that A/B Testing can be extremely beneficial for all your marketing and design decisions.

Knowing exactly what works, what doesn’t, testing before implementing anything so you don’t lose money, and all that jazz.

You’re most likely already doing A/B tests. More power to you, right? Well, not so fast. Here is the bleak truth…

A staggering number of people doing A/B tests get imaginary results. We see it among our clients, and 5 minutes on Google will tell you the same story.

How come?

Because you can screw up in so many different ways it will make you dizzy, and because maths is involved. But fear not, we’ve got your back.

In the coming months, we’ll cover just about every A/B Testing mistake you could make, and how to avoid them.

OR, if you’re impatient like me, you can download the ebook version of all these articles here. But be warned, it’s a 10,000+ word behemoth.

Today, we start off with 7 mistakes most beginners make when A/B testing:

  1. They start with complicated tests
  2. They don’t have a hypothesis for each test
  3. They don’t have a process and a roadmap
  4. They don’t prioritize their tests
  5. They don’t optimize for the right KPIs
  6. They ignore small gains
  7. They’re not testing at ALL times

You start with complicated tests

For your first tests ever, start simple. Being successful at A/B Testing is all about process. So it’s important that you first go through the motions.

See how theory compares with reality: what works for you and what doesn’t, and where you run into problems, be it implementing the tests, coming up with ideas, analyzing the results, etc.

Think about how you’ll scale your testing practice, or if you’ll need new hires for example.

Starting with A/B Testing is a lot like taking up serious weight training.

You don’t start with your maximum load and complicated exercises. That would be the surest way to injure yourself badly.

You start light, and you focus 100% of your attention on the movement itself to achieve perfect form, with a series of checkpoints to avoid all the ways you could get injured or develop bad habits that’ll end up hurting you in the long run.

By doing that, you imprint the movement in your muscle memory, so that when you need to focus on the effort itself, you won’t even have to think about it. Your body will do it instinctively.

Then, and only then, can you achieve the highest performance possible without screwing up.

Exact. Same. Thing. With. A/B Testing.

You start simple, focus all your attention on each step of the process, set up safeguards and adjust as you go so you don’t have to worry about it later.

Another benefit of starting with simple tests is that you’ll get quick wins.

Getting bummed out when your first tests fail (and most do, even when done by the best experts out there) is often why people give up or struggle to convince their boss/colleagues that split testing is indeed worth the time and investment.

Starting with quick wins allows you to create momentum and rally people to the practice inside your team/company.

So you got the message: doing complicated tests right off the bat could nip your efforts in the bud.

You could get overwhelmed, end up with fake results, and get discouraged.

Here are a couple examples of things you could test to start with:

  • Test the copy on your offers, product pages, and landing pages (focus on benefits, not features, and make sure your meaning is crystal clear)
  • Remove distractions on key pages (Is that slider really necessary? What about all these extra buttons?)

You don’t have a hypothesis for each test

Every test idea should be based on data, articulated through an informed hypothesis with an underlying theory.

If you’re not doing this, you’re mostly shooting in the dark without even knowing which direction the target is in.

Not having a hypothesis (or having a flawed one) is one of the most common reasons A/B tests fail.

Okay, let’s take our initial statement apart piece by piece to understand what this means.

Every test idea should be based on data […]

Be it quantitative or qualitative, every test should stem from an analysis of your website data and the identification of a problem or an element to improve.

Gut feelings or “I read that…” won’t do. They could work, but you’re A/B Testing to make decisions based on data, so let’s not flip a coin for our test ideas, shall we?

Here are several sources to build your hypothesis on:

  • Analytics
  • Heatmaps
  • Surveys
  • Interviews
  • Usability tests
  • Heuristic analysis (great article here by Peep Laja, don’t pay attention to the title, it’s great even if you do have lots of traffic)

[…] an underlying theory

Put yourself in the shoes of your customers. Why didn’t they do what you wanted them to? What was their intent at this stage? What would make you leave/not do what was expected?

Example: “I think that if I were a customer, I’d be more inclined to go through with this step if the form had fewer fields.”

Don’t skip this. When you’re deep in your CRO practice, you sometimes tend to forget about your users. You focus on numbers and design.

It’s paramount to take a step back. The difference between a bad hypothesis and a great one could be staring you in the face.

Let’s not forget that we are beings endowed with empathy. We’re humans creating value for humans.

[…] an informed hypothesis […]

With data and a theory, you can now craft your test hypothesis.

You can use this format or something along those lines:

By {making this change}, {KPI A, B, and thus primary metric} will improve {slightly / noticeably / greatly} because {reasons (data, theory, …)}.
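For instance, a filled-in hypothesis (the numbers and page are made up, purely for illustration) could read:

By reducing our checkout form from 9 fields to 4, form completions, and thus completed purchases, will improve noticeably, because session recordings show visitors dropping off at the address fields and survey respondents call the form “tedious”.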

Working this way will improve not only the quality of your tests, but also your overall conversion optimization efforts.

Good, you are now testing to confirm or disprove a hypothesis, not shooting from the hip hoping to hit something eventually.

You don’t have a process and a roadmap

If you want to succeed at A/B Testing, and more largely at Conversion Rate Optimization, you need 2 essential elements:

  1. a roadmap,
  2. a process.

1. Why do you need a roadmap? What does it consist of?

A roadmap will help you test what matters and work toward a clear end goal. It’ll guide your efforts and prevent you from doing aimless testing.

In your roadmap, you should have:

  • Business goals: the reasons you have a website. Be concise, simple, realistic.
  • Website goals: how will you achieve these business goals through your site? What are your priorities?
  • What are your most popular pages? Which ones have the highest potential for improvement?
  • What does your conversion funnel look like, step by step? Where are the friction points?
  • Key metrics: how will you measure success?
  • A North Star: the one metric, correlated with customer satisfaction, that will guarantee your success if you focus your efforts exclusively on it. (Ex: Facebook = daily active users, Airbnb = nights booked, eBay = gross merchandise volume)

Put all these down and make sure everyone in your team/company is on the same page and can get behind them.

2. Sometimes when we hear “process”, we have an allergic reaction. We think it means the death of creativity, that it’s boring, that we’d feel trapped.

That’s absolutely not true for A/B Testing (or CRO). On the contrary: it’s BECAUSE you have a process that you’ll be able to be creative.

A/B Testing is a science experiment. You need to be rigorous and have a set of parameters ensuring that what you’re doing is measurable, repeatable, and accurate.

It’s easy to fail. So you need safeguards. Steps that you will go through each and every time without having to think.

No need to reinvent the wheel every time. You need a process so you can focus on crafting the best hypothesis possible and optimize learning.

The details of your process will be specific to you and your company.

But it will look something like this:

  1. Measure
  2. Formulate hypothesis
  3. Prioritize ideas
  4. Test
  5. Learn
  6. Communicate (Don’t skip this. Share with your colleagues why you did that test, how it is pertinent to your business goals, if it succeeded/failed, what you learned. Encourage discussions; each department has insights for different stages of your funnel. And keep your IT team in the loop.)
  7. Repeat

When you feel like you’ve lost your focus, use Brian Balfour’s question:

“What is the highest-impact thing I can work on right now given my limited resources, whether that’s people, time, or money?”

You don’t prioritize

With a process, a roadmap and the development of a testing culture inside your company, you’ll have a list of test ideas longer than your arm.

You need a system to prioritize tests, to stay on top of things and make sure you do what matters. And to not waste time and money on stupid tests.

Okay, I apologize in advance but allow me a small angry rant:

There are articles roaming about the interwebz claiming things like: “CHANGING OUR CTA TO RED INCREASED CONVERSIONS BY 287%.”

For the sake of argument, let’s say they did have this crazy lift.

First things first: there are NO colors that convert better than others.

What’s important is visual hierarchy, i.e., whether your CTA stands out from the rest of your page, for example.

Second, testing minute details like color won’t get you anywhere.

Most often, these types of changes are obvious. If your CTA doesn’t stand out from your page, or if your headline/copy isn’t clear—do something about it. You don’t need to test.

If you’d put it in the “Well, duh” category, don’t invest your time and traffic in it. Just do it!

We’re not saying you shouldn’t test small things like adding a line of copy or changing the wording on a CTA. Just don’t make them a priority (except, of course, if you’re just starting out with A/B Testing, as we talked about earlier).

If your ship were leaking, you wouldn’t begin by putting a band-aid on a small hole in the deck. Also, keep in mind that testing small changes, or testing on low-traffic pages, equals small results. It’s true you should test everything, but you’re working with a finite amount of resources.

Once you’re confident in your ability to A/B Test, focus on tests with the most impact/potential. Target your pages with the most traffic and influence.

You can use something like the PIE Framework to rate your tests so you know which one to do first.

Each test will be rated following three criteria:

  1. Potential gain (out of 10): How much room for improvement is there on the page(s) involved?
  2. Importance (out of 10): How valuable is the traffic on those page(s)?
  3. Ease of implementation (out of 10): How easy will this test be to implement on your site?

Average the three for each test, and you’ll have a backlog of ideas ranked objectively.
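A spreadsheet works perfectly well for this, but to make the mechanics concrete, here’s a minimal sketch in Python (the test ideas and ratings are invented, purely for illustration) of how the scoring and ranking works:

```python
# PIE scoring sketch: each idea gets three ratings out of 10,
# and the PIE score is simply their average.

test_ideas = [
    # (idea, potential, importance, ease)
    ("Shorten the checkout form", 8, 9, 6),
    ("Rewrite the homepage headline", 7, 8, 9),
    ("Add trust badges to the cart page", 5, 9, 8),
]

def pie_score(potential: float, importance: float, ease: float) -> float:
    """Average the three PIE criteria into a single score."""
    return (potential + importance + ease) / 3

# Rank the backlog from highest to lowest PIE score.
ranked = sorted(test_ideas, key=lambda t: pie_score(*t[1:]), reverse=True)

for idea, p, i, e in ranked:
    print(f"{pie_score(p, i, e):.1f}  {idea}")
```

The tool doesn’t matter; what matters is scoring every idea the same way so the ranking stays objective.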

You don’t optimize for the right KPIs

There are two types of conversions. Micro and macro. You should measure both, and optimize for the macro conversions.

A micro conversion is a step (a click, a social share, a newsletter subscription, an add to cart, …) on the path leading to a macro conversion: an outcome that impacts your bottom line (a checkout, a free trial, …). Macro conversions are, in other words, the main conversion goals of your website.

Why is it important that you know the difference?

Because you need to make sure that you measure both for every test, but that you don’t optimize for micro conversions only.

There are two types of micro conversions according to the Nielsen Norman Group:

  1. Process Milestones are conversions that represent linear movement toward a primary macro conversion. Monitoring these will help you define the steps where UX improvements are most needed.
  2. Secondary Actions are not the primary goals of the site, but they are desirable actions that are indicators of potential future macro conversions.

Measuring micro conversions lets you know where your friction points are and helps you paint a holistic picture of your entire conversion funnel.

But you shouldn’t optimize for them. You want to set your test goals as close to revenue as possible. A given variation of your homepage could send more traffic to your landing page, yet produce fewer form completions even though more people arrived there.

So if you were optimizing only for the micro conversion of arriving on the landing page, you would have lost money.

Track micro conversions, but don’t optimize solely for them. And before starting any test, go back and make sure you measure everything that matters.
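To make the distinction concrete, here’s a hypothetical test setup (the field names are invented for illustration, not a real testing-tool API): one macro conversion you optimize for, and micro conversions tracked alongside it for diagnosis.

```python
# One macro conversion to optimize for, micro conversions tracked alongside.
experiment = {
    "name": "Homepage hero variation",
    # Macro conversion: the goal you declare a winner on.
    "primary_goal": "completed_checkout",
    # Micro conversions: measured to diagnose friction,
    # never used on their own to call the test.
    "monitored_goals": [
        "clicked_hero_cta",
        "reached_landing_page",
        "started_checkout",
    ],
}

print(f"Optimize for: {experiment['primary_goal']}")
print("Also measure:", ", ".join(experiment["monitored_goals"]))
```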

You ignore small lifts

“Go big or go home.” It’s true, as we’ve just said, that you should focus your tests on high-impact changes first.

What you won’t see us say anywhere, though, is “if your test results in a small gain, drop it and move on.”

Why? Because maths, that’s why.

Let’s take a quick example.

Say a test ends with your variation winning with 5% more conversions, and each month you get a similar result. Compounded, that’s an 80% improvement over a year.

How’s that for small!

Also, the more you test and the more you improve your website, the less often you’ll get big results.

Don’t be sad if you only get small lifts. It could mean your website is good. (Pat yourself on the back and tweet that to your boss :D). It’s pretty rare to get big lifts on a “normal” site.

Don’t be disheartened by small gains; it’ll pay off big time over time.

You’re not testing at all times

Every day spent without an experiment running is wasted. Opportunities missed.

Oh, look! An opportuni-aah… It’s gone.

Why are you testing? Because you want to learn more about your visitors, make informed decisions, get more customers and earn more money.

When wouldn’t you want all that? Never? Yup, same here. So as they say, Always Be Testing.

Testing properly takes time, so you’d better not lose any.

“There’s a way to do it better—find it.” (Thomas Edison)

How, you say?

Test, test, test! And when in doubt, test again!

This concludes our article for today. Next time we’ll look into the concepts you need to grasp in order to decide if you can stop your A/B Tests, so you don’t pull the trigger too early and end up with fake results (yikes).

Again, if you don’t want to wait, download our ebook and you’ll get all 10,000 words worth of content on A/B Testing mistakes.

PS: Before you go, a couple of things I’d like you to do:

    1. If this article was helpful in any way, please let me know. Either leave a comment, or hit me up on Twitter @kameleoonrocks.
    2. If you’d like me to cover a particular topic or have a question, same thing: reach out!

It’s extremely important for me to know that I write content both helpful and focused on what matters to you.

I know it sounds cheesy and fake, but I feel like writing purely promotional, “empty” stuff just for the sake of it is a big waste of time for everyone: useless for you, and extremely unenjoyable for me to write.

So let’s work together!

Jean-Baptiste Alarcon

Jean-Baptiste is a Growth Marketer at Kameleoon. Aside from reading a lot and drinking coffee like his life depends on it, he leads Kameleoon's growth in English-speaking markets.
