What is an A/A Test & Do You Really Need to Use It?

What’s worse than working without data?

Working with “bad” data.

As marketers, we like to test headlines, calls-to-action, and keywords (to name a few). Among other things, we run A/B tests for this purpose.

As a refresher, A/B testing is the process of splitting a target audience to test a number of variations of a campaign and see which one performs better.

But A/B testing isn’t foolproof.

In fact, it’s a complicated process. You often have to rely on testing software to gather the data, and there’s a real chance of getting a false positive result. If you’re not careful, you can draw the wrong conclusions about what makes people click.

So how can you make sure your A/B test is working correctly? This is where the A/A test comes in. Think of it as a test of the test.

The idea behind an A/A test is that the experience is the same for each group, so the expected KPI (key performance indicator) will also be the same for each group.

For example, if 20% of Group A fills out a form on a landing page, the expected result is that 20% of Group B (who interact with an identical version of the landing page) do the same.
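To make this concrete, here’s a minimal sketch in Python that simulates an A/A split. The 20% conversion rate and group size are made-up numbers for illustration; both groups see the identical page, so their observed rates should differ only by random noise.

```python
import random

random.seed(42)

TRUE_RATE = 0.20    # assumed: the page converts 20% of visitors
GROUP_SIZE = 5_000  # assumed sample size per group

def simulate_group(n, rate):
    """Count how many of n simulated visitors convert at the given rate."""
    return sum(random.random() < rate for _ in range(n))

rate_a = simulate_group(GROUP_SIZE, TRUE_RATE) / GROUP_SIZE
rate_b = simulate_group(GROUP_SIZE, TRUE_RATE) / GROUP_SIZE

print(f"Group A: {rate_a:.1%}")  # e.g. ~20.2%
print(f"Group B: {rate_b:.1%}")  # e.g. ~19.8%
```

Small gaps like this are expected; a large, persistent gap between identical pages is exactly the warning sign an A/A test is designed to catch.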

Differences between an A/A test and an A/B test

Performing an A/A test is similar to performing an A/B test: an audience is divided into two similarly sized groups, but instead of sending each group to a different content variation, both groups interact with identical versions of the same content.

Here’s another way to think about it: Have you ever heard the phrase “comparing apples to oranges”? An A/B test does just that, comparing two different variations of a piece of content to see which one performs better. An A/A test compares an apple with an identical apple.

When you run an A/B test, you configure a testing tool to change or hide some of the content. An A/A test doesn’t require this.

An A/A test also requires a larger sample size than an A/B test, because detecting a meaningful bias between identical versions takes far more data. And with such a large sample size, these tests take much longer to run.

How to do A/A testing

How exactly you perform an A/A test depends on the testing tool you use. If you’re a HubSpot Enterprise customer running, for example, an A/A or A/B test on an email, HubSpot will automatically split the traffic across your variations so that each variation gets a random sample of visitors.

Let’s cover the steps to run an A/A test.

1. Create two identical versions of content: the control and the variant.

Once your content has been created, identify two groups of the same sample size that you want to run the test on.

2. Identify your KPI.

A KPI is a measure of performance over a period of time. For example, your KPI could be the number of visitors who click on a call-to-action.

3. Using your testing tool, split your audience evenly and randomly, sending one group to the control and the other to the variant.

Run the test until the control and the variant each reach a predetermined number of visitors.
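If your tool doesn’t handle the assignment for you, the key requirement is that the split be random and even. Here’s a minimal sketch of that idea in Python; the visitor IDs are hypothetical stand-ins for whatever identifiers your analytics platform provides.

```python
import random

random.seed(7)

# Hypothetical visitor IDs; in practice these come from your analytics tool.
visitors = [f"visitor_{i}" for i in range(10_000)]

random.shuffle(visitors)             # randomize order to avoid assignment bias
midpoint = len(visitors) // 2
control_group = visitors[:midpoint]  # sees the page
variant_group = visitors[midpoint:]  # sees the identical page
```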

4. Track the KPI for both groups.

Since both groups are sent to identical content, they should behave in the same way. Therefore, the expected result is a tie: neither version should come out a clear winner.
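A standard way to judge whether an observed gap between the two groups is just random noise is a two-proportion z-test. Here’s a sketch using only Python’s standard library; the visitor and conversion counts are placeholders.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (conv_a / n_a - conv_b / n_b) / se

# Placeholder counts: 1,010 vs. 985 conversions out of 5,000 visitors each.
z = two_proportion_z(1010, 5_000, 985, 5_000)
print(f"z = {z:.2f}")  # ~0.63; |z| < 1.96 means the gap is within noise at 95% confidence
```

If |z| stays below the 1.96 threshold, the two identical pages are behaving the same way, which is exactly what a healthy testing setup should show.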

A/A test applications

A/A testing is mainly used when a company implements new A/B testing software or reconfigures an existing setup.

You can run an A / A test to achieve the following:

1. To check the accuracy of A/B test software.

The intended result of an A / A test is that audiences will respond similarly to the same content.

But what if they don’t?

Here’s an example: Company XYZ is A/A testing a new landing page. Two groups are sent to two identical versions of the landing page (the control and the variant). Group A has a conversion rate of 8% while Group B has a conversion rate of 2%.

In theory, the conversion rates should be the same. Since there is no difference between the control and the variant, the expected result is a tie. Sometimes, however, a “winner” is declared between two identical versions.

In this case, it’s important to evaluate the testing platform. The tool may be incorrectly configured or simply unreliable.
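Plugging Company XYZ’s numbers into the same kind of two-proportion check shows why this result is a red flag. The 1,000-visitors-per-group figure below is an assumption for illustration:

```python
import math

# Assumed: 1,000 visitors per group; 80 vs. 20 conversions (8% vs. 2%).
conv_a, n_a = 80, 1_000
conv_b, n_b = 20, 1_000

pooled = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (conv_a / n_a - conv_b / n_b) / se

print(f"z = {z:.1f}")  # ~6.2, far beyond the 1.96 noise threshold
```

Because the two pages are identical, a gap that extreme points to the tool or the traffic split, not the content.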

2. To set a baseline conversion rate for future A/B testing.

Let’s imagine Company XYZ is doing another A/A test on the landing page. This time the results of Group A and Group B are identical: both groups achieve a conversion rate of 8%.

Therefore, 8% is the baseline conversion rate. With this in mind, the company can run future A/B tests with the aim of exceeding this rate.

For example, if the company runs an A/B test on a new version of the landing page and sees a conversion rate of 8.02%, that tiny lift over the 8% baseline is not statistically significant.
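A quick back-of-the-envelope calculation shows why. Using the standard approximation for the sample size needed to compare two proportions (at 95% confidence and 80% power), detecting a lift of just 0.02 percentage points over an 8% baseline would take tens of millions of visitors per variation:

```python
baseline = 0.08               # conversion rate from the A/A test
lift = 0.0002                 # 8.02% - 8.00% in absolute terms
z_alpha, z_beta = 1.96, 0.84  # 95% confidence, 80% power

# Approximate per-group sample size for detecting `lift` between two proportions.
n = 2 * (z_alpha + z_beta) ** 2 * baseline * (1 - baseline) / lift ** 2
print(f"~{n:,.0f} visitors per group")  # roughly 29 million
```

No reasonable test runs that long, so a lift that small is practically indistinguishable from the baseline.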

A/A Testing: Do You Really Need It?

To do an A/A test or not – that is the question. And the answer depends on who you ask. There’s no denying that A/A testing is a hotly debated topic.

Perhaps the most popular argument against A/A testing boils down to one factor: time.

A/A testing takes a long time to complete. It typically requires a much larger sample size than A/B testing: when you’re comparing two identical versions, you need a very large sample to detect even a small bias. The test therefore takes longer to finish, which can eat into the time available for other valuable tests.

However, in some cases it makes sense to run an A/A test, especially if you’re unsure about new A/B testing software and want additional proof of its functionality and accuracy. A/A testing is a low-risk way of ensuring that your tests are set up properly.

A/A testing can help you prepare for a successful A/B testing program, provide data benchmarks, and identify discrepancies in your data.

While A/A testing is useful, you should run such a test relatively infrequently. An A/A test can serve as a “health check” for a new A/B tool or software, but running one for every small change to your website or marketing campaign may not be worthwhile, since each test takes a long time to complete.
