Why A/B testing is essential for PPC

Key Takeaways:

  • Landing page A/B tests
  • Ad copy / creative
  • Match types
  • Tips for testing success
  • Times where A/B testing isn’t ideal

A/B testing is a powerful way to improve your pay-per-click (PPC) campaign’s performance. It can generate a higher return and help you scale towards further sales or lead volume.

Being able to answer those ‘what if’ questions in your business can really start to unlock what resonates with the customer, an insight that can also benefit broader marketing channels outside of paid ads.

A/B testing also means that new ideas can be pitted against what’s currently working, which removes the risk of going all in on a change and derailing performance.

If deployed correctly, this is a robust and scientific way to know that something objectively worked better or worse. Even if the test leads to similar or worse performance, effective A/B tests can provide extremely beneficial insights. There’s a lot of value in a negative test outcome.

There are many types of A/B tests that can be run for both paid search and paid social. We’ve put together the most useful and common in this article so that you can leave with actionable tests to take back into your accounts.

Landing page A/B tests

One of the most beneficial experiments to run!

Hypothesis: If we take the same traffic, with the same ad copy/creative, but send the user to a more relevant or different landing page, will performance increase?

Variable: Only the landing page should be changed to run this scientifically. 

Expectations: If the test landing page (the new one) works better, we would see an uplift in conversion rate, leading to more lead or sales volume and a better return. If the control landing page (the current one) performs better, we will know that this is the best page for traffic to be sent to.

Use cases: This is a great experiment to run if you’re going through a rebrand or website design changes. This way you can test potential impacts on performance before a wholesale change is made that will affect all marketing channels.

This can also be useful if you have a landing page that’s working well but you want to drive further incremental performance. Rather than switching all of the traffic over (which risks current success), you can test the old page against the new and limit any decrease in performance.

Deployment: Deploy this via the Experiments feature in Google Ads. Run it on a 50/50 traffic split to make it a true A/B test. You can run it on a lower split to mitigate risk further; just understand that it’s then not a true A/B test.
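
Google Ads Experiments handles the traffic split for you, but as a minimal sketch of what a deterministic 50/50 split looks like under the hood, here’s a hash-based bucketing example in Python (the experiment name and user ID are hypothetical):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "lp-test-1") -> str:
    """Deterministically bucket a user into control or test.

    Hashing the user ID together with the experiment name gives a
    stable, unbiased 50/50 split: the same user always sees the same
    variant, and roughly half of all users land in each bucket.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 == 0 else "control"

print(assign_variant("visitor-1042"))  # stable for this visitor, e.g. "control"
```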

Measurement: Depending on how much click and conversion data you get, you will need to leave this running for 14-28 days at a minimum. If click and conversion volume is low, you may need to wait several months to truly know the outcome of the test. Experiments will give you an indication of when the test has reached statistical significance.
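
If you’d like to sanity-check that significance call yourself, a standard two-proportion z-test on the two conversion rates is a reasonable approximation. A minimal Python sketch, with illustrative click and conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def conversion_significance(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided two-proportion z-test on conversion rates; returns the p-value."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Control converts 120/2,400 clicks (5.0%); test converts 156/2,400 (6.5%)
p_value = conversion_significance(120, 2400, 156, 2400)
print(f"p-value: {p_value:.4f}")  # ~0.026: below 0.05, unlikely to be chance
```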

Once the experiment has run and reached statistical significance, you can now either apply the experiment if the test page worked better or revert to the previous page.

It’s important to debrief and digest what’s happened with either test outcome. If the new page performed worse, why was that the case? What can be done to edit the page and test again?

If the new page performed better, what in particular about it worked?

Use tools such as Microsoft Clarity to analyse the on-site user behaviour.

If the new landing page was a success for PPC, it’s advisable to consider pulling these new elements across to pages used by other marketing channels.

Ad copy / creative

A/B testing offers a huge amount of value when it comes to testing ad copy and creative.

If you have a particular image or message that’s driving performance, introducing new variants can offer further growth, but it also risks lowering where your baseline sits.

Hypothesis: If we use the same targeting, can we drive more engagement and click volume through to the landing page that we know is converting well?

Variable: Only the copy/creative variant should be changed to run this scientifically. Ideally change only one element: if the message on the image is changing, keep the colours and visual elements consistent, or vice versa.

Expectations: If the test creative (the new one) works better, we will see the same conversion rate and return but paired with a higher volume of leads or sales as we have improved our engagement rate. If the control creative (the current one) performs better, we will know that this is the most engaging and qualifying creative for the audience in this channel.

Use cases: This is a valuable part of ongoing incremental performance uplifts. If you’ve achieved a baseline of performance that works for your business, these small experiments and tweaks are where you can find further gains. On paid social, ad fatigue is a real issue. Creative needs to be refreshed to maintain performance; however, you want to deploy refreshes in a way that aims to better the current performance, not negate it with a wholesale change.

Creative A/B testing can give so much insight into what resonates with the user: brand vs product/service, trust signals vs pragmatism, price sensitivity vs value-adds.

Deployment: Keep your targeting and landing page the same for this A/B test. Only change one aspect of the creative or copy to ensure that there aren’t multiple elements which could be the cause for a performance shift. Deploy via Google Ads or Meta experiments to split traffic in a scientific way.

Measurement: You would expect to see an increase in click-through rate (CTR) if the newer creative has had a positive impact. If this is a B2B campaign, lead quality can also be reviewed to check whether the newer messaging has better qualified the user. Review CTR alongside conversion rate, CPA and CPM to get the full picture of this test’s performance.
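
As a quick illustration (with made-up totals), all four of those metrics fall out of the same raw figures, so control and test can be compared side by side:

```python
def ad_metrics(impressions, clicks, conversions, cost):
    """Derive the standard PPC metrics from raw campaign totals."""
    return {
        "CTR": clicks / impressions,        # click-through rate
        "CVR": conversions / clicks,        # conversion rate
        "CPA": cost / conversions,          # cost per acquisition
        "CPM": cost / impressions * 1000,   # cost per 1,000 impressions
    }

# Illustrative numbers only: same audience, two creatives
control = ad_metrics(impressions=50_000, clicks=1_200, conversions=60, cost=1_800)
test = ad_metrics(impressions=50_000, clicks=1_500, conversions=72, cost=1_850)

for metric in control:
    print(f"{metric}: control {control[metric]:.4g} vs test {test[metric]:.4g}")
```

In this made-up case the test creative lifts CTR and lowers CPA at a similar conversion rate, which is the pattern the expectations above describe.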

Provided that the test has been deployed correctly, it should be straightforward to isolate what in particular worked better or worse here.

How can you take those learnings, positive or constructive, back into other channels in the business?

What would you test next based on what’s been learned?

Match types

Keyword match types in PPC let you trade off the relevance of the search query against the volume of potential clicks.

Users expect more from their devices now, so they will often search far more broadly than, say, ten years ago, when we had to make a very specific search to get the right result.

Broad match can take into account many more valuable data signals, such as browsing history, which other match types cannot.

This flips the intent argument: a user who has searched “running shoes” and whose data footprint indicates they’ve visited three online running shoe stores in the past seven days has very high intent to purchase. At least as much as someone searching “buy running shoes”.

The challenge with broad match is that, in order to do what it can, it needs conversion data in the campaign to use as a signal. If enabled too soon, it can waste a huge amount of budget, causing commercial damage in many cases.

A/B testing match types is an easy and practical way to see if there would be any benefit from broadening keyword targeting.

Hypothesis: If we can increase the volume of clicks we’re getting, where we know we have effective copy and a landing page that converts well, we will see more lead or sales volume.

Variable: In this instance, the only variable being changed should be the keyword match type. 

Expectations: If the test is a success, you will see the same high-quality users engaging and converting. Broader match types often see lower click costs, so this can also lead to a lower cost-per-lead or higher ROAS depending on your sector. If the conversion rate is lower on the broad match terms, it’s likely that you don’t have enough conversion data for this to be a viable solution at the moment; this will likely change as machine learning continues to improve.

Use cases: If you’re seeing a positive return but can’t scale the budget any further because you’re limited by search volume, this is a great A/B test to run. If you’re in a highly competitive sector with expensive click costs, expanding to broad match can allow you to capture more customers at a lower cost-per-acquisition.

Deployment: Avoid pausing or adding keywords in only the control campaign, or editing other variables such as copy or landing pages, to preserve the integrity of the statistical significance tests.

Measurement: A positive outcome here can be a number of things. 

  • If broad match performs for you without beating phrase or exact, that’s still very positive: broad has more volume, so matching current performance offers scalability.
  • If broad match outperforms other match types then you’re tapping into the other intent signals such as browsing history and can now continue to scale.
  • If broad match sees lower click costs and comparable or better performance, then it’s showing on less competitive but still high-intent searches, which can lower your overall acquisition costs.
  • If broad match performs worse in terms of conversion rate and/or lead quality, then it’s likely not something the account is ready for right now.
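
A hypothetical sketch of how you might summarise the two arms to decide which of those outcomes you’re in (all figures are illustrative):

```python
def arm_summary(clicks, cost, leads):
    """Summarise one experiment arm: cost per click, conversion rate, cost per lead."""
    return {"CPC": cost / clicks, "CVR": leads / clicks, "CPL": cost / leads}

exact = arm_summary(clicks=900, cost=2_700, leads=45)    # control: exact/phrase
broad = arm_summary(clicks=1_400, cost=3_360, leads=63)  # test: broad match

# Here broad delivers more leads at a lower CPC and CPL with a similar CVR,
# which points at the first and third outcomes above. A clearly worse CVR or
# CPL would instead suggest the account lacks the conversion data broad needs.
print("exact:", exact)
print("broad:", broad)
```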

Tips for testing success

Testing is all about creating an environment that isolates one variable. If there are multiple changes, it’s hard, even when performance improves, to know specifically what made the difference. Consider these three important tips for testing success:

  • Test one variable at a time. Don’t make changes to other parts of your PPC campaigns or website while tests are running.
  • Allow enough time to collect a statistically significant amount of data before making decisions or changes (see the sample-size sketch after this list).
  • Make testing purposeful. Test variables that are likely to drive a quantum leap forward or that offer the business insight into the customer, as lower-performing variants can lessen your baseline.
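
To put a rough number on “enough time”, a standard power-analysis approximation estimates how many clicks each arm needs before a given conversion rate uplift becomes detectable. A minimal sketch, assuming a two-sided test at the usual 95% confidence and 80% power:

```python
from math import ceil
from statistics import NormalDist

def clicks_needed(base_cvr, rel_uplift, alpha=0.05, power=0.8):
    """Approximate clicks per arm to detect a relative conversion rate uplift."""
    p1, p2 = base_cvr, base_cvr * (1 + rel_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(variance * ((z_alpha + z_beta) / (p2 - p1)) ** 2)

# A 3% baseline conversion rate and a hoped-for 20% relative uplift
print(clicks_needed(0.03, 0.20))  # roughly 13,900 clicks per arm
```

At a few hundred clicks a month that means the multi-month wait mentioned earlier; at thousands of clicks a week, 14-28 days becomes realistic.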

Times where A/B testing isn’t ideal

In the modern marketing world, A/B testing and being data-driven is hugely valuable.

No longer do we need to make assumptions or implement changes based on anecdotes. We can ask the data and get the answer, quickly and easily.

That being said, there are some instances where I wouldn’t recommend A/B testing as the right approach:

  • If you have no current performance - waiting and being overly scientific when there’s no performance baseline to affect isn’t an agile way to grow or progress. I would favour risk-taking here until performance is established.
  • If you have a low budget or are in a low volume sector - in some situations, it could take months to hit statistical significance. Whilst the result would be useful, the business can’t stand still for that length of time. I would default to A/B testing, but if you’re not getting meaningful results and can’t afford to wait, you will have to take a risk.
  • If you would benefit more from horizontal scaling - A/B testing is generally about getting more out of what you’re already doing. That has value, but there’s a point where you can be doing it for its own sake; if that’s the case, expanding into a new marketing channel, country or product set can be the better commercial option.

Summary

We hope this guide to A/B testing in PPC has been useful for you!

List out some potential A/B tests that you could run in your accounts, taking into account the guidance we’ve given, to ensure that your testing is purposeful and scientifically deployed.

Use this guide to implement benchmarking, A/B testing, and scientific evaluation in your PPC efforts.