
Learn how to systematically test website elements to improve conversions, reduce bounce rates, and maximise ROI through data-driven A/B testing.

A/B Testing for Websites: A Practical Guide for UK Marketing Professionals

What is A/B Testing?

A/B testing (also called split testing) is a method of comparing two versions of a webpage or element to determine which performs better. By showing version A to one audience segment and version B to another, you can measure which drives more conversions, engagement, or other key metrics. For UK marketing teams, this is essential for optimising spend and improving campaign performance.

Unlike hunches or best practices, A/B testing relies on real user behaviour data. This is particularly valuable when budget is tight and every pound spent needs to deliver measurable results.

Why A/B Testing Matters

Websites aren't static – they're constantly evolving. Small changes can have surprising impacts:

  • Conversion rate improvements of 10–30% are common after successful tests
  • Reduced bounce rates through better UX
  • Higher customer lifetime value from better-qualified leads
  • Lower cost per acquisition by optimising funnels
  • Competitive advantage through continuous refinement

For media agencies managing client budgets, A/B testing demonstrates ROI and justifies paid media spend.

Setting Up Your First A/B Test

Step 1: Define Your Goal

Before testing anything, clarify what success looks like. Common goals include:

  • Increase form completions
  • Boost e-commerce purchases
  • Improve email signups
  • Reduce cart abandonment
  • Increase time on page

Example: A financial services client wants to increase mortgage application submissions. Your goal metric is "completed application forms."

Step 2: Choose What to Test

Start with high-impact elements that affect user behaviour:

  • Headlines – Test different value propositions or emotional triggers
  • Call-to-action (CTA) buttons – Colour, text, size, placement
  • Form fields – Fewer fields often increase completion rates
  • Images/videos – Product photos vs. lifestyle imagery
  • Copy – Benefit-driven vs. feature-driven messaging
  • Page layout – Single-column vs. multi-column designs
  • Pricing display – Monthly vs. annual breakdown

Pro tip: Test one element at a time. This isolates variables and tells you exactly what caused the change in performance.

Step 3: Create Your Variations

Design version B as a clear alternative to version A. If testing a CTA button:

  • Version A (Control): "Learn More" button in grey, bottom of page
  • Version B (Variant): "Get Your Free Quote" button in brand green, mid-page

The difference should be meaningful but not extreme – you want to measure incremental improvements that reflect real-world changes.

Step 4: Determine Sample Size and Duration

Statistical validity is crucial. Testing with too few users produces unreliable results; you need enough data to be confident the difference isn't down to chance.

  • Minimum sample size: Aim for at least 100 conversions per variation (200 total)
  • Test duration: Typically 1–4 weeks, depending on traffic
  • Statistical significance: Use tools like Optimizely or VWO; aim for a 95% confidence level

Example: A UK e-commerce site gets 10,000 monthly visitors with a 2% conversion rate (200 conversions). Splitting traffic 50/50 gives roughly 100 conversions per variation each month, so you'd hit the bare minimum in about four weeks; realistically you'd run for 4–8 weeks to detect a modest uplift with confidence.
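
If you want to sanity-check those rules of thumb yourself, here is a minimal Python sketch using the standard two-proportion sample-size formula. It reuses the 2% baseline rate and 10,000 monthly visitors from the example; the 30% expected uplift and 80% power are illustrative assumptions, and in practice your testing platform does this calculation for you.

    from statistics import NormalDist

    baseline = 0.02                       # control conversion rate (2%, from the example)
    uplift = 0.30                         # relative improvement we hope to detect (assumed)
    variant = baseline * (1 + uplift)     # 2.6%

    alpha, power = 0.05, 0.80             # 95% confidence, 80% power (assumed)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)

    # Standard two-proportion sample-size formula (visitors per variation)
    variance = baseline * (1 - baseline) + variant * (1 - variant)
    n_per_variation = ((z_alpha + z_beta) ** 2 * variance) / (variant - baseline) ** 2

    monthly_visitors = 10_000             # from the example
    weeks_needed = (2 * n_per_variation / monthly_visitors) * 4.33
    print(f"~{n_per_variation:,.0f} visitors per variation, roughly {weeks_needed:.0f} weeks")

The smaller the uplift you want to detect, the more traffic you need – halving the expected uplift roughly quadruples the required sample size.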

Step 5: Run the Test

Use an A/B testing platform to split traffic randomly and equally. Key platforms include:

  • Unbounce – Landing page builder with built-in testing
  • Convert – Advanced testing with multivariate options
  • Optimizely – Enterprise-grade experimentation platform (Google Optimize, formerly the free option, was retired in September 2023)
  • VWO (Visual Website Optimizer) – Affordable and user-friendly

These tools automatically randomise visitors, track conversions, and calculate statistical significance.
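
For context, the core of that "randomise visitors" step can be illustrated in a few lines of Python. This is only a sketch – the experiment name and visitor IDs below are made up, and a real platform also handles cookies, reporting and significance for you – but it shows why the same visitor always sees the same version.

    import hashlib

    def assign_variant(visitor_id: str, experiment: str = "cta-button-test") -> str:
        """Deterministically bucket a visitor into version A or B (50/50 split)."""
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # Made-up visitor IDs – the same ID always gets the same version
    for visitor in ["visitor-101", "visitor-102", "visitor-103"]:
        print(visitor, "sees version", assign_variant(visitor))

Hashing the visitor ID together with the experiment name keeps the split stable across visits, so a returning visitor doesn't flip between versions mid-test.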

Step 6: Analyse Results

Once the test completes, examine:

  • Conversion rate for each version
  • Statistical significance – Is the difference meaningful, not random?
  • Confidence level – Ideally 95% or higher
  • Practical impact – What does this change mean for revenue?

Example: Version B (green button, mid-page) converts at 2.8% vs. Version A at 2.2% – a 27% relative improvement. If the site makes £5,000 per conversion, even six extra conversions a month from that lift means an extra £30,000 in monthly revenue.
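
If you want to check figures like these outside your testing tool, a two-proportion z-test is the standard approach. The sketch below uses the conversion rates from the example; the 10,000 visitors per version is an assumed figure for illustration.

    from math import sqrt
    from statistics import NormalDist

    visitors_a, conversions_a = 10_000, 220   # Version A: 2.2% (visitor count assumed)
    visitors_b, conversions_b = 10_000, 280   # Version B: 2.8% (visitor count assumed)

    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    relative_lift = (rate_b - rate_a) / rate_a

    # Two-proportion z-test using the pooled conversion rate
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z_score = (rate_b - rate_a) / std_error
    p_value = 2 * (1 - NormalDist().cdf(abs(z_score)))

    print(f"Relative lift: {relative_lift:.0%}, z = {z_score:.2f}, p = {p_value:.4f}")
    # At these numbers: lift ≈ 27%, p ≈ 0.007 – significant at the 95% level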

Step 7: Implement and Iterate

If Version B wins, roll it out to 100% of traffic. Document the result, then design the next test based on what you've learned.

If neither version significantly outperforms, consider why:

  • The difference was too subtle
  • Traffic was too low
  • The metric you chose wasn't sensitive enough

Move forward with a new test rather than dwelling on inconclusive results.

Common A/B Testing Mistakes

Stopping tests too early – Impatience leads to false positives. Wait for statistical significance.

Testing too many elements simultaneously – Multivariate testing is advanced; start with single-variable tests.

Ignoring external factors – A spike in paid traffic or media coverage can skew results. Note any anomalies.

Not considering seasonality – Test duration should account for natural traffic fluctuations.

Obsessing over small wins – A 1–2% improvement might not be worth implementation effort. Focus on 10%+ gains.

Real-World Example: B2B Lead Generation

A UK recruitment agency was getting decent traffic but too few leads converting through its contact form. They tested:

  • Control: Generic "Contact Us" form with 8 fields
  • Variant: Streamlined form with 3 fields (name, email, phone)

Result: An 18% increase in form submissions with the shorter form. Secondary benefit: lead quality remained consistent, suggesting the extra fields had been adding friction rather than filtering out less serious prospects.

Action: Implemented the 3-field form permanently and reduced cost per lead by 15%.
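
The cost-per-lead arithmetic is worth making explicit: if media spend stays flat while lead volume rises 18%, cost per lead falls by roughly 15%. A quick sketch (the spend and lead figures below are illustrative, not from the case study):

    monthly_spend = 10_000              # £ per month – illustrative
    leads_before = 100                  # baseline lead volume – illustrative
    leads_after = leads_before * 1.18   # 18% more submissions with the shorter form

    cpl_before = monthly_spend / leads_before
    cpl_after = monthly_spend / leads_after
    print(f"Cost per lead: £{cpl_before:.0f} → £{cpl_after:.2f} "
          f"({1 - cpl_after / cpl_before:.0%} lower)")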

Building a Testing Culture

Successful media agencies run continuous tests:

  • Monthly testing roadmap – Plan 2–3 tests in advance
  • Cross-team collaboration – Involve creative, strategy, and analytics
  • Document learnings – Build an internal knowledge base
  • Celebrate incremental wins – Small improvements compound

Key Takeaways

A/B testing transforms guesswork into data-driven decision-making. Start simple, measure honestly, and iterate relentlessly. Over 6–12 months of consistent testing, even small improvements (5–10% per test) compound into significant conversion rate gains that justify your media spend and impress clients.
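
To see how that compounding works, here is a rough back-of-envelope sketch. The assumption that six tests a year each deliver a 7% winner is illustrative, not a forecast.

    baseline_cr = 0.02              # starting conversion rate (2%)
    winning_tests_per_year = 6      # e.g. 12 tests run, half produce a winner (assumed)
    avg_lift_per_win = 0.07         # 7% relative improvement per winning test (assumed)

    cr = baseline_cr
    for _ in range(winning_tests_per_year):
        cr *= 1 + avg_lift_per_win

    print(f"Conversion rate after a year: {cr:.2%} (up {cr / baseline_cr - 1:.0%} overall)")

Six 7% lifts compound to roughly a 50% improvement on the starting conversion rate – the kind of cumulative gain that is hard to achieve with any single change.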

Need expert help?

Request a callback and we'll show you how to put this into practice.
