# How to Build A/B Testing Statistical Significance Frameworks
In the world of digital marketing, making decisions based on 'gut feel' can be a costly mistake for Australian small businesses. A statistical significance framework allows you to determine if the increase in conversions you see in a test is a result of your changes or just random chance, ensuring you only invest in strategies that actually work.
Building this framework prevents you from chasing 'false positives'—those moments where a test looks like a winner but fails to deliver results once rolled out permanently.
## Prerequisites
Before you start, ensure you have the following:
- A conversion goal: A specific action you want users to take (e.g., filling out a contact form or making a purchase).
- Baseline data: Your current conversion rate and average monthly traffic.
- Testing software: Tools like Google Optimize (or its successors), VWO, or even a simple spreadsheet.
- A basic understanding of your margins: Knowing what a lead is worth to your business.
---
## Step 1: Define Your Minimum Detectable Effect (MDE)
The MDE is the smallest improvement in conversion rate that you actually care about. If an experiment only improves your sales by 0.1%, is it worth the effort of changing your website?
- Action: Decide on a percentage (e.g., a 5% or 10% lift).
- Screenshot Description: You should see a blank spreadsheet row labelled "MDE %" where you will input your target improvement.
## Step 2: Calculate Your Sample Size Requirements
You cannot stop a test just because you see a 'winner' after two days. You need a mathematically sound sample size to reach significance.
- Action: Use an online calculator (such as Evan Miller's or AB Tasty's) to input your current conversion rate and your MDE. It will tell you exactly how many visitors each variation needs.
- Pro Tip: For most Australian SMEs with moderate traffic, aim for a 95% confidence level.
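If you'd prefer to see what those online calculators are doing under the hood, the standard two-proportion sample size formula can be sketched in a few lines of Python using only the standard library. The baseline rate and lift below are illustrative, not figures from this article:

```python
from math import ceil, sqrt
from statistics import NormalDist  # standard library, Python 3.8+

def sample_size_per_variant(baseline_rate, relative_lift,
                            alpha=0.05, power=0.80):
    """Visitors needed in EACH variation for a two-sided
    two-proportion z-test at the given alpha and power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)        # rate implied by the MDE
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: 3% baseline conversion rate, looking for a 20% relative lift
print(sample_size_per_variant(0.03, 0.20))  # roughly 14,000 visitors per variant
```

Divide the result by your average daily traffic per variation to estimate how long the test will need to run.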
## Step 3: Establish a Null Hypothesis
In statistics, the 'Null Hypothesis' (H0) assumes that your change will have no effect on user behaviour. Your goal is to gather enough evidence to 'reject' the null hypothesis.
- Action: Write it down: "Changing the 'Submit' button colour from blue to green will have no impact on form submissions."
## Step 4: Set Your Significance Level (Alpha)
The significance level is the probability of rejecting the null hypothesis when it is actually true (a false positive).
- Action: In your framework, set your Alpha at 0.05. This means you accept no more than a 5% chance of declaring a winner when the difference is really just random noise.
## Step 5: Determine the Power of the Test (Beta)
Statistical power is the probability that your test will detect a real effect of at least your MDE, if one exists.
- Action: Set your power to 0.80 (80%). This is the industry standard: if there is a real difference to be found, your test has an 80% chance of finding it.
## Step 6: Create a Standardised Testing Log
Consistency is key for long-term growth. Create a centralised document (Google Sheets or Excel) to track every test you run.
- Action: Create columns for: Test Name, Start Date, End Date, Control Conversion Rate, Variant Conversion Rate, P-Value, and Result (Winner/Loser/Inconclusive).
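If you keep the log as a plain CSV file rather than a spreadsheet, a small helper can enforce those columns so every entry stays consistent. The function name, file name, and example values below are purely illustrative:

```python
import csv
from pathlib import Path

LOG_COLUMNS = ["Test Name", "Start Date", "End Date",
               "Control Conversion Rate", "Variant Conversion Rate",
               "P-Value", "Result"]

def log_test(row, path="testing_log.csv"):
    """Append one completed test to the shared CSV log,
    writing the header row the first time the file is created."""
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_COLUMNS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)

log_test({
    "Test Name": "Green submit button",
    "Start Date": "2024-06-03", "End Date": "2024-06-17",
    "Control Conversion Rate": "3.0%", "Variant Conversion Rate": "3.8%",
    "P-Value": "0.027", "Result": "Winner",
})
```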
## Step 7: Account for External Factors (The 'Australian Context')
External events can skew your data. If you run a test during a massive EOFY (End of Financial Year) sale or a public holiday like Australia Day, your results might not reflect 'normal' user behaviour.
- Action: Note any public holidays or major sales periods in your testing log to provide context during the analysis phase.
## Step 8: Run the Test for Full Business Cycles
Even if you reach your sample size in three days, you must run the test for at least one full week (ideally two). User behaviour on a Monday morning is vastly different from a Saturday night.
- Action: Set a calendar reminder to check the test only after 7 or 14 full days have passed.
## Step 9: Calculate the P-Value
Once the test is complete, look at the P-Value. This number tells you how likely it is that you would see a difference at least as large as the one between your control and variant if your change truly had no effect.
- Action: If the P-Value is less than your Alpha (0.05), your result is 'Statistically Significant'.
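Most testing tools report the P-Value for you, but it can also be computed directly with the standard pooled two-proportion z-test. The visitor and conversion counts below are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist  # standard library, Python 3.8+

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: control 150/5000 (3.0%) vs variant 190/5000 (3.8%)
p = two_proportion_p_value(150, 5000, 190, 5000)
print(f"p = {p:.4f}")  # about 0.027, below alpha = 0.05
print("Statistically significant" if p < 0.05 else "Inconclusive")
```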
## Step 10: Perform a Post-Test Segment Analysis
Sometimes a test fails overall but wins for a specific group.
- Action: Look at your data specifically for mobile vs. desktop users, or Brisbane-based users vs. interstate users. You might find that your change worked brilliantly for mobile users even if the desktop results were flat.
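The segment check can be sketched with the same pooled two-proportion z-test used for the overall result. One caution: every extra segment you test is another chance of a false positive, so the sketch below applies a simple Bonferroni correction (dividing alpha by the number of segments). The segment counts are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from the pooled two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical splits: (control conversions, control n, variant conversions, variant n)
segments = {
    "mobile":  (60, 2000, 95, 2000),
    "desktop": (90, 3000, 95, 3000),
}

# Bonferroni correction: testing several segments inflates the
# false-positive risk, so divide alpha by the number of segments.
alpha = 0.05 / len(segments)
for name, (ca, na, cb, nb) in segments.items():
    p = two_proportion_p_value(ca, na, cb, nb)
    verdict = "significant" if p < alpha else "inconclusive"
    print(f"{name}: p = {p:.4f} -> {verdict}")
```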
## Step 11: Document Learnings and Archive
Every test is a win if you learn something. Even a 'losing' test tells you what your customers don't like.
- Action: Write a brief summary of *why* you think the result occurred and save it in your framework for future reference.
---
## Common Mistakes to Avoid
- Peeking at results: Checking the data every hour and stopping the test as soon as it looks like a win. This is the fastest way to get a false positive.
- Testing too many things at once: If you change the headline, the image, AND the button colour, you won't know which change caused the result.
- Ignoring 'Local' Trends: Forgetting that Australian seasons are opposite to the Northern Hemisphere. Testing winter coats in July makes sense here, but following US-based marketing blogs blindly might lead you to test them in January!
## Troubleshooting
- The test is taking too long: If your traffic is low and it will take 6 months to reach significance, increase your MDE. Instead of looking for a 2% lift, look for a 20% lift.
- Results are 'Inconclusive': This happens often. It means there was no significant difference. In this case, stick with your original design (the control) and move on to a bolder hypothesis.
- The 'Winner' didn't result in more sales: Check your tracking. Did you track 'clicks' instead of 'purchases'? Always optimise for the metric that actually impacts your bottom line.
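On the first troubleshooting point, it may help to see just how much a bigger MDE shortens a test. Lehr's rule of thumb (roughly n = 16·p(1−p)/d² per variant at 80% power and alpha of 0.05) shows that the required sample size scales with the inverse square of the lift you are hunting for. The baseline rate below is illustrative:

```python
def lehr_sample_size(baseline_rate, relative_lift):
    """Lehr's rule of thumb: visitors per variant at roughly
    80% power and a two-sided alpha of 0.05."""
    d = baseline_rate * relative_lift  # absolute difference sought
    return round(16 * baseline_rate * (1 - baseline_rate) / d ** 2)

# 3% baseline conversion rate: hunting a 2% lift vs a 20% lift
small_lift = lehr_sample_size(0.03, 0.02)  # roughly 1.3 million visitors per variant
big_lift = lehr_sample_size(0.03, 0.20)    # roughly 13,000 visitors per variant
print(small_lift, big_lift)                # a 10x bigger lift needs ~100x less traffic
```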
## Next Steps
Now that you have a framework, it's time to start your first experiment. Start with high-impact areas like your homepage headline or your primary Call to Action (CTA).
If you find the statistical side of marketing a bit overwhelming, the team at Local Marketing Group is here to help. We specialise in data-driven growth for Australian businesses. Contact us today to discuss how we can optimise your conversion rates.