A/B Testing allows you to compare two or more variations of your Shopify pages—such as product pages, collections, or landing pages—to determine which performs better. By dividing visitor traffic between different versions, you can measure key performance metrics and make data-driven decisions to optimize your store.
This guide walks you through setting up, managing, and interpreting A/B tests in Instant.
Overview
A/B testing helps you understand what resonates with your customers by showing different page versions to different visitors and measuring the results.
What You Can Test
| Content Type | Examples |
| --- | --- |
| Pages | Landing pages, about pages, promotional pages |
| Products | Product detail page layouts, buy box designs |
| Collections | Collection page layouts, filter arrangements |
| Blog Articles | Content layouts, call-to-action placements |
| Custom URLs | Any page within your Shopify store |
Getting Started with A/B Testing
1. Navigate to your project dashboard in Instant.
2. From the left sidebar, select A/B Testing.
3. If this is your first time, you'll see a welcome screen inviting you to create your first test.
Creating a New A/B Test
Step 1: Create a New Test
Click New A/B Test to open the test creation panel.
Step 2: Configure General Settings
Define the basic settings for your test:
| Setting | Description |
| --- | --- |
| Name | Choose a descriptive name (e.g., "Homepage Layout Test" or "New Product CTA Test") |
| Localization | Select the market and language your test applies to. This ensures the correct Markets URL is used when creating redirects. |
Step 3: Define Variants
Variants represent the different versions of your page to compare. For each variant, add a name and select a content type:
| Content Type | Description |
| --- | --- |
| Page | Standard Instant pages |
| Product | Product detail pages |
| Collection | Collection pages |
| Blog Article | Blog post pages |
| Custom URL | Any URL within your store |
Important: Custom URLs must exist within your connected Shopify store. External URLs cannot be tracked or included in A/B tests.
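The rule behind this restriction is simple: a variant URL has to resolve to your connected store's domain, otherwise sessions on it can't be tracked. A rough illustration of such a check (the `isTestableUrl` helper and `storeDomain` value are hypothetical, not part of Instant's API):

```typescript
// Hypothetical check: does a candidate variant URL belong to the store?
function isTestableUrl(candidate: string, storeDomain: string): boolean {
  try {
    const url = new URL(candidate);
    // Accept the store's own domain and any of its subdomains.
    return url.hostname === storeDomain || url.hostname.endsWith(`.${storeDomain}`);
  } catch {
    return false; // Not a valid absolute URL at all.
  }
}

isTestableUrl("https://mystore.com/pages/summer-sale", "mystore.com"); // true
isTestableUrl("https://example.org/landing", "mystore.com");           // false
```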
Step 4: Set the Traffic Split
Determine how much visitor traffic should go to each variant.
| Example Split | Traffic Distribution |
| --- | --- |
| Variant A | 60% |
| Variant B | 40% |
This defines how your visitors are distributed between the two experiences.
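Conceptually, a traffic split is just a weighted random assignment. Instant's exact mechanism isn't documented here, but a minimal sketch of the idea in TypeScript, using the 60/40 example above (the `Variant` type and `assignVariant` function are illustrative):

```typescript
interface Variant {
  name: string;
  weight: number; // Percentage of traffic; weights should sum to 100.
}

// The 60/40 split from the example above.
const variants: Variant[] = [
  { name: "Variant A", weight: 60 },
  { name: "Variant B", weight: 40 },
];

// Roll once per new visitor and walk the cumulative weights.
function assignVariant(variants: Variant[]): Variant {
  const roll = Math.random() * 100;
  let cumulative = 0;
  for (const variant of variants) {
    cumulative += variant.weight;
    if (roll < cumulative) return variant;
  }
  return variants[variants.length - 1]; // Guard against rounding drift.
}
```

In practice, tools in this category typically persist the assignment (for example in a cookie) so a returning visitor keeps seeing the same variant; otherwise the metrics would mix both experiences for one person.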
Step 5: Define Your Goal Metrics
Choose which key performance indicator (KPI) you want the test to optimize for:
| Goal | Description |
| --- | --- |
| Conversion Rate (CR) | Percentage of visitors completing a desired action |
| Revenue per Visitor (RPV) | Average revenue generated per unique visitor |
| Average Order Value (AOV) | Average value of all completed orders |
| Click-Through Rate (CTR) | Proportion of visitors clicking a tracked element |
These metrics determine the "winner" of the test once it concludes.
Step 6: Generate the Test Link
Click Start Test to launch your A/B test. After the test starts, direct traffic to the A/B Test link.
Test Link Format
The test link uses your own Shopify store domain:
https://mystore.com/apps/instant/go/xxxxxxxxxxxx
Why This Matters: Because all traffic stays on the same domain as your store, tracking remains consistent and reliable, and your ad campaigns avoid the cross-domain redirects that can push them back into their learning phase.
Monitoring Test Performance
After starting your test, you'll be redirected to the Test Detail Page, which displays:
- Test name and start date
- Selected goal metric
- Live performance metrics for each variant
- Confidence level
- Traffic and conversion summaries
Available Metrics
| Metric | Description |
| --- | --- |
| Sessions | Total number of visits to your test variants. Each session represents one unique visitor viewing a variant. |
| Conversions | Number of sessions that resulted in a completed purchase |
| Revenue | Total sales revenue generated from sessions in your test (based on completed orders) |
| Revenue per Visitor (RPV) | Average revenue earned per session, calculated by dividing total revenue by total sessions |
| Click-Through Rate (CTR) | Percentage of visitors who clicked through to another page beyond the original variant URL |
| Average Order Value (AOV) | Average value of all orders from your test, calculated by dividing total revenue by the number of conversions |
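Since RPV and AOV are simple ratios of the totals above, a quick worked example helps (these numbers are made up, not Instant output):

```typescript
// Illustrative per-variant totals.
const variant = {
  sessions: 1200,
  conversions: 36,    // Sessions that ended in a completed purchase.
  revenue: 2700,      // Total revenue from those orders, in store currency.
  clickThroughs: 540, // Sessions that navigated beyond the variant URL.
};

const conversionRate = variant.conversions / variant.sessions; // 0.03 -> 3% CR
const rpv = variant.revenue / variant.sessions;                // 2.25 per session
const aov = variant.revenue / variant.conversions;             // 75 per order
const ctr = variant.clickThroughs / variant.sessions;          // 0.45 -> 45% CTR
```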
Understanding the Confidence Indicator
Instant uses a confidence-based approach tailored for ecommerce. When the model reaches a confidence level above 90%, it indicates that one variation is very likely performing better based on real-time data.
Why This Approach?
| Challenge | Solution |
| --- | --- |
| Traditional A/B testing requires thousands of conversions | The confidence-based approach works with smaller sample sizes |
| ~80% of ecommerce stores lack the traffic for classic significance tests | Move faster without waiting for rigid academic requirements |
| Long test durations delay optimization | Get actionable insights sooner |
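Instant doesn't publish its exact model, but confidence engines of this kind are commonly Bayesian: each variant's conversion rate gets a Beta posterior, and the confidence figure corresponds to the probability that one variant truly beats the other. A minimal Monte Carlo sketch of that general technique (illustrative only, not Instant's actual implementation):

```typescript
// Gamma sampler for integer shape parameters (sum of exponentials).
// Fine here because shapes come from counts; a general-purpose sampler
// would use Marsaglia-Tsang instead.
function sampleGammaInt(shape: number): number {
  let sum = 0;
  for (let i = 0; i < shape; i++) sum += -Math.log(Math.random());
  return sum;
}

// Beta(alpha, beta) sample via the ratio of two Gamma samples.
function sampleBeta(alpha: number, beta: number): number {
  const x = sampleGammaInt(alpha);
  const y = sampleGammaInt(beta);
  return x / (x + y);
}

// Probability that B's true conversion rate beats A's, estimated by
// drawing from each variant's Beta(1 + successes, 1 + failures) posterior.
function probabilityBBeatsA(
  a: { sessions: number; conversions: number },
  b: { sessions: number; conversions: number },
  draws = 100_000,
): number {
  let wins = 0;
  for (let i = 0; i < draws; i++) {
    const pA = sampleBeta(1 + a.conversions, 1 + a.sessions - a.conversions);
    const pB = sampleBeta(1 + b.conversions, 1 + b.sessions - b.conversions);
    if (pB > pA) wins++;
  }
  return wins / draws;
}

// Example: 30/1000 vs 45/1000 conversions.
probabilityBBeatsA({ sessions: 1000, conversions: 30 }, { sessions: 1000, conversions: 45 });
```

With these example counts, the estimate lands comfortably above 0.9, the kind of reading that would map to a confident winner.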
Why Sufficient Data Matters
To accurately declare a winner:
- Each variant must receive enough sessions and conversions
- Small sample sizes can lead to false positives or unreliable conclusions
- The system automatically notifies you if the test doesn't have enough data to declare a winner
Ending a Test
When ready, click End Test. You'll be prompted to decide what happens next:
| Option | Description |
| --- | --- |
| Redirect all traffic to winner | Automatically send all visitors to the winning variant |
| Review results first | Examine the data before implementing changes |
Note: If there isn't enough confidence to pick a clear winner, you'll be notified before finalizing.
Managing A/B Tests
From the A/B Testing Overview Page, you can view all your past and active experiments.
Available Actions
Each test entry provides a three-dot menu with the following options:
| Action | Description |
| --- | --- |
| View | Opens the test detail view |
| Edit | Modify test settings after the experiment has started |
| Duplicate | Create a new test based on an existing one |
| Delete | Permanently remove the test and its redirect URL |
Best Practices for Reliable A/B Tests
Testing Strategy
| Practice | Why It Matters |
| --- | --- |
| Test one major variable at a time | Isolates the impact of each change (headline, image, layout) |
| Run tests for a full conversion cycle | Avoids ending too early and drawing unreliable conclusions |
| Ensure balanced traffic | Both variants need enough visits and conversions |
| Avoid overlapping tests | Multiple tests affecting the same page can skew results |
| Document learnings | Past insights guide future optimization efforts |
Minimum Requirements
| Requirement | Recommendation |
| --- | --- |
| Test duration | At least 7 days (ideally 7-14 days) |
| Sessions per variant | Aim for 1,000+ |
| Conversions per variant | More conversions = higher confidence |
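For context on why 1,000+ sessions is only a floor, a rough classical estimate shows how fast requirements grow for small lifts. This sketch uses the standard two-proportion approximation at ~95% significance and 80% power (the function name and inputs are illustrative):

```typescript
// Rough per-variant sample size to detect a relative lift in conversion
// rate, using z = 1.96 (95% significance) + 0.84 (80% power).
function sessionsPerVariant(baselineRate: number, relativeLift: number): number {
  const p = baselineRate;
  const delta = p * relativeLift; // Absolute difference to detect.
  const z = 1.96 + 0.84;
  return Math.ceil((2 * z * z * p * (1 - p)) / (delta * delta));
}

// A 3% baseline CR with a 20% relative lift (3% -> 3.6%) works out to
// roughly 12,700 sessions per variant under classical assumptions.
sessionsPerVariant(0.03, 0.2);
```

This is exactly the burden the confidence-based approach described above is meant to ease, but the direction of the math still holds: the smaller the lift you care about, the more sessions each variant needs.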
Quick Reference
| Task | Action |
| --- | --- |
| Access A/B Testing | Project dashboard → Left sidebar → A/B Testing |
| Create new test | Click New A/B Test → Configure settings |
| Set traffic split | Define percentage for each variant |
| Choose goal metric | Select CR, RPV, AOV, or CTR |
| Start test | Click Start Test → Copy test link |
| Monitor performance | View the Test Detail Page |
| End test | Click End Test → Choose next action |
| Manage tests | Use the three-dot menu on test entries |
Goal Metrics Comparison
| Goal | Best For |
| --- | --- |
| Conversion Rate | Overall purchase behavior |
| Revenue per Visitor | Balancing conversions and order value |
| Average Order Value | Upselling and cross-selling effectiveness |
| Click-Through Rate | Engagement and navigation testing |