A/B testing

A/B Testing allows you to compare two or more variations of your Shopify pages, such as product pages, collections, or landing pages, to determine which performs better. By dividing visitor traffic between the different versions, you can measure key performance metrics (like conversions or revenue) and make data-driven decisions to optimize your store.

This guide walks you through setting up, managing, and interpreting A/B Tests in your Shopify Page Builder project dashboard.

1. Getting Started with A/B Testing

Accessing the A/B Testing Dashboard

  1. Navigate to your project dashboard in Instant.

  2. From the left sidebar, select A/B Testing.

  3. If this is your first time using the feature, you’ll see a welcome screen inviting you to create your first A/B test.

2. Creating a New A/B Test

Step 1: Create a New Test

Click New A/B Test to open the test creation panel.

Step 2: Configure General Settings

In this section, you’ll define the following:

  • Name – Choose a descriptive name for your test (e.g., “Homepage Layout Test” or “New Product CTA Test”).

  • Localization – Select the market and language your test applies to. This ensures the correct Markets URL is used when creating the redirects.

Step 3: Define Variants

Variants represent the different versions of your page to compare. For each variant, you can add a name and select one of the following content types:

  • Page

  • Product

  • Collection

  • Blog Article

  • Custom URL

Important:
Custom URLs must exist within your connected Shopify store. External URLs cannot be tracked or included in A/B tests.

Step 4: Set the Traffic Split

Determine how much visitor traffic should go to each variant.
For example:

  • Variant A: 60%

  • Variant B: 40%

This defines how your visitors are distributed between the two experiences.
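To illustrate how a weighted split like this can behave, here is a minimal sketch that assigns each visitor to a variant deterministically by hashing a visitor ID. This is only an illustration of the concept; Instant's actual bucketing mechanism is not documented here, and the visitor-ID scheme below is a hypothetical example.

```python
import hashlib

# Traffic split in percent, mirroring the 60/40 example above.
# Weights must sum to 100.
VARIANTS = [("A", 60), ("B", 40)]

def assign_variant(visitor_id: str) -> str:
    """Map a visitor ID to a stable bucket in [0, 100) and pick a variant."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    threshold = 0
    for name, weight in VARIANTS:
        threshold += weight
        if bucket < threshold:
            return name
    return VARIANTS[-1][0]  # fallback for rounding edge cases

# Because assignment is a pure function of the ID, the same visitor
# always sees the same variant across repeat visits.
```

Hash-based assignment is one common design choice because it needs no server-side state: the split stays consistent without storing which bucket each visitor landed in.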

Step 5: Define Your Goal Metrics

Choose which key performance indicator (KPI) you want the test to optimize for. Available goals include:

  1. Conversion Rate (CR) – Percentage of visitors completing a desired action.

  2. Revenue per Visitor (RPV) – Average revenue generated per unique visitor.

  3. Average Order Value (AOV) – Average value of all completed orders.

  4. Click-Through Rate (CTR) – Proportion of visitors clicking a tracked element.

These metrics will determine the “winner” of the test once it concludes.

Step 6: Generate the Test Link

The A/B Test link is the URL used to access the A/B test experience. Once everything is configured, click Start Test to launch your A/B test. After the test has started, you can begin directing traffic to the A/B Test link.

The test link uses your own Shopify store domain and follows this URL format:

https://mystore.com/apps/instant/go/xxxxxxxxxxxx 

This ensures accurate ad tracking and prevents your campaigns from re-entering the learning phase caused by redirects to a different domain. Because all traffic now remains on the same domain as your store, tracking stays consistent and reliable.

3. Monitoring Test Performance

After starting your test, you’ll be redirected to the Test Detail Page, which provides:

  • Test name and start date

  • Selected goal metric

  • Live performance metrics for each variant

  • Confidence level

  • Traffic and conversion summaries

Available Metrics

  • Sessions – The total number of visits to your test variants. Each session represents one unique visitor viewing a variant of your test.

  • Conversions – The number of sessions that resulted in a completed purchase.

  • Revenue – Total sales revenue generated from sessions included in your test, based on completed orders.

  • Revenue per Visitor (RPV) – The average revenue earned per session, calculated by dividing total revenue by total sessions in your test.

  • Click-Through Rate (CTR) – The percentage of sessions in which the visitor clicked through to a page other than the original variant URL.

  • Average Order Value (AOV) – The average value of all orders from your test, calculated by dividing total revenue by the number of conversions.
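The metric formulas above can be sketched as simple functions. This is illustrative only; Instant computes these metrics for you on the Test Detail Page, and the example numbers are hypothetical.

```python
def conversion_rate(conversions: int, sessions: int) -> float:
    """CR: share of sessions that ended in a completed purchase."""
    return conversions / sessions if sessions else 0.0

def revenue_per_visitor(revenue: float, sessions: int) -> float:
    """RPV: total revenue divided by total sessions."""
    return revenue / sessions if sessions else 0.0

def average_order_value(revenue: float, conversions: int) -> float:
    """AOV: total revenue divided by the number of completed orders."""
    return revenue / conversions if conversions else 0.0

def click_through_rate(clicks: int, sessions: int) -> float:
    """CTR: share of sessions that clicked through to another page."""
    return clicks / sessions if sessions else 0.0

# Example: 1,000 sessions, 40 purchases, $2,400 revenue
print(conversion_rate(40, 1000))        # 0.04 -> 4% CR
print(revenue_per_visitor(2400, 1000))  # 2.4  -> $2.40 RPV
print(average_order_value(2400, 40))    # 60.0 -> $60 AOV
```

Note that RPV and AOV share the same revenue numerator but different denominators: RPV spreads revenue over all sessions, while AOV spreads it only over converting sessions.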

4. Understanding the Confidence Indicator

We use a confidence-based approach tailored for ecommerce. When our model reaches a confidence level above 90%, we surface that one variation is very likely performing better, based on real-time data. Traditional A/B testing significance often requires thousands of conversions to be reliable, and roughly 80% of ecommerce stores don’t have enough traffic to run classic significance tests reliably. With this confidence-based approach, you can move faster without waiting to meet rigid academic significance requirements.
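The exact model behind the confidence indicator is not disclosed here. One common confidence-based (Bayesian) approach is to sample conversion rates from each variant's Beta posterior and estimate the probability that one variant beats the other; the sketch below illustrates that general technique, not Instant's implementation.

```python
import random

def confidence_b_beats_a(conv_a: int, sess_a: int,
                         conv_b: int, sess_b: int,
                         draws: int = 20_000, seed: int = 0) -> float:
    """Estimate P(rate_B > rate_A) via Monte Carlo over Beta posteriors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Beta(1 + conversions, 1 + non-conversions): uniform prior
        # updated with the observed sessions for each variant.
        p_a = rng.betavariate(1 + conv_a, 1 + sess_a - conv_a)
        p_b = rng.betavariate(1 + conv_b, 1 + sess_b - conv_b)
        if p_b > p_a:
            wins += 1
    return wins / draws

# E.g. A: 30/1000 converted, B: 60/1000 converted.
# A confidence above the 90% threshold mentioned above would read as:
# confidence_b_beats_a(30, 1000, 60, 1000) > 0.90
```

Under this kind of model, identical variants hover near 50% confidence, and the estimate sharpens as sessions and conversions accumulate, which is why sufficient data matters before declaring a winner.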

Why Sufficient Data Matters

To accurately declare a winner:

  • Each variant must receive enough sessions and conversions.

  • Small sample sizes can lead to false positives or unreliable conclusions.

  • The system will automatically notify you if the test does not yet have enough data to conclude a winner.

5. Ending a Test

When ready, click End Test. You’ll be prompted to decide what happens next:

  • Redirect all traffic to the winning variant

  • Review results before implementing changes

If there isn’t enough confidence to pick a clear winner, you’ll be notified before finalizing.

6. Managing A/B Tests

From the A/B Testing Overview Page, you can view all your past and active experiments.

Each test entry provides a three-dot menu with the following options:

  • View – Opens the test detail view.

  • Edit – Modify test settings after starting the experiment.

  • Duplicate – Create a new test based on an existing one.

  • Delete – Permanently remove the test and its redirect URL.

7. Best Practices for Reliable A/B Tests

  1. Test one major variable at a time (e.g., headline, image, layout).

  2. Run tests for a full conversion cycle (avoid ending too early).

  3. Ensure balanced traffic: both variants should receive enough visits and conversions.

  4. Avoid running multiple overlapping tests that affect the same page or audience.

  5. Document learnings to guide future optimization efforts.