A/B Testing Best Practices

Pack’s A/B Testing works alongside your existing data infrastructure (such as GA4 or a more sophisticated provider). In other words, Pack provides A/B testing on the front end, and then surfaces test event data that your data platforms can consume.

Here’s how data generally flows from your storefront to your results dashboard:

Step 1: Collect data

First, it’s vital to know how your event data flows through your systems and which system is your source of truth.

Most brands collect event data from their Shopify store either through a data layer (often managed by a tool like Elevar, Northbeam, Littledata, Google Tag Manager, Fueled, or Blotout) or via a GA4 script running on the storefront. You may also be using a combination of tools, or a custom solution.
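For instance, if your storefront loads GA4 through gtag.js, a test exposure could be recorded as a custom event. Here is a minimal sketch, assuming a hypothetical event name and parameters (this is not a Pack-specific or GA4-standard schema):

```ts
// Illustrative only: recording an A/B test exposure as a custom GA4 event.
// The event name and parameter names are placeholders, not a Pack or GA4 standard schema.
declare function gtag(...args: unknown[]): void; // provided by the GA4 gtag.js snippet

function trackExperimentExposure(experimentId: string, variant: 'control' | 'variant'): void {
  gtag('event', 'experiment_viewed', {
    experiment_id: experimentId,
    experiment_variant: variant,
  });
}

trackExperimentExposure('homepage-hero-test', 'variant');
```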

It’s best to gather a list of all the solutions you use so that you can clearly map where data is flowing.

Here’s an example of what your data flow may look like:

Step 2: Store data

Your storefront data (including your A/B testing events) is stored in a database or warehouse, such as GA4, BigQuery, Snowflake, Databricks, or ClickHouse. You may have multiple data storage vendors. If you’re aggregating multiple data sources in one place (ex. POS data with ecommerce data), you may need to query your data here.

Pack has built-in integrations with BigQuery and GA4, but you can connect any data and analytics tool you’d like with Pack’s webhooks. We are exploring closer partnerships with newer tools in the space!
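As a rough sketch of the webhook route, a consumer is just an HTTP endpoint that receives each test event and forwards it to your storage of choice. The endpoint path and payload shape below are hypothetical placeholders, not Pack’s actual webhook schema:

```ts
// Minimal sketch of a webhook consumer; the endpoint path and payload shape are hypothetical.
import { createServer } from 'node:http';

createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/pack-events') {
    res.writeHead(404).end();
    return;
  }
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', () => {
    // Hypothetical payload, e.g. { experimentId, variant, eventType, userId, timestamp }
    const event = JSON.parse(body);
    // Forward the event to your warehouse of choice (BigQuery, Snowflake, etc.) here.
    console.log('received test event', event);
    res.writeHead(200).end('ok');
  });
}).listen(3000);
```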

Q: Do I need BigQuery or another paid solution for data collection, or can I just use GA4?

A: No, you can use GA4 to both collect and store your data. However, storing your data in an SQL database like BigQuery does give you more flexibility for querying it. BigQuery has a free tier, but there are usage thresholds that may require a paid plan.

Step 3: Visualize your data

There are a few ways you can visualize the results of your test. Here’s how they all work:

Option 1: Pack’s dashboard

Pack’s A/B testing dashboard can automatically pull data from GA4 and/or BigQuery and apply a Bayesian equation to the data to help analyze your results. Pack then visualizes the baseline insights of your test in a dashboard in the Pack admin, so you can see the metrics for your control and variant groups, such as:

  • Total users and control/variant users that viewed the test
  • Total users that engaged with the test (view, click, ATC, etc.)
  • CVR, total purchases, AOV, and revenue
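From counts like these, a Bayesian analysis typically models each group’s conversion rate as a Beta distribution and estimates the probability that the variant beats the control. The sketch below illustrates that general approach with uniform priors and made-up numbers; it is a generic example, not Pack’s actual analysis framework:

```ts
// Illustrative Bayesian comparison using a Beta-Binomial model with uniform priors.
// This is a generic sketch, not Pack's actual analysis framework.

// Draw from Gamma(shape, 1) using the Marsaglia–Tsang method.
function sampleGamma(shape: number): number {
  if (shape < 1) {
    // Boost trick for shape < 1: Gamma(a) = Gamma(a + 1) * U^(1/a)
    return sampleGamma(shape + 1) * Math.pow(Math.random(), 1 / shape);
  }
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    // Standard normal via Box–Muller.
    const n =
      Math.sqrt(-2 * Math.log(1 - Math.random())) *
      Math.cos(2 * Math.PI * Math.random());
    const v = Math.pow(1 + c * n, 3);
    if (v <= 0) continue;
    const u = Math.random();
    if (u < 1 - 0.0331 * n ** 4 || Math.log(u) < 0.5 * n * n + d * (1 - v + Math.log(v))) {
      return d * v;
    }
  }
}

// Beta(a, b) sample from two Gamma draws.
function sampleBeta(a: number, b: number): number {
  const x = sampleGamma(a);
  return x / (x + sampleGamma(b));
}

// Probability that the variant's conversion rate beats the control's,
// given observed conversions and users, using Beta(1, 1) priors.
function probabilityVariantWins(
  controlConversions: number,
  controlUsers: number,
  variantConversions: number,
  variantUsers: number,
  samples = 100_000,
): number {
  let wins = 0;
  for (let i = 0; i < samples; i++) {
    const control = sampleBeta(1 + controlConversions, 1 + controlUsers - controlConversions);
    const variant = sampleBeta(1 + variantConversions, 1 + variantUsers - variantConversions);
    if (variant > control) wins++;
  }
  return wins / samples;
}

// Hypothetical numbers: 480/10,000 control conversions vs. 530/10,000 variant conversions.
console.log(probabilityVariantWins(480, 10_000, 530, 10_000).toFixed(3));
```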

Pros:

  • Built-in integrations with GA4 and BigQuery make this method a low lift
  • Included with A/B testing feature

Cons:

  • It’s not currently possible to edit Pack’s built-in A/B testing analysis framework, so you can’t adjust which charts / graphs you see, confidence levels, or risk thresholds, or switch from Bayesian to Frequentist experiment analysis.

Option 2: A spreadsheet tool (ex. Equals)

You can use a spreadsheet tool with the ability to query an SQL database (like BigQuery) to pull in results, analyze them, and then visualize your data.

To use this method, you’d write an SQL query to pull your testing data from data storage, select your preferred equation to crunch the numbers, and return results as any type of chart / visualization your tool offers.
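For concreteness, here is the kind of aggregation query such a tool would run, sketched with the official BigQuery Node.js client; the project, dataset, table, and column names are hypothetical placeholders:

```ts
// Pull aggregated test results from BigQuery with the official Node.js client.
// The project, dataset, table, and column names below are hypothetical.
import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

async function fetchTestResults(experimentId: string) {
  const query = `
    SELECT
      experiment_variant,
      COUNT(DISTINCT user_id) AS users,
      COUNTIF(event_name = 'purchase') AS purchases
    FROM \`my-project.analytics.experiment_events\`
    WHERE experiment_id = @experimentId
    GROUP BY experiment_variant
  `;
  // Named query parameters are passed via the params object.
  const [rows] = await bigquery.query({ query, params: { experimentId } });
  return rows;
}

fetchTestResults('homepage-hero-test').then(console.log);
```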

Pros:

  • Great if you want more control over your results data, ex. to adjust confidence levels or acceptable risk thresholds, switch to Frequentist experimentation instead of Bayesian, or if you want to analyze complex event funnels within the A/B test

Cons:

  • Requires a bit more setup

Option 3: Other dashboarding tools / vendors (ex. Fullstory, Blotout, CRO agency)

Sometimes your CRO agency or data layer provider will have a method they prefer for pulling in data and analyzing results.

Just like in the spreadsheet method, they can create SQL queries to pull data from your data storage, choose an analysis method and crunch the test result numbers, and then visualize the data for you.

Pros:

  • Offers external support to navigate setup and data accuracy
  • Great if you want more control over your results data, ex. to adjust confidence levels or acceptable risk thresholds, switch to Frequentist experimentation instead of Bayesian, or if you want to analyze complex event funnels within the A/B test

Cons:

  • Requires a bit more setup

Step 4: Consider event data accuracy

If you don’t have a good sense for how accurate your data is, your test results won’t mean much.

Having a robust Conversions API (CAPI) solution in place, like Fueled.io, Blotout, or Littledata, that handles server-side event tracking and matches it with user activity (i.e. deduping, syncing with Meta, and consistent/accurate attribution) is a great first step.
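The core mechanic behind deduping is that the browser event and its server-side counterpart carry a shared event ID, so the downstream platform can discard the duplicate delivery. Here is a rough, vendor-agnostic sketch, in which the /api/track endpoint, payload shape, and parameter names are hypothetical:

```ts
// Vendor-agnostic sketch of client/server event deduplication via a shared event ID.
// The '/api/track' endpoint and payload shape are hypothetical placeholders.
declare function gtag(...args: unknown[]): void; // provided by the GA4 gtag.js snippet

async function trackPurchase(orderId: string, value: number): Promise<void> {
  // One shared ID lets the downstream platform discard the duplicate delivery.
  const eventId = crypto.randomUUID();

  // 1. Browser-side event (e.g. GA4 via gtag.js).
  gtag('event', 'purchase', { transaction_id: orderId, value, event_id: eventId });

  // 2. Server-side event: your CAPI provider forwards this to Meta, GA4, etc.
  await fetch('/api/track', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ eventName: 'purchase', eventId, orderId, value }),
  });
}
```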

If your answer on data accuracy is “not yet” or “I’m not sure,” feel free to book some time with Pack’s team; we can talk you through some helpful strategies and find the right tools for your team.

For additional tips and best practices, check out our blog post.

Resources

Pack A/B Testing

Get a tour of Pack's A/B Testing

A/B Testing API

Get a tour of Pack's customizer and what it can do
