A/B testing lets you compare different message versions to find what resonates with your audience. Test copy, offers, CTAs, and more.

How It Works

  1. Create multiple variants in your campaign
  2. Assign a percentage of the audience to each
  3. Contacts are randomly assigned to variants
  4. Track performance by variant
  5. Use the winner for future campaigns
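Conceptually, the assignment in step 3 is a weighted random draw: each contact is independently bucketed into a variant according to the percentages you set. Here is a minimal sketch of that idea in plain Python (illustrative only, not the platform’s actual code):

```python
import random

def assign_variant(variants):
    """Pick one variant name, weighted by its percentage (percentages sum to 100)."""
    roll = random.uniform(0, 100)
    cumulative = 0.0
    for name, percent in variants:
        cumulative += percent
        if roll < cumulative:
            return name
    return variants[-1][0]  # guard against floating-point rounding at the boundary

# Example: a 50/50 split between two variants
split = [("A", 50), ("B", 50)]
assignments = {contact: assign_variant(split) for contact in ["c1", "c2", "c3", "c4"]}
print(assignments)  # e.g. {'c1': 'A', 'c2': 'B', 'c3': 'B', 'c4': 'A'}
```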

Setting Up an A/B Test

Step 1: Create Campaign

  1. Go to Schedules > New Schedule
  2. Select Blast Mode
  3. Configure sending list and audience

Step 2: Add Variants

Click Add Variant for each version you want to test:
| Variant | Creative | Offer | % |
|---------|----------|-------|---|
| A | “Flash sale today!” | offer_123 | 50% |
| B | “Limited time: 50% off” | offer_123 | 50% |

Step 3: Set Distribution

Adjust percentages to control how the audience is split:
  • 50/50: Equal split between two variants
  • 80/20: Mostly variant A, test B on smaller group
  • 33/33/34: Three-way split
Percentages must add up to 100%.
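If you want to sanity-check a split before entering it, the only constraint is that the percentages are positive and sum to 100. A tiny illustrative check in Python:

```python
def valid_split(percentages):
    """True if the variant percentages form a usable distribution."""
    return all(p > 0 for p in percentages) and sum(percentages) == 100

print(valid_split([50, 50]))      # True  - equal two-way split
print(valid_split([80, 20]))      # True  - mostly A, small test group
print(valid_split([33, 33, 34]))  # True  - three-way split
print(valid_split([60, 50]))      # False - adds up to 110
```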

Step 4: Schedule and Send

Set your schedule as usual. When the campaign runs:
  • Each contact is randomly assigned to a variant
  • They receive that variant’s message
  • Assignment is tracked for reporting

What to Test

Message Copy

Test different approaches to the same offer:
| Variant | Message |
|---------|---------|
| A | “Flash sale! 50% off everything today only” |
| B | “Your exclusive discount: Half off for the next 24 hours” |

Urgency vs. Value

| Variant | Approach |
|---------|----------|
| A | Urgency: “Only 3 hours left!” |
| B | Value: “Save $50 on your order” |

CTA Phrasing

| Variant | CTA |
|---------|-----|
| A | “Shop now →” |
| B | “Claim your discount →” |

Personalization

| Variant | Message |
|---------|---------|
| A | “Hey! Check out our sale” |
| B | “Hi [first name], exclusive sale just for you” |

Different Offers

Test which offer drives more conversions:
| Variant | Offer |
|---------|-------|
| A | offer_summer_sale |
| B | offer_clearance |

Analyzing Results

After the campaign sends, compare variants:

Metrics to Compare

| Metric | What It Tells You |
|--------|-------------------|
| Click Rate | Which message drives more clicks |
| Conversion Rate | Which drives more sales |
| Revenue per Send | Which generates more revenue |
| Opt-out Rate | Which causes more unsubscribes |
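If you export raw counts for each variant, all four metrics are simple ratios over sends. A quick Python sketch; the click and revenue numbers reuse the Viewing Results example below, while the conversion and opt-out counts are made up for illustration:

```python
def variant_metrics(sent, clicks, conversions, revenue, opt_outs):
    """Compute the comparison metrics for one variant from raw counts."""
    return {
        "click_rate": clicks / sent,
        "conversion_rate": conversions / sent,
        "revenue_per_send": revenue / sent,
        "opt_out_rate": opt_outs / sent,
    }

a = variant_metrics(sent=5000, clicks=600, conversions=120, revenue=2340.0, opt_outs=25)
b = variant_metrics(sent=5000, clicks=750, conversions=150, revenue=2890.0, opt_outs=30)
print(a["click_rate"], b["click_rate"])              # 0.12 vs 0.15
print(a["revenue_per_send"], b["revenue_per_send"])  # 0.468 vs 0.578
```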

Viewing Results

  1. Go to Schedules
  2. Click on your campaign
  3. View Performance by Variant
You’ll see:
Variant A: 5,000 sent | 12% clicked | $2,340 revenue
Variant B: 5,000 sent | 15% clicked | $2,890 revenue
Variant B wins on clicks and revenue.

Statistical Significance

Before declaring a winner, ensure results are significant:

Sample Size

More contacts = more reliable results:
| Audience Size | Confidence |
|---------------|------------|
| < 1,000 | Low - results may be random |
| 1,000 - 5,000 | Medium - directional insights |
| 5,000+ | High - reliable conclusions |
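For a rough sense of why small audiences are unreliable, you can estimate how many contacts each variant needs in order to detect a given click-rate lift, using a standard two-proportion power calculation. This is a sketch with SciPy, not something the platform computes for you:

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Contacts needed per variant to detect a click-rate change from p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift from a 12% to a 15% click rate:
print(sample_size_per_variant(0.12, 0.15))  # roughly 2,000 contacts per variant
```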

Margin of Victory

A 1% difference isn’t meaningful. Look for:
  • Click rate: 2%+ difference
  • Revenue: 10%+ difference
  • Conversions: 5%+ difference
If results are close, the variants perform similarly. Pick the one that’s easier to scale or matches your brand better.
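If you want more than a rule of thumb, a two-proportion z-test on the click counts indicates whether the gap is likely real. Another SciPy sketch, applied to the 12% vs. 15% example from Viewing Results above:

```python
from math import sqrt
from scipy.stats import norm

def click_rate_p_value(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided two-proportion z-test for a difference in click rate."""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    p_pool = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

# 12% vs. 15% click rate on 5,000 sends each
print(click_rate_p_value(600, 5000, 750, 5000))  # far below 0.05: the gap is unlikely to be noise
```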

Testing Strategy

Test One Thing at a Time

❌ Don’t do this:
Variant A: Different copy + different offer + different time
Variant B: Original everything
You won’t know what caused the difference.

✅ Do this:
Variant A: Different copy, same offer
Variant B: Original copy, same offer
Now you know if the copy matters.

Iterate Based on Learnings

  1. Week 1: Test copy A vs B → B wins
  2. Week 2: Test offer X vs Y (using winning copy) → Y wins
  3. Week 3: Test CTA 1 vs 2 (using B + Y) → 2 wins
  4. Ongoing: Use B + Y + 2 as your champion

Keep a Control

Always have a “control” variant you can compare against:
| Variant | Purpose |
|---------|---------|
| Control | Your current best performer |
| Test | New variant you’re testing |
This tells you if changes are actually improvements.

Multi-Variant Testing

Testing more than 2 variants:
| Variant | % | Use Case |
|---------|---|----------|
| A | 25% | Control |
| B | 25% | New copy |
| C | 25% | New offer |
| D | 25% | New copy + offer |
More variants require larger audiences for meaningful results. With 4 variants splitting the audience evenly, each variant reaches only a quarter of your contacts, so you need roughly 4x the audience to keep each variant’s sample size the same.
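To check whether a multi-variant split is worth running, divide your audience by the number of variants and compare the result to the Sample Size guidance above. A trivial sketch:

```python
def per_variant_audience(total_audience, num_variants):
    """Contacts each variant receives in an even split."""
    return total_audience // num_variants

# A 4-way split of 12,000 contacts leaves 3,000 per variant,
# which only reaches the "medium" confidence band in the Sample Size table.
print(per_variant_audience(12_000, 4))  # 3000
```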

Best Practices

  • Test things that might move the needle significantly: offers, core value prop, urgency. Save minor tweaks for later.
  • For recurring campaigns, run tests across multiple sends before drawing conclusions.
  • Keep a record of what you tested and the results to build institutional knowledge.
  • Don’t just test; implement winners. The goal is improvement, not just data.

Next Steps