How It Works
- Create multiple variants in your campaign
- Assign a percentage of the audience to each
- Contacts are randomly assigned to variants
- Track performance by variant
- Use the winner for future campaigns
Setting Up an A/B Test
Step 1: Create Campaign
- Go to Schedules > New Schedule
- Select Blast Mode
- Configure sending list and audience
Step 2: Add Variants
Click Add Variant for each version you want to test:
| Variant | Creative | Offer | % |
|---|---|---|---|
| A | “Flash sale today!” | offer_123 | 50% |
| B | “Limited time: 50% off” | offer_123 | 50% |
Step 3: Set Distribution
Adjust percentages to control how the audience is split:
- 50/50: Equal split between two variants
- 80/20: Mostly variant A, test B on smaller group
- 33/33/34: Three-way split
Percentages must add up to 100%.
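If you prepare your variant split outside the UI (for example, in a script that assembles campaign settings), it is worth validating the percentages before scheduling. A minimal sketch; the list-of-dicts shape and field names are illustrative, not the platform's API:

```python
# Illustrative check: variant percentages must sum to exactly 100.
# The data shape and field names here are hypothetical.
variants = [
    {"name": "A", "creative": "Flash sale today!", "offer": "offer_123", "percent": 50},
    {"name": "B", "creative": "Limited time: 50% off", "offer": "offer_123", "percent": 50},
]

total = sum(v["percent"] for v in variants)
if total != 100:
    raise ValueError(f"Variant percentages must add up to 100, got {total}")
```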
Step 4: Schedule and Send
Set your schedule as usual. When the campaign runs:
- Each contact is randomly assigned to a variant
- They receive that variant’s message
- Assignment is tracked for reporting
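The platform handles this internally, but the behavior described above amounts to a weighted random draw per contact, with the result stored for reporting. A minimal sketch of that idea; the variant data and function names are illustrative:

```python
import random

# Weighted random assignment: each contact lands in a variant with
# probability proportional to its percentage. The assignment is kept
# so results can later be grouped by variant.
variants = [
    {"name": "A", "percent": 50},
    {"name": "B", "percent": 50},
]

def assign_variant() -> str:
    weights = [v["percent"] for v in variants]
    return random.choices(variants, weights=weights, k=1)[0]["name"]

# contact_id -> assigned variant, recorded for reporting
assignments = {cid: assign_variant() for cid in ["c_001", "c_002", "c_003"]}
print(assignments)
```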
What to Test
Message Copy
Test different approaches to the same offer:
| Variant | Message |
|---|---|
| A | “Flash sale! 50% off everything today only” |
| B | “Your exclusive discount: Half off for the next 24 hours” |
Urgency vs. Value
| Variant | Approach |
|---|---|
| A | Urgency: “Only 3 hours left!” |
| B | Value: “Save $50 on your order” |
CTA Phrasing
| Variant | CTA |
|---|---|
| A | “Shop now →” |
| B | “Claim your discount →” |
Personalization
| Variant | Message |
|---|---|
| A | “Hey! Check out our sale” |
| B | “Hi [first name], exclusive sale just for you” |
Different Offers
Test which offer drives more conversions:
| Variant | Offer |
|---|---|
| A | offer_summer_sale |
| B | offer_clearance |
Analyzing Results
After the campaign sends, compare variants.
Metrics to Compare
| Metric | What It Tells You |
|---|---|
| Click Rate | Which message drives more clicks |
| Conversion Rate | Which drives more sales |
| Revenue per Send | Which generates more revenue |
| Opt-out Rate | Which causes more unsubscribes |
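All four metrics are simple ratios over per-variant totals. A sketch of the arithmetic, using placeholder counts rather than real campaign data (conversion rate is measured per send here; measuring it per click is equally valid if applied consistently):

```python
# Placeholder per-variant totals; not real campaign data.
results = {
    "A": {"sends": 5000, "clicks": 400, "conversions": 60, "revenue": 3000.0, "opt_outs": 25},
    "B": {"sends": 5000, "clicks": 475, "conversions": 80, "revenue": 3900.0, "opt_outs": 30},
}

for name, r in results.items():
    click_rate = r["clicks"] / r["sends"]
    conversion_rate = r["conversions"] / r["sends"]
    revenue_per_send = r["revenue"] / r["sends"]
    opt_out_rate = r["opt_outs"] / r["sends"]
    print(f"{name}: click {click_rate:.1%}, conversion {conversion_rate:.1%}, "
          f"revenue/send ${revenue_per_send:.2f}, opt-out {opt_out_rate:.2%}")
```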
Viewing Results
- Go to Schedules
- Click on your campaign
- View Performance by Variant
Statistical Significance
Before declaring a winner, ensure results are significant.
Sample Size
More contacts = more reliable results:
| Audience Size | Confidence |
|---|---|
| < 1,000 | Low - results may be random |
| 1,000 - 5,000 | Medium - directional insights |
| 5,000+ | High - reliable conclusions |
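The table above is a rule of thumb. For a rough number tailored to a specific test, a common approximation for a two-proportion test (two-sided α = 0.05, roughly 80% power) is n ≈ 16·p(1−p)/δ² contacts per variant, where p is the expected rate and δ the absolute lift you want to detect. This is a back-of-the-envelope estimate, not a platform calculation:

```python
# Rough per-variant sample size: n ≈ 16 * p * (1 - p) / delta**2
# (two-sided alpha = 0.05, ~80% power). An approximation only.
def sample_size_per_variant(baseline_rate: float, detectable_lift: float) -> int:
    p = baseline_rate + detectable_lift / 2  # midpoint of the two rates
    return round(16 * p * (1 - p) / detectable_lift ** 2)

# Detecting a click-rate lift from 5% to 7% (delta = 0.02)
print(sample_size_per_variant(0.05, 0.02))  # ~2,256 contacts per variant
```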
Margin of Victory
A 1% difference isn’t meaningful. Look for:
- Click rate: 2%+ difference
- Revenue: 10%+ difference
- Conversions: 5%+ difference
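These thresholds are practical guidelines. For click rate specifically, you can also run a standard two-proportion z-test; p-values below about 0.05 suggest the gap is unlikely to be noise. A stdlib-only sketch with placeholder counts:

```python
import math

# Two-sided two-proportion z-test on click rates (placeholder counts).
def two_proportion_p_value(clicks_a: int, sends_a: int, clicks_b: int, sends_b: int) -> float:
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(two_proportion_p_value(400, 5000, 475, 5000))  # ≈ 0.008 for these counts
```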
Testing Strategy
Test One Thing at a Time
❌ Don’t change the copy, offer, and CTA in the same test; if one variant wins, you won’t know which change drove the result.
Iterate Based on Learnings
- Week 1: Test copy A vs B → B wins
- Week 2: Test offer X vs Y (using winning copy) → Y wins
- Week 3: Test CTA 1 vs 2 (using B + Y) → 2 wins
- Ongoing: Use B + Y + 2 as your champion
Keep a Control
Always have a “control” variant you can compare against:
| Variant | Purpose |
|---|---|
| Control | Your current best performer |
| Test | New variant you’re testing |
Multi-Variant Testing
Testing more than 2 variants:
| Variant | % | Use Case |
|---|---|---|
| A | 25% | Control |
| B | 25% | New copy |
| C | 25% | New offer |
| D | 25% | New copy + offer |
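With four variants, comparing every pair separately inflates the chance of a false positive. One option is to first run a chi-square test across all variants to check whether click performance differs anywhere, then drill into specific pairs. A sketch using SciPy (assumed to be available; counts are placeholders):

```python
from scipy.stats import chi2_contingency

# Rows are variants A-D; columns are [clicks, non-clicks]. Placeholder counts.
observed = [
    [250, 2250],  # A: control
    [280, 2220],  # B: new copy
    [265, 2235],  # C: new offer
    [310, 2190],  # D: new copy + offer
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square p-value: {p_value:.4f}")  # low value -> at least one variant differs
```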
Best Practices
Start with high-impact tests
Test things that might move the needle significantly: offers, core value prop, urgency. Save minor tweaks for later.
Run tests for sufficient time
For recurring campaigns, run tests across multiple sends before concluding.
Document your tests
Keep a record of what you tested and results. Build institutional knowledge.
Act on results
Don’t just test—implement winners. The goal is improvement, not just data.