How It Works
Instead of manually choosing which message to send, the ML system:

- Analyzes contact history (clicks, revenue, engagement patterns)
- Predicts performance for each available creative
- Selects the winner for that specific contact
- Learns from results to improve over time
Enabling Automated Selection
Campaign Setup
- Create a campaign in Automated Mode
- Configure the automated percentage:
  - 80% of contacts get ML-selected creatives
  - 20% get the default creative (control group)
Available Creatives
The ML system chooses from creatives marked as Automated type:

- Go to Creatives
- Mark creatives as “Automated” type
- These become candidates for ML selection
The Selection Model
The production system uses Thompson Sampling with Beta distribution posteriors as the primary selection algorithm. (LightGBM exists as a placeholder that falls back to random selection.)

How Thompson Sampling Works
Thompson Sampling is a Bayesian algorithm that balances exploration (trying less-tested creatives) with exploitation (favoring proven winners). For each send:

- Sample a score from each creative’s Beta distribution posterior
- Select the creative with the highest sampled score
- After delivery, update the posterior with observed results
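The three steps above can be sketched with the standard library's Beta sampler. The per-creative counts, names, and the use of clicks as the success signal are all illustrative assumptions, not the product's actual data model:

```python
import random

# Hypothetical per-creative stats: alpha counts successes (e.g. clicks),
# beta counts failures (sends without a click).
posteriors = {
    "creative_a": {"alpha": 120, "beta": 880},   # proven performer
    "creative_b": {"alpha": 15,  "beta": 85},    # less data, wider posterior
    "creative_c": {"alpha": 2,   "beta": 8},     # barely tested
}

def select_creative(posteriors):
    """Sample a score from each Beta posterior and pick the highest."""
    sampled = {
        name: random.betavariate(p["alpha"], p["beta"])
        for name, p in posteriors.items()
    }
    return max(sampled, key=sampled.get)

def update_posterior(posteriors, name, clicked):
    """After delivery, fold the observed result back into the posterior."""
    if clicked:
        posteriors[name]["alpha"] += 1
    else:
        posteriors[name]["beta"] += 1

winner = select_creative(posteriors)
update_posterior(posteriors, winner, clicked=False)
```

Because each selection draws a fresh random sample, uncertain creatives occasionally win and gather data, while well-established winners dominate over many sends.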
Two Selection Modes
| Mode | Description |
|---|---|
| thompson_single_step | Scores all creatives together by sampled Revenue Per Send (RPS) |
| thompson_two_step | Picks the best offer first, then the best creative within that offer |
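A minimal sketch of the two-step mode, under stated assumptions: the offer grouping, the fixed per-offer payouts, and scoring an offer by its best sampled creative are all hypothetical choices for illustration, since the document does not spell out the offer-scoring rule:

```python
import random

# Illustrative data: creatives grouped by offer, each creative holding
# Beta click-posterior counts (alpha, beta); payouts are assumed fixed.
offers = {
    "offer_1": {"payout": 40.0, "creatives": {"c1": (30, 970), "c2": (25, 975)}},
    "offer_2": {"payout": 25.0, "creatives": {"c3": (80, 920)}},
}

def sampled_rps(alpha, beta, payout):
    # Sampled click rate times payout approximates Revenue Per Send.
    return random.betavariate(alpha, beta) * payout

def select_two_step(offers):
    # Step 1: score each offer by its best sampled creative RPS (assumption).
    def offer_score(o):
        return max(sampled_rps(a, b, o["payout"]) for a, b in o["creatives"].values())
    best_offer = max(offers, key=lambda k: offer_score(offers[k]))
    # Step 2: pick the best creative within the chosen offer.
    o = offers[best_offer]
    best_creative = max(
        o["creatives"],
        key=lambda c: sampled_rps(*o["creatives"][c], o["payout"]),
    )
    return best_offer, best_creative

offer, creative = select_two_step(offers)
```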
Metric Optimized
The algorithm optimizes Revenue Per Send (RPS). Offer payouts are capped at 120 for scoring purposes, preventing bias against low-payout but high-volume offers.
Pre-Computed Parameters
Beta distribution alpha/beta parameters are computed by a daily job (thompson_stats_job) and stored in ThompsonSamplingParams per account. This avoids database queries during the send hot path.
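A sketch of what such a daily job might compute: Beta parameters derived from aggregate send/click counts per creative. The Beta(1, 1) prior and the data shapes are assumptions; the real job's logic may differ:

```python
# Hypothetical offline computation of Beta posterior parameters,
# in the spirit of a daily stats job like thompson_stats_job.

def compute_beta_params(stats):
    """stats: {creative_id: {"sends": int, "clicks": int}} -> alpha/beta."""
    params = {}
    for creative_id, s in stats.items():
        params[creative_id] = {
            "alpha": 1 + s["clicks"],              # Beta(1, 1) prior + successes
            "beta": 1 + s["sends"] - s["clicks"],  # prior + failures
        }
    return params

params = compute_beta_params({"c1": {"sends": 1000, "clicks": 30}})
# params["c1"] == {"alpha": 31, "beta": 971}
```

Precomputing these parameters once a day means the send path only reads cached numbers and draws samples, with no per-send database aggregation.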
Features Used
The model considers:

| Feature | Description |
|---|---|
| Send count | How many messages they’ve received |
| Click count | Historical click behavior |
| Last click recency | Days since last click |
| Revenue history | Past purchase behavior |
| Time of day | When they typically engage |
| Creative history | Which creatives they’ve clicked before |
| Custom fields | Contact-level custom field data used as additional features |
Model Training
Models are trained on your account’s historical data:

- Click model: Predicts probability of clicking
- Revenue model: Predicts expected revenue
Model Metrics
The system tracks detailed model performance metrics including:

- Log loss — measures prediction calibration
- MAE (Mean Absolute Error) — measures revenue prediction accuracy
- Feature importance — ranks which features drive predictions most
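For reference, the first two metrics can be computed from predictions with a few lines of standard-library Python; the input values below are made up for illustration:

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Mean negative log-likelihood of binary click predictions."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error for revenue predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

ll = log_loss([1, 0, 1], [0.9, 0.2, 0.7])   # about 0.228
err = mae([5.0, 0.0], [4.0, 1.0])           # 1.0
```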
Predictions
For each contact, the model predicts the probability of a click and the expected revenue for each candidate creative.

Configuration
Optimization settings are configured at the account level in automated_selection_settings. Campaigns only control automated_percent — the percentage of sends that use automated selection vs. the manual fallback creative.
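As a rough illustration, an account-level settings object might look like the sketch below. Apart from automated_selection_settings itself, every field name and value here is a hypothetical placeholder, not the product's actual schema:

```json
{
  "automated_selection_settings": {
    "selection_method": "thompson_single_step",
    "optimization_goal": "revenue",
    "cooldown_days": 7,
    "learning_pool_pct": 10,
    "learning_send_threshold": 3000
  }
}
```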
Optimization Goal
Choose what to optimize for:

| Goal | Best For |
|---|---|
| Clicks | Engagement, list warming |
| Revenue | Direct response, sales |
Cooldown Settings
Cooldown is enforced per-contact, per-creative and is configured in your account’s automated_selection_settings. When a contact receives a specific creative, that creative becomes ineligible for that contact until the cooldown period expires.
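The eligibility rule can be sketched as a simple filter over recent sends. The seven-day window and the data shapes are assumptions for illustration:

```python
import datetime

# Per-contact, per-creative cooldown filter (illustrative sketch).
COOLDOWN = datetime.timedelta(days=7)  # assumed cooldown period

def eligible_creatives(creatives, last_sent, now):
    """last_sent: {creative_id: datetime of last send to this contact}."""
    return [
        c for c in creatives
        if c not in last_sent or now - last_sent[c] >= COOLDOWN
    ]

now = datetime.datetime(2024, 6, 10)
last_sent = {"c1": datetime.datetime(2024, 6, 8)}   # 2 days ago: cooling down
pool = eligible_creatives(["c1", "c2"], last_sent, now)
# pool == ["c2"]
```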
Selection Methods
The system supports several selection methods, configured at the account level:

| Method | Description |
|---|---|
| thompson_single_step | Scores all creatives together by sampled RPS — the primary production algorithm |
| thompson_two_step | Picks the best offer first, then the best creative within that offer |
| lightgbm | Placeholder — currently falls back to random selection |
| random | Randomly selects from eligible creatives (useful for gathering baseline data) |
| manual | Uses a filtered subset with random or round-robin rotation |
Methods can be combined using traffic_pct weights, routing different percentages of traffic to different algorithms.
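Weighted routing of this kind amounts to a weighted random choice over method names. The method list and weights below are illustrative, not defaults:

```python
import random

# Hypothetical traffic_pct-style weights; values are illustrative
# percentages that should sum to 100.
methods = [
    ("thompson_single_step", 80),
    ("random", 10),
    ("manual", 10),
]

def route(methods):
    """Pick a selection method in proportion to its traffic weight."""
    names = [m for m, _ in methods]
    weights = [w for _, w in methods]
    return random.choices(names, weights=weights, k=1)[0]

chosen = route(methods)
```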
Thompson sampling naturally balances exploration and exploitation: creatives with uncertain performance get tested more frequently, while proven performers are selected more often. This replaces the need for a fixed exploration rate.
Performance Tracking
Model Metrics
View model performance in Reports > ML Metrics:

| Metric | Description | Good Value |
|---|---|---|
| Log Loss | Prediction calibration quality | Lower is better |
| MAE | Revenue prediction accuracy | Lower is better |
| Feature importance | What drives predictions | - |
A/B vs Automated
Compare automated selection against manual sends.

When to Use ML Selection

Good Fit:
- Large contact list (5,000+)
- Multiple creatives to choose from (3+)
- Enough historical data (10,000+ sends)
- Measurable conversion goals

Maybe Not:
Cold Start / Learning Pool
Creatives with fewer than 3,000 raw sends are placed in a learning pool. A configurable percentage of traffic (default 10%) is allocated to learning creatives, which are selected randomly to gather baseline data. The remaining 90% uses Thompson Sampling scoring. Once a creative crosses the 3,000-send threshold, it exits the learning pool and competes on its sampled RPS score.

New Accounts
Without historical data:

- All creatives start in the learning pool
- Traffic is split randomly until enough data accumulates
- After 3,000+ sends per creative, Thompson Sampling takes over
- The daily thompson_stats_job computes Beta distribution parameters
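The learning-pool split described above can be sketched as a two-way traffic router. The function name, data shapes, and return convention are hypothetical; the 3,000-send threshold and 10% learning share come from the text:

```python
import random

LEARNING_THRESHOLD = 3000   # raw sends before a creative leaves the pool
LEARNING_TRAFFIC = 0.10     # default share of traffic for learning creatives

def split_traffic(creatives, send_counts):
    """Route a send to the learning pool or to Thompson Sampling."""
    learning = [c for c in creatives if send_counts.get(c, 0) < LEARNING_THRESHOLD]
    established = [c for c in creatives if send_counts.get(c, 0) >= LEARNING_THRESHOLD]
    if learning and (not established or random.random() < LEARNING_TRAFFIC):
        # Learning pool: uniform random selection to gather baseline data.
        return "learning", random.choice(learning)
    # Established pool: would be scored by sampled RPS in production.
    return "thompson", random.choice(established) if established else None

pool, result = split_traffic(["a", "b"], {"a": 5000, "b": 100})
```

With no established creatives at all (a brand-new account), every send falls through to the learning branch, matching the random split described above.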
New Creatives
New creatives automatically enter the learning pool:

- They receive guaranteed exposure from the 10% learning allocation
- Performance data accumulates over ~3,000 sends
- Strong performers graduate to Thompson scoring and rise to the top
Monitoring
Dashboard Indicators
Watch for:

- Model health: Is the model performing well?
- Creative diversity: Are all creatives getting selected?
- Performance trends: Is click rate improving?
Alerts
Set up alerts for:

- Model degradation
- Single creative dominating (may indicate overfitting)
- Performance drops
Best Practices
Maintain creative variety
Keep 5-10 active automated creatives. Too few limits ML effectiveness.
Refresh regularly
Add new creatives periodically. Stale content loses effectiveness.
Keep a control group
Always maintain some manual sends to measure ML lift.
Review feature importance
Understanding what drives predictions helps create better creatives.
Next Steps
Campaigns
Set up automated campaigns
Reporting
Analyze ML performance