Trackly SMS uses machine learning to select the optimal creative for each contact, maximizing clicks and revenue.

How It Works

Instead of manually choosing which message to send, the ML system:
  1. Analyzes contact history (clicks, revenue, engagement patterns)
  2. Predicts performance for each available creative
  3. Selects the winner for that specific contact
  4. Learns from results to improve over time

Enabling Automated Selection

Campaign Setup

  1. Create a campaign in Automated Mode
  2. Configure the automated percentage:
Automated selection: 80%
Manual fallback: 20%
  • 80% of contacts get ML-selected creatives
  • 20% get the default creative (control group)
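The split above is a per-send random assignment. A minimal sketch, assuming a hypothetical route_contact helper (not a Trackly identifier):

```python
import random

def route_contact(automated_percent=80):
    """Decide per send whether this contact gets ML selection or the control.

    automated_percent mirrors the campaign's automated percentage setting.
    """
    if random.random() * 100 < automated_percent:
        return "automated"  # ML-selected creative
    return "control"        # default (manual fallback) creative
```

At 80%, roughly four out of five sends go through automated selection; the remaining control traffic is what makes the lift comparison later in this page possible.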

Available Creatives

The ML system chooses from creatives marked as Automated type:
  1. Go to Creatives
  2. Mark creatives as “Automated” type
  3. These become candidates for ML selection

The Selection Model

The production system uses Thompson Sampling with Beta distribution posteriors as the primary selection algorithm. (LightGBM exists as a placeholder that falls back to random selection.)

How Thompson Sampling Works

Thompson Sampling is a Bayesian algorithm that balances exploration (trying less-tested creatives) with exploitation (favoring proven winners). For each send:
  1. Sample a score from each creative’s Beta distribution posterior
  2. Select the creative with the highest sampled score
  3. After delivery, update the posterior with observed results
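The three steps above can be sketched with Beta posteriors. This is an illustrative model, not Trackly's implementation; CreativeStats, select_creative, and update are hypothetical names:

```python
import random
from dataclasses import dataclass

@dataclass
class CreativeStats:
    """Beta posterior for one creative (illustrative structure)."""
    creative_id: str
    alpha: float = 1.0  # prior + observed successes (e.g. clicks)
    beta: float = 1.0   # prior + observed failures (non-clicks)

def select_creative(candidates):
    """Step 1 + 2: draw one sample per posterior, pick the highest."""
    return max(candidates, key=lambda c: random.betavariate(c.alpha, c.beta))

def update(stats, clicked):
    """Step 3: fold the delivery result back into the posterior."""
    if clicked:
        stats.alpha += 1
    else:
        stats.beta += 1

# A proven creative has a tight posterior; an untested one is wide,
# so it still wins some draws and keeps getting explored.
proven = CreativeStats("A", alpha=120, beta=880)  # ~12% CTR, low uncertainty
untested = CreativeStats("B")                     # uniform prior, high uncertainty
pick = select_creative([proven, untested])
```

Because samples are random, the untested creative occasionally beats the proven one, which is exactly how exploration happens without a fixed exploration rate.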

Two Selection Modes

| Mode | Description |
| --- | --- |
| thompson_single_step | Scores all creatives together by sampled Revenue Per Send (RPS) |
| thompson_two_step | Picks the best offer first, then the best creative within that offer |
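A sketch of how the two-step mode might differ from single-step scoring. The creative/offer structure and sampled_rps are assumptions for illustration, not Trackly's data model:

```python
import random

def sampled_rps(c):
    """Illustrative: a sampled click rate times offer value stands in for sampled RPS."""
    return random.betavariate(c["alpha"], c["beta"]) * c["offer_value"]

def thompson_two_step(creatives_by_offer):
    """Step 1: pick the offer with the best sampled score.
    Step 2: pick the best creative within that offer."""
    scores = {
        offer: [(sampled_rps(c), c) for c in creatives]
        for offer, creatives in creatives_by_offer.items()
    }
    best_offer = max(scores, key=lambda o: max(s for s, _ in scores[o]))
    return max(scores[best_offer], key=lambda t: t[0])[1]
```

Single-step mode would instead flatten all creatives into one pool and take the global argmax of sampled RPS.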

Metric Optimized

The algorithm optimizes Revenue Per Send (RPS):
RPS = CTR × Conversion Rate × Offer Value
For CTO (cost-to-operator) offers, conversion rate is omitted:
RPS = CTR × Offer Value
Offers with payouts below $20 are floored to $20 for scoring purposes, preventing bias against low-payout but high-volume offers.
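The two formulas and the payout floor combine into a single scoring function. A minimal sketch; the rps function name and the exact $20 floor value are assumptions to confirm against your account settings:

```python
PAYOUT_FLOOR = 20.0  # assumed floor value; confirm in account settings

def rps(ctr, conversion_rate, offer_value, is_cto=False):
    """Revenue Per Send as defined above.

    CTO (cost-to-operator) offers skip the conversion rate term.
    Payouts below the floor are raised to it for scoring only.
    """
    payout = max(offer_value, PAYOUT_FLOOR)
    if is_cto:
        return ctr * payout
    return ctr * conversion_rate * payout
```

Note the floor affects scoring, not what the offer actually pays: a $10 offer is scored as if it paid $20, so a high-CTR cheap offer can still win selections.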

Pre-Computed Parameters

Beta distribution alpha/beta parameters are computed by a daily job (thompson_stats_job) and stored in ThompsonSamplingParams per account. This avoids database queries during the send hot path.
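A sketch of the precompute-then-read pattern this describes. Only thompson_stats_job and ThompsonSamplingParams come from this page; BetaParams, refresh_params, and get_params are illustrative names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BetaParams:
    alpha: float
    beta: float

# Illustrative in-memory store keyed by (account_id, creative_id),
# refreshed once per day by the stats job; the send path only reads it.
_params_cache: dict = {}

def refresh_params(account_id, rows):
    """Daily job: rows = [(creative_id, clicks, sends), ...]."""
    for creative_id, clicks, sends in rows:
        _params_cache[(account_id, creative_id)] = BetaParams(
            alpha=1.0 + clicks,
            beta=1.0 + max(sends - clicks, 0),
        )

def get_params(account_id, creative_id):
    """Hot path: a dictionary lookup, no database query."""
    return _params_cache.get((account_id, creative_id), BetaParams(1.0, 1.0))
```

The design choice is the point: sampling happens on every send, so the expensive aggregation is moved to a daily batch job and the send path pays only a lookup.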

Features Used

The model considers:
| Feature | Description |
| --- | --- |
| Send count | How many messages they've received |
| Click count | Historical click behavior |
| Last click recency | Days since last click |
| Revenue history | Past purchase behavior |
| Time of day | When they typically engage |
| Creative history | Which creatives they've clicked before |
| Custom fields | Contact-level custom field data used as additional features |

Model Training

Models are trained on your account’s historical data:
  1. Click model: Predicts probability of clicking
  2. Revenue model: Predicts expected revenue
Training runs automatically with a configurable frequency set in your account’s automated selection settings. The system incorporates recent performance data on each training run.

Model Metrics

The system tracks detailed model performance metrics including:
  • Log loss — measures prediction calibration
  • MAE (Mean Absolute Error) — measures revenue prediction accuracy
  • Feature importance — ranks which features drive predictions most

Predictions

For each contact, the model predicts:
Creative A: 12% click probability, $0.45 expected revenue
Creative B: 8% click probability, $0.52 expected revenue
Creative C: 15% click probability, $0.38 expected revenue
Selection can optimize for clicks OR revenue.
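Using the example predictions above, choosing the winner is an argmax over whichever metric matches the optimization goal. A minimal sketch with a hypothetical pick helper:

```python
def pick(predictions, goal="revenue"):
    """predictions: {creative: (click_probability, expected_revenue)}."""
    idx = 0 if goal == "clicks" else 1
    return max(predictions, key=lambda c: predictions[c][idx])

preds = {
    "A": (0.12, 0.45),
    "B": (0.08, 0.52),
    "C": (0.15, 0.38),
}
```

With these numbers the goals disagree: optimizing clicks selects Creative C, while optimizing revenue selects Creative B, which is why the goal setting matters.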

Configuration

Optimization settings are configured at the account level in automated_selection_settings. Campaigns only control automated_percent — the percentage of sends that use automated selection vs. the manual fallback creative.

Optimization Goal

Choose what to optimize for:
| Goal | Best For |
| --- | --- |
| Clicks | Engagement, list warming |
| Revenue | Direct response, sales |

Cooldown Settings

Cooldown is enforced per-contact, per-creative and is configured in your account’s automated_selection_settings. When a contact receives a specific creative, that creative becomes ineligible for that contact until the cooldown period expires.
Example: If cooldown is set to 5 days and a contact received Creative A today,
Creative A won't be selected for that contact for 5 days.
Other creatives remain eligible.
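The eligibility rule in the example can be sketched as a filter over candidates (eligible and last_sent are illustrative names, not Trackly identifiers):

```python
from datetime import datetime, timedelta

def eligible(candidates, last_sent, cooldown_days, now=None):
    """Filter out creatives this contact saw within the cooldown window.

    last_sent: {creative_id: datetime of last send to this contact}.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=cooldown_days)
    return [
        c for c in candidates
        if last_sent.get(c) is None or last_sent[c] <= cutoff
    ]
```

Selection then runs only over the filtered list, so a dominant creative cannot be sent to the same contact twice inside the window.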

Selection Methods

The system supports several selection methods, configured at the account level:
| Method | Description |
| --- | --- |
| thompson_single_step | Scores all creatives together by sampled RPS; the primary production algorithm |
| thompson_two_step | Picks the best offer first, then the best creative within that offer |
| lightgbm | Placeholder; currently falls back to random selection |
| random | Randomly selects from eligible creatives (useful for gathering baseline data) |
| manual | Uses a filtered subset with random or round-robin rotation |
Multiple methods can be configured simultaneously with traffic_pct weights, routing different percentages of traffic to different algorithms. Thompson sampling naturally balances exploration and exploitation: creatives with uncertain performance get tested more frequently, while proven performers are selected more often. This replaces the need for a fixed exploration rate.
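Routing by traffic_pct weights amounts to a weighted random choice per send. A sketch, assuming weights sum to 100 (choose_method is a hypothetical helper):

```python
import random

def choose_method(methods):
    """methods: [(name, traffic_pct), ...] with percentages summing to 100."""
    names, weights = zip(*methods)
    return random.choices(names, weights=weights, k=1)[0]

# Example configuration: mostly Thompson, with small random and manual slices.
config = [("thompson_single_step", 80), ("random", 10), ("manual", 10)]
```

Each send first picks a method this way, then runs that method's own selection logic over the eligible creatives.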

Performance Tracking

Model Metrics

View model performance in Reports > ML Metrics:
| Metric | Description | Good Value |
| --- | --- | --- |
| Log Loss | Prediction calibration quality | Lower is better |
| MAE | Revenue prediction accuracy | Lower is better |
| Feature importance | What drives predictions | - |

A/B vs Automated

Compare automated selection against manual:
Automated (80%): 14% click rate, $0.52 revenue/send
Control (20%): 11% click rate, $0.41 revenue/send

Lift: +27% clicks, +27% revenue
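Lift is the relative improvement of the automated cohort over the control. Checking the example numbers:

```python
def lift(test, control):
    """Relative lift of the automated cohort over the manual control, in percent."""
    return (test - control) / control * 100

click_lift = lift(0.14, 0.11)    # clicks: 14% vs 11%, about +27%
revenue_lift = lift(0.52, 0.41)  # revenue/send: $0.52 vs $0.41, about +27%
```

Both figures round to +27%, matching the example; always compute lift against the control group rather than comparing raw rates across time periods.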

When to Use ML Selection

  • Large contact list (5,000+)
  • Multiple creatives to choose from (3+)
  • Enough historical data (10,000+ sends)
  • Measurable conversion goals

Cold Start / Learning Pool

Creatives with fewer than 3,000 raw sends are placed in a learning pool. A configurable percentage of traffic (default 10%) is allocated to learning creatives, which are selected randomly to gather baseline data. The remaining 90% uses Thompson Sampling scoring. Once a creative crosses the 3,000-send threshold, it exits the learning pool and competes on its sampled RPS score.
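The learning-pool split can be sketched as a two-way route per send. The thresholds come from this page; the route function and score parameter are illustrative:

```python
import random

LEARNING_THRESHOLD = 3000   # raw sends before a creative graduates
LEARNING_TRAFFIC = 0.10     # default share of traffic for the learning pool

def route(creatives, send_counts, score):
    """Split candidates into learning vs. graduated, then route one send.

    score: callable returning a sampled RPS score for a graduated creative.
    """
    learning = [c for c in creatives if send_counts.get(c, 0) < LEARNING_THRESHOLD]
    graduated = [c for c in creatives if c not in learning]
    if learning and (not graduated or random.random() < LEARNING_TRAFFIC):
        return random.choice(learning)  # explore: uniform random within the pool
    return max(graduated, key=score)    # exploit: Thompson-scored competition
```

The guaranteed learning slice is what prevents a new creative from being starved: it gets sends regardless of how strong the incumbents look.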

New Accounts

Without historical data:
  1. All creatives start in the learning pool
  2. Traffic is split randomly until enough data accumulates
  3. After 3,000+ sends per creative, Thompson Sampling takes over
  4. The daily thompson_stats_job computes Beta distribution parameters

New Creatives

New creatives automatically enter the learning pool:
  • They receive guaranteed exposure from the 10% learning allocation
  • Performance data accumulates over ~3,000 sends
  • Strong performers graduate to Thompson scoring and rise to the top

Monitoring

Dashboard Indicators

Watch for:
  • Model health: Is the model performing well?
  • Creative diversity: Are all creatives getting selected?
  • Performance trends: Is click rate improving?

Alerts

Set up alerts for:
  • Model degradation
  • Single creative dominating (may indicate overfitting)
  • Performance drops

Best Practices

  • Keep 5-10 active automated creatives: too few limits ML effectiveness.
  • Add new creatives periodically: stale content loses effectiveness.
  • Always maintain some manual sends to measure ML lift.
  • Review feature importance: understanding what drives predictions helps you create better creatives.

Next Steps

Campaigns

Set up automated campaigns

Reporting

Analyze ML performance