The Problem
Salon owners want to optimize their marketing campaigns, but traditional A/B testing tools are expensive, complex, and designed for e-commerce, not local services. A salon needs to test different SMS offer variants, email subject lines, and push notification timing -- all while serving a local customer base. Running these experiments should not require a data science team or a monthly SaaS subscription.
The Solution
AiSalonHub built a **Serverless Experiment Engine** that runs A/B tests on marketing campaigns directly from Cloudflare Workers and D1. Salons define experiments through a simple configuration in the admin UI, and the engine handles traffic splitting, variant delivery, result tracking, and statistical analysis -- all at serverless cost.
Architecture
```
Campaign Launch -> D1 Experiment Config -> Traffic Splitter -> Variant Workers -> Analytics Pipeline
                           |                      |                  |                     |
                     A/B rules,             50/50 split        SMS variant A         Track opens,
                     variants,              per customer       Email variant B       clicks, bookings
                     metrics config         hash-based                               per variant
```
Experiment Configuration
Each experiment is a JSON document in D1 defining the campaign, variants, metrics, sample size, and duration. Salon managers configure experiments through a simple form in the AiSalonHub dashboard with fields for the following (an example document is sketched after the list):
- **Campaign type**: SMS promo, email newsletter, push notification
- **Variants**: up to 4 variants with different content, timing, or channels
- **Target segment**: filter by customer type, location, booking history
- **Split ratio**: default 50/50 but configurable per variant
- **Duration**: minimum 3 days, so each variant accumulates enough events for the statistical analysis
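As a rough illustration of what such a document might look like, the sketch below uses hypothetical field names; only the variant `id`, `weight`, and `content` keys are assumed by the splitter and delivery worker shown later, and the rest simply mirrors the form fields above.
```json
{
  "id": "sms-promo-feb",
  "campaign_type": "sms_promo",
  "segment": { "customer_type": "returning", "location": "downtown" },
  "duration_days": 7,
  "metrics": ["sent", "opened", "clicked", "booked"],
  "variants": [
    { "id": "control", "weight": 50, "content": "10% off any service this week" },
    { "id": "personalized", "weight": 50, "content": "20% off your favorite service" }
  ]
}
```
Because the document lives in a D1 row rather than in code, a salon manager can edit weights or content from the dashboard without a Worker deploy.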
Traffic Splitting (Consistent Hashing)
The traffic splitter runs on Cloudflare Workers using consistent hashing of the customer ID. This ensures the same customer always sees the same variant across multiple campaign touches, preventing confusion and maintaining a clean experiment. The splitter queries D1 for the active experiment configuration, computes the hash, and routes the customer to the assigned variant worker.
```python
import hashlib

# Consistent hashing for variant assignment: the same customer ID always
# maps to the same variant, weighted by each variant's configured share.
def get_variant(customer_id, experiment_config):
    variants = experiment_config["variants"]
    # Hash the customer ID into a stable integer, then reduce it to a slot
    # in [0, total_weight).
    hash_val = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    slot = hash_val % sum(v["weight"] for v in variants)
    # Walk the cumulative weight ranges until the slot falls inside one.
    cumulative = 0
    for variant in variants:
        cumulative += variant["weight"]
        if slot < cumulative:
            return variant["id"]
    return variants[-1]["id"]
```
Analytics Pipeline
Each variant interaction is logged to D1 with customer ID, variant ID, event type (sent, opened, clicked, booked), and timestamp. A separate worker aggregates results every 15 minutes and computes:
- **Conversion rate per variant**: bookings divided by sends
- **Credible interval**: an interval estimate for each variant's conversion rate, using a Bayesian beta-binomial model
- **Lift**: percentage improvement of the winning variant over the control
- **Statistical significance**: p-value using chi-squared test
Results are served through the AiSalonHub dashboard as real-time charts, allowing salon managers to monitor experiments without waiting for a manual report.
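A minimal sketch of the per-variant aggregation, assuming the `experiment_events` table written by the delivery worker shown later; the column names, the Beta(1, 1) prior, and the normal approximation of the posterior interval are illustrative choices rather than the exact production query.
```javascript
// Hypothetical aggregation pass for one experiment. Counts sends and bookings
// per variant from D1, then derives a conversion rate and an approximate 95%
// credible interval from a Beta(1, 1) prior (normal approximation of the
// Beta posterior, which is adequate for dashboard charts).
async function aggregateResults(db, experimentId) {
  const { results } = await db.prepare(
    `SELECT variant_id,
            SUM(CASE WHEN event_type = 'sent'   THEN 1 ELSE 0 END) AS sends,
            SUM(CASE WHEN event_type = 'booked' THEN 1 ELSE 0 END) AS bookings
       FROM experiment_events
      WHERE experiment_id = ?
      GROUP BY variant_id`
  ).bind(experimentId).all();

  return results.map(({ variant_id, sends, bookings }) => {
    // Beta(1 + bookings, 1 + sends - bookings) posterior over the booking rate.
    const a = 1 + bookings;
    const b = 1 + sends - bookings;
    const mean = a / (a + b);
    const sd = Math.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1)));
    return {
      variantId: variant_id,
      conversionRate: sends > 0 ? bookings / sends : 0,
      credibleInterval: [Math.max(0, mean - 1.96 * sd), Math.min(1, mean + 1.96 * sd)],
    };
  });
}
```
Lift and the chi-squared significance check can then be computed from the same per-variant counts once the control row is identified.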
Real-World Results
Over 3 months, AiSalonHub ran 127 experiments across 50 salons:
- **SMS offer variants**: personalized offers (e.g., "20% off your favorite service") outperformed generic offers by 2.1x in conversion rate
- **Email timing**: Tuesday morning sends achieved 37% higher open rates than Friday afternoon
- **Push notification length**: 60-character notifications had 28% higher tap-through than 120-character ones
- **Multi-factor campaigns**: email + SMS combo achieved 4.2x bookings vs SMS-only
Technical Implementation
```javascript
// Worker for experiment variant delivery (simplified)
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const customerId = url.searchParams.get("customer_id");
    const experimentId = url.searchParams.get("experiment_id");

    const config = await env.DB.prepare(
      "SELECT config_json FROM experiment_config WHERE id = ?"
    ).bind(experimentId).first();
    const experiment = JSON.parse(config.config_json);
    const variantId = getVariant(customerId, experiment);

    // Track assignment
    await env.DB.prepare(
      "INSERT INTO experiment_events (experiment_id, variant_id, customer_id, event_type)"
      + " VALUES (?, ?, ?, 'assigned')"
    ).bind(experimentId, variantId, customerId).run();

    // Return variant content
    return new Response(JSON.stringify({
      variant: variantId,
      content: experiment.variants.find(v => v.id === variantId).content
    }));
  }
}
```
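The handler above calls `getVariant`; a JavaScript counterpart of the Python assignment function could look like the sketch below. It assumes the same `{ id, weight, content }` variant objects and uses the Workers Web Crypto API for SHA-256. Because `crypto.subtle.digest` is asynchronous, the call in the handler would become `await getVariant(...)` with this version.
```javascript
// Hypothetical JS mirror of get_variant: deterministic SHA-256 of the customer
// ID mapped onto the variants' cumulative weight ranges, so repeat touches for
// the same customer always land on the same variant.
async function getVariant(customerId, experimentConfig) {
  const variants = experimentConfig.variants;
  const totalWeight = variants.reduce((sum, v) => sum + v.weight, 0);

  // Hash the customer ID (Web Crypto is available globally in Workers).
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(customerId)
  );

  // Interpret the first 8 bytes of the digest as an unsigned integer and
  // reduce it to a slot in [0, totalWeight).
  const slot = Number(new DataView(digest).getBigUint64(0) % BigInt(totalWeight));

  // Walk the cumulative weights until the slot falls inside a variant's range.
  let cumulative = 0;
  for (const variant of variants) {
    cumulative += variant.weight;
    if (slot < cumulative) return variant.id;
  }
  return variants[variants.length - 1].id;
}
```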
Engineering Takeaways
- **A/B testing belongs in the database, not in the app**: Storing experiment configs in D1 makes them editable without redeploying Workers
- **Consistent hashing is essential**: Without it, the same customer sees different variants, invalidating the experiment
- **Bayesian analysis works at the edge**: Simple statistical models run fast enough on Workers for real-time dashboard updates
- **Serverless A/B testing costs pennies**: Running 127 experiments for 3 months cost $1.23 in Workers execution + D1 reads
Integration with AiSalonHub Core
The Experiment Engine is tightly integrated with AiSalonHub's existing marketing infrastructure:
- **Campaign templates**: Experiments reference the same D1 template store as standard campaigns, reducing duplication
- **Customer segments**: Experiments inherit the segmentation engine from AiSalonHub's lead scoring system, allowing precise targeting
- **Analytics dashboard**: Experiment results appear alongside standard campaign metrics in the salon admin dashboard
- **Auto-promotion**: Winning variants are automatically promoted to production campaigns, closing the loop between testing and execution
Handling Edge Cases
The Experiment Engine handles several edge cases gracefully:
Low-Traffic Salons
For salons with fewer than 100 customers in the target segment, the engine extends the experiment duration instead of requiring a minimum daily sample. It uses a Bayesian approach that accumulates evidence over time rather than requiring a fixed sample size, making A/B testing viable even for small salons.
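One way to picture this is to report the posterior probability that the test variant beats the control and let it sharpen as events arrive, rather than gating on a fixed sample. The sketch below reuses the Beta(1, 1) prior from the aggregation sketch with a normal approximation of the difference between the two posteriors; the function name, inputs, and any decision threshold are illustrative rather than the engine's exact rule.
```javascript
// Hypothetical evidence summary for a two-variant experiment: the probability
// that the test variant's booking rate exceeds the control's, given the
// counts observed so far.
function probTestBeatsControl(control, test) {
  // Beta(1 + bookings, 1 + sends - bookings) posterior mean and variance.
  const stats = ({ sends, bookings }) => {
    const a = 1 + bookings;
    const b = 1 + sends - bookings;
    return { mean: a / (a + b), variance: (a * b) / ((a + b) ** 2 * (a + b + 1)) };
  };
  const c = stats(control);
  const t = stats(test);
  // P(pTest > pControl) ~ Phi(z) with z = (mean_t - mean_c) / sqrt(var_c + var_t).
  const z = (t.mean - c.mean) / Math.sqrt(c.variance + t.variance);
  return normalCdf(z);
}

// Abramowitz-Stegun style approximation of the standard normal CDF.
function normalCdf(z) {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = Math.exp((-z * z) / 2) / Math.sqrt(2 * Math.PI);
  const p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 +
            t * (-1.821255978 + t * 1.330274429))));
  return z >= 0 ? 1 - p : p;
}
```
A low-traffic salon can keep the experiment open until this probability clears whatever threshold the salon is comfortable with (0.95, say), however long that takes.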
Holiday Spikes
Salon booking behavior changes dramatically during holidays (Mother's Day, Valentine's Day, Lunar New Year). The engine automatically pauses experiments during known holiday periods and resumes them afterward, preventing skewed results from anomalous traffic.
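The pause itself can be a simple date-window check against a list of known holiday periods; where that list lives (for example alongside the experiment config in D1) and the field names below are assumptions for illustration.
```javascript
// Illustrative holiday windows; real dates would come from configuration.
const HOLIDAY_WINDOWS = [
  { name: "Valentine's Day", start: "2025-02-10", end: "2025-02-15" },
  { name: "Mother's Day", start: "2025-05-07", end: "2025-05-12" },
];

// True while the current date falls inside any configured holiday window,
// in which case experiment sends and event aggregation are skipped.
function isHolidayPause(now = new Date()) {
  return HOLIDAY_WINDOWS.some(
    (w) => now >= new Date(w.start) && now <= new Date(w.end)
  );
}
```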
Multi-Variant Complexity
While most experiments use 2 variants (control + test), the engine supports up to 4 variants with configurable traffic splits (40/30/20/10). Statistical significance is computed using Dunnett's test, which corrects for multiple comparisons and prevents false positives when several variants are compared against the control.
Performance at the Edge
The entire Experiment Engine runs on Cloudflare Workers' free tier. Key performance metrics:
- **Variant assignment latency**: <5ms (hash computation + D1 lookup)
- **Event ingestion**: 2,000 events/second per worker instance
- **Aggregation queries**: 15-minute result recomputation completes in under 200ms even with 50,000+ events
- **D1 storage cost**: $0.89/month for storing all experiment configurations and event data
Key Takeaways
- **Serverless A/B testing democratizes optimization**: Small businesses can run sophisticated experiments without enterprise tools
- **D1 enables config-driven experiments**: Storing configs in D1 allows non-engineers to create experiments through the admin UI
- **Edge computing is fast enough for real-time assignment**: Sub-5ms variant assignment means zero perceptible delay
- **Bayesian methods handle small samples**: Low-traffic salons can still run statistically valid tests