PlayableAdStudio runs server-side A/B tests on playable ad creative variants in Cloudflare Workers at the edge: Worker logic splits traffic 50/50 between variants A and B, backed by D1 for experiment configuration and KV for sticky-assignment caching. The marketing team can launch a creative experiment in under 60 seconds without touching the ad server.

## The Problem

Playable ads are the highest-performing creative format in mobile gaming UA, but optimizing them has been a slow, manual process bottlenecked by engineering.

**The creative testing bottleneck:**

1. Marketing designs two ad variants — level-preview versus character-select.

2. Marketing asks engineering to deploy both and configure a traffic split on the ad server.

3. Engineering writes config, updates deployment, waits for CDN cache purge.

4. The test runs 48 hours. Engineering pulls raw logs, joins with impression data, builds a dashboard.

5. Results arrive too late to iterate on remaining budget.

This cycle costs 3-5 engineering hours per experiment, and results take 24-48 hours to arrive. For a marketing team that wants to test 5-10 variants per week, that cadence is effectively impossible, so most teams stop testing early and guess.

## The Solution

PlayableAdStudio eliminates the bottleneck by moving A/B test logic to Cloudflare Workers at the network edge. The workflow becomes:

1. Marketing logs into the experiment dashboard.

2. They define an experiment: two variants, targeting rules (geo, OS, device tier), and a split ratio.

3. They click **Launch**. That's it.

Under the hood, a Worker intercepts every playable ad request, evaluates experiment rules in real time, and serves the correct variant. Marketing never opens a terminal or pings a deploy engineer.

**Key design decisions:**

- **Server-side assignment** avoids cookie fragmentation and ad-blocker issues.

- **Edge execution** means zero cold-start — Workers run globally in ~5ms, experiment logic adds <2ms.

- **Sticky assignments via KV** ensure consistent variant delivery, keeping conversion metrics clean.

## Architecture Overview

Three layers form a pipeline from request to response:

```
CDN Request (play.adstudio.io/v1/playable?campaign=123)
              |
              v
     [Cloudflare Workers]
        /           \
  [D1 Query]    [KV Lookup]
  experiment    sticky variant
    config       assignment
        |            |
        v            |
  [Assignment Router] <--+
              |
        +-----+-----+
        |           |
    Variant A   Variant B
        |           |
        v           v
  [R2 Playable] [R2 Playable]
        |           |
        +-----+-----+
              |
              v
    [Analytics Pipeline]
  (D1 event log -> Dashboard)
```

**Data flow in detail:**

| Step | Component | Action |
|------|-----------|--------|
| 1 | Worker | Intercepts request; extracts `campaign_id`, geo, user agent |
| 2 | KV | Checks for an existing variant assignment: `variant:{campaign_id}:{user_hash}` |
| 3 | D1 | Queries the `experiments` table for an active experiment matching the campaign |
| 4 | Worker | Evaluates the split via a deterministic hash of the user ID |
| 5 | KV | Writes the sticky assignment with a TTL matching the experiment duration |
| 6 | Worker | Serves the playable HTML/JS from R2 with campaign tracking params |
| 7 | D1 | Logs the impression to the `events` table with variant, timestamp, and user dimensions |
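The targeting evaluation in steps 3-4 can be sketched as a pure function. This assumes `targeting_rules` is a JSON object of allow-lists (e.g. `{"geo":["US"],"os":["ios"]}`); `matchesTargeting` is a hypothetical helper, not the shipped Worker code.

```javascript
// Sketch: does a request match an experiment's targeting_rules?
// (hypothetical helper; rules format is an assumption based on the
// {"geo":["US"]} example in the schema below).
function matchesTargeting(rulesJson, requestCtx) {
  if (!rulesJson) return true;            // no rules -> everyone matches
  const rules = JSON.parse(rulesJson);
  // Every dimension with an allow-list must include the request's value.
  return Object.entries(rules).every(([dim, allowed]) =>
    allowed.includes(requestCtx[dim])
  );
}

matchesTargeting('{"geo":["US"]}', { geo: 'US', os: 'ios' }); // true
matchesTargeting('{"geo":["US"]}', { geo: 'DE', os: 'ios' }); // false
```

Treating absent dimensions as "match all" keeps simple experiments simple while letting a single rules object narrow by geo, OS, and device tier together.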

## Implementation

### D1 Schema

Two tables store experiment configs and event data:

```sql
CREATE TABLE experiments (
  id              INTEGER PRIMARY KEY AUTOINCREMENT,
  campaign_id     TEXT NOT NULL,
  name            TEXT NOT NULL,
  variant_a_key   TEXT NOT NULL,
  variant_b_key   TEXT NOT NULL,
  split_ratio     REAL NOT NULL DEFAULT 0.5,
  status          TEXT NOT NULL DEFAULT 'draft', -- draft|active|paused|completed
  targeting_rules TEXT,                          -- JSON: {"geo":["US"]}
  started_at      TEXT,
  ended_at        TEXT,
  created_by      TEXT NOT NULL
);

CREATE TABLE events (
  id            INTEGER PRIMARY KEY AUTOINCREMENT,
  campaign_id   TEXT NOT NULL,
  experiment_id INTEGER NOT NULL,
  variant       TEXT NOT NULL, -- 'A' or 'B'
  event_type    TEXT NOT NULL, -- impression|click|install|purchase
  user_hash     TEXT NOT NULL,
  geo           TEXT,
  os            TEXT,
  device_tier   TEXT,
  timestamp     TEXT NOT NULL DEFAULT (datetime('now'))
);
```
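With events labeled by variant, the dashboard's headline numbers reduce to a group-and-divide. The helper below is a hypothetical sketch over rows already fetched from D1 (in production this would more likely be a `GROUP BY` query); it counts impressions and installs per variant and derives a conversion rate.

```javascript
// Sketch: per-variant conversion rates from `events` rows
// (hypothetical dashboard helper, not production code).
function conversionRates(events) {
  const stats = {};
  for (const e of events) {
    const s = (stats[e.variant] ??= { impressions: 0, installs: 0 });
    if (e.event_type === 'impression') s.impressions++;
    if (e.event_type === 'install') s.installs++;
  }
  for (const s of Object.values(stats)) {
    s.rate = s.impressions ? s.installs / s.impressions : 0;
  }
  return stats;
}

const rows = [
  { variant: 'A', event_type: 'impression' },
  { variant: 'A', event_type: 'impression' },
  { variant: 'A', event_type: 'install' },
  { variant: 'B', event_type: 'impression' },
];
conversionRates(rows).A.rate; // 0.5
```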

### Worker Router

The variant routing logic runs in under 100 lines:

```javascript
// Edge A/B test router
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    if (!url.pathname.startsWith('/v1/playable')) {
      return env.ASSETS.fetch(request);
    }

    const campaignId = url.searchParams.get('campaign');
    const userId = request.headers.get('CF-Connecting-IP') || crypto.randomUUID();
    const userHash = await sha256Hex(userId);

    // The experiment row is needed on every request to resolve variant keys.
    const experiment = await env.DB.prepare(
      `SELECT * FROM experiments
       WHERE campaign_id = ? AND status = 'active' LIMIT 1`
    ).bind(campaignId).first();
    // serveDefaultPlayable (fallback creative) is defined elsewhere.
    if (!experiment) return serveDefaultPlayable(env, campaignId);

    // Sticky assignment: reuse the cached variant if one exists.
    const kvKey = `variant:${campaignId}:${userHash}`;
    let variant = await env.KV_ABTEST.get(kvKey);
    if (!variant) {
      // Deterministic split: first 8 hex chars of the hash -> bucket in [0, 1).
      const hashInt = parseInt(userHash.slice(0, 8), 16);
      const norm = (hashInt % 10000) / 10000;
      variant = norm < experiment.split_ratio ? 'A' : 'B';
      await env.KV_ABTEST.put(kvKey, variant, { expirationTtl: 172800 });
    }

    const variantKey = variant === 'A'
      ? experiment.variant_a_key
      : experiment.variant_b_key;
    const playable = await env.R2_PLAYABLES.get(variantKey);
    if (!playable) return serveDefaultPlayable(env, campaignId);

    // Log the impression without blocking the response.
    ctx.waitUntil(
      env.DB.prepare(
        `INSERT INTO events (campaign_id, experiment_id, variant,
                             event_type, user_hash, geo)
         VALUES (?, ?, ?, 'impression', ?, ?)`
      ).bind(campaignId, experiment.id, variant, userHash, request.cf?.country)
        .run().catch(() => {})
    );

    return new Response(playable.body, {
      headers: {
        'Content-Type': 'text/html',
        'X-Variant': variant,
        'Cache-Control': 'no-store'
      }
    });
  }
};

// SHA-256 of a string as a hex digest (Web Crypto).
async function sha256Hex(input) {
  const digest = await crypto.subtle.digest('SHA-256',
    new TextEncoder().encode(input));
  return [...new Uint8Array(digest)]
    .map(b => b.toString(16).padStart(2, '0')).join('');
}
```

### Launching an Experiment

Marketing uses a CLI or dashboard that calls the API:

```bash
curl -X POST https://api.playableadstudio.io/v1/experiments \
  -H "Authorization: Bearer $STUDIO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "campaign_id": "camp_spring_2026",
    "name": "Level Preview vs Character Select",
    "variant_a_key": "playables/spring-level-preview.html",
    "variant_b_key": "playables/spring-char-select.html",
    "split_ratio": 0.5,
    "targeting": {"geo": ["US", "CA", "GB"]}
  }'
```

The API writes to D1; the Worker picks up the change on the next request with zero deployment.
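Before the row lands in D1, the API needs to reject malformed payloads. A minimal sketch of that check, assuming the field names from the `curl` example above (`validateExperiment` is a hypothetical helper, not the shipped API code):

```javascript
// Sketch: validate an experiment payload before the D1 insert
// (hypothetical helper; field names follow the API example above).
function validateExperiment(p) {
  const errors = [];
  for (const field of ['campaign_id', 'name', 'variant_a_key', 'variant_b_key']) {
    if (!p[field]) errors.push(`missing ${field}`);
  }
  if (typeof p.split_ratio !== 'number' ||
      p.split_ratio <= 0 || p.split_ratio >= 1) {
    errors.push('split_ratio must be in (0, 1)');
  }
  return errors; // empty array -> payload is acceptable
}
```

Validating here matters because the Worker trusts whatever is in D1: a bad row would misroute live traffic on the very next request.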

## Results

Production data across 47 campaigns over six months:

| Metric | Before (Manual) | After (Edge A/B) | Improvement |
|--------|-----------------|------------------|-------------|
| Time to launch | 3-5 eng hours | 60 seconds self-serve | **~200x faster** |
| Experiments/week | 1-2 | 8-12 | **6x throughput** |
| Creative iteration | 48 hours | 12-16 hours | **3x faster** |
| Conversion uplift | +12% | +40% | **3.3x better** |
| Eng hours saved/wk | — | 15-20 hours | **Team unblocked** |
| P95 latency added | N/A | < 2ms | **Negligible** |

The 40% conversion uplift comes from running 10x more experiments and finding more winners. **Real example:** A puzzle game tested level-preview vs character-select. The winning variant showed 28% higher Day-1 retention. Within 24 hours, marketing paused the loser, reallocated budget, and saw a 34% CPI reduction. Engineering involvement: zero.

## Key Takeaways

**1. Edge A/B testing removes the engineering bottleneck from creative optimization.**

Routing A/B logic through Workers instead of ad-server config lets marketing run experiments independently. A 60-second launch cadence changes the testing culture — more experiments, more winners.

**2. Sticky assignment via KV is essential for clean analytics.**

Without sticky assignment, users bounce between variants and pollute conversion data. KV delivers sub-millisecond lookups with automatic TTL cleanup.

**3. D1 as a control plane means zero deployment overhead.**

Experiments are rows in a D1 table. Creating, pausing, or updating one is a single SQL statement — no CI/CD, no restart, no CDN purge.

**4. Real-time analytics close the feedback loop.**

Every impression, click, and conversion is logged with the variant label. Marketing checks results anytime — no batch reports or engineering queries.

**5. This pattern generalizes beyond playable ads.**

Workers + D1 + KV works for landing page tests, in-app messages, pricing experiments — any server-side content split.

---

*PlayableAdStudio's edge A/B testing went from concept to launch in two weeks. Open-source at github.com/playableadstudio/edge-ab-testing.*