A serverless CI pipeline using Cloudflare Workers and GitHub Actions eliminates the manual QA bottleneck for playable ad production, catching broken textures, oversized bundles, and invalid HTML before any creative reaches a media buyer.

The Problem

Playable ads are the highest-performing format in mobile user acquisition, but every playable is a miniature game — HTML, JavaScript, WebGL textures, and video assets bundled into a single ZIP. A broken texture, an uncaught JS error, or a bundle exceeding the 2 MB platform limit can kill an ad campaign before it launches.

Manual QA for playable ads is punishing. A human tester needs 10–20 minutes per creative to open each build, interact with every state, check asset loading, and confirm file size. A studio producing 30+ playables per week burns 5–10 person-hours on QA alone. Coverage is inconsistent — no two reviewers test the same way, so broken creatives leak into production and waste ad spend. Scaling from 30 to 100 playables per week hits a wall: you can't hire QA engineers fast enough, and the bottleneck shifts from creative production to quality approval.

The Solution

A serverless CI pipeline that validates every playable creative before it ships. Built on GitHub Actions for orchestration and Cloudflare Workers for distributed validation, this pipeline runs automated checks on every pull request:

- **Linting** — Custom HTML/JS rules for playable-specific anti-patterns

- **Rendering validation** — Headless browser screenshot comparison to detect broken textures and missing assets

- **Bundle analysis** — File size limits, asset inventory, dependency checks

- **Staging deployment** — Automatic push to a preview environment with a shareable URL

The entire pipeline runs serverlessly. No dedicated QA infrastructure, no manual steps. Every PR gets a report card in under 90 seconds.

Architecture Overview

| Stage | Component | Runtime | Responsibility |
|-------|-----------|---------|----------------|
| Trigger | GitHub Action | CI runner | Checkout repo, invoke Workers, collect results |
| Lint | Cloudflare Worker | CF edge | HTML/JS custom rules, size gates |
| Render | Puppeteer Worker | Browser automation | Screenshot diff, visual regression detection |
| Deploy | Wrangler Action | CI runner | Push to Cloudflare Pages, post preview URL |

When a developer opens a PR, the GitHub Action checks out the creative assets and invokes three parallel validation Workers. Each Worker performs one focused check and writes results to a shared D1 database. The orchestrator polls the DB and posts a comment on the PR with pass/fail status and a preview URL.
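The trigger stage can be sketched as a workflow that fans out to the Workers. This is a minimal sketch, not the production configuration: the Worker hostnames, job names, and the `creative.json` payload assembled during checkout are all illustrative assumptions, and result collection from D1 is elided. Jobs without a `needs` dependency run in parallel, which is what gives the fan-out.

```yaml
# Illustrative orchestrator — Worker URLs and the creative.json payload
# are assumptions, not the real pipeline's endpoints.
name: validate-playable
on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Invoke lint Worker
        run: |
          curl -sf -X POST https://lint.playableadstudio.workers.dev \
            -H 'Content-Type: application/json' \
            --data @creative.json
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Invoke render Worker
        run: curl -sf -X POST https://render.playableadstudio.workers.dev --data @creative.json
```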

Step 1: Linting HTML/JS Assets with Custom Rules

Standard HTML validators catch syntax errors but miss playable-specific problems. Our custom linter — a Cloudflare Worker — enforces rules like these:

| Rule | Check | Failure Example |
|------|-------|-----------------|
| `no-inline-scripts` | All JS must be external or in `<head>` | `<script>alert('hi')</script>` in `<body>` |
| `viewport-fixed` | Viewport must lock orientation | No `maximum-scale` in viewport meta |
| `tap-target-min` | No element smaller than 44×44 px | A 30×30 px CTA button |
| `bundle-size-limit` | Total ZIP must be ≤ 2 MB | 3.4 MB delivery bundle |
| `no-network-calls` | No external fetch/XHR in playable | `fetch('https://...')` in JS |
| `cta-exists` | At least one CTA element present | Playable with no call-to-action |

Here's the core lint Worker logic:

```javascript
export default {
  async fetch(request, env) {
    const { html, js } = await request.json();
    const errors = [];

    // viewport-fixed: every playable must declare a viewport meta tag
    if (!/<meta\s+name="viewport"[^>]*>/i.test(html))
      errors.push('viewport-fixed: no viewport meta tag found');

    // no-inline-scripts: flag <script> tags that carry no src attribute
    if (/<script(?![^>]*\bsrc=)[^>]*>/i.test(html))
      errors.push('no-inline-scripts: found inline script tags');

    // no-network-calls: playables must ship fully self-contained
    if (/fetch\(|XMLHttpRequest|axios\./.test(js))
      errors.push('no-network-calls: playable ads must not make external requests');

    // cta-exists: at least one call-to-action hook must be present
    if (!/<a\s|onclick=|cta-button|data-cta/.test(html))
      errors.push('cta-exists: no call-to-action element detected');

    return new Response(JSON.stringify({ passed: errors.length === 0, errors }), {
      headers: { 'Content-Type': 'application/json' }
    });
  }
};
```

The linter runs in under 200 ms per creative. No creative moves to rendering validation until it passes linting.

Step 2: Rendering Validation — Detecting Broken Textures and Missing Assets

Linting can't catch runtime rendering bugs — a texture that fails to load, a WebGL shader that compiles to black, or a font that falls back to system default and breaks the layout.

For rendering validation, we use Puppeteer running through a browser automation service connected to a Cloudflare Worker. The Worker loads the playable in a 375×812 viewport, waits for `window.playableReady` or a 5-second timeout, takes screenshots at t=0, t=2s, and on CTA click, then compares each against stored baselines using pixelmatch. Any screenshot with >5% pixel difference is flagged as a regression.

```javascript
const puppeteer = require('puppeteer');
const pixelmatch = require('pixelmatch');
const { PNG } = require('pngjs');

const browser = await puppeteer.launch({ headless: 'new' });
const page = await browser.newPage();
await page.setViewport({ width: 375, height: 812 });
await page.goto(playableUrl, { waitUntil: 'networkidle0' });
await page.waitForFunction(() => window.playableReady === true, { timeout: 5000 });

// Screenshot the idle state, then again after the intro animation settles
const idle = PNG.sync.read(await page.screenshot());
await new Promise((resolve) => setTimeout(resolve, 2000));
const animated = PNG.sync.read(await page.screenshot());

// pixelmatch compares raw RGBA data, so the PNG buffers are decoded first
const diff = pixelmatch(idle.data, baseline.idle.data, null, 375, 812, { threshold: 0.1 });
if ((diff / (375 * 812)) * 100 > 5) {
  await reportFailure('Visual regression detected', diff);
}
```

This catches the most common playable bugs: broken WebGL textures (render as black rectangles), missing fonts (layout shifts), and oversized tap targets.
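The black-rectangle failure mode can also be caught without a baseline by measuring how much of a frame is near-black. The sketch below assumes a flat buffer of RGBA bytes (as produced by a pngjs decode); the luminance cutoff and 90% ratio are illustrative values, not tuned production thresholds:

```javascript
// Flag a frame as a suspected dead WebGL context when most pixels are
// near-black. `rgba` is a flat Uint8Array of RGBA bytes; the cutoff and
// the 0.9 ratio below are illustrative, not tuned production values.
function blackFrameRatio(rgba, luminanceCutoff = 16) {
  let dark = 0;
  const pixels = rgba.length / 4;
  for (let i = 0; i < rgba.length; i += 4) {
    // Integer luminance approximation using Rec. 601 weights
    const lum = (rgba[i] * 299 + rgba[i + 1] * 587 + rgba[i + 2] * 114) / 1000;
    if (lum < luminanceCutoff) dark++;
  }
  return dark / pixels;
}

// A playable's idle frame should never be mostly black
const suspicious = (rgba) => blackFrameRatio(rgba) > 0.9;
```

Because it needs no stored baseline, a check like this can run on brand-new creatives that have no reference screenshots yet.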

At this stage the pipeline also checks bundle contents:

| Asset Type | Hard Limit | Action |
|------------|------------|--------|
| Total ZIP | ≤ 2 MB | Block deploy |
| Single texture | ≤ 512 KB | Block deploy |
| JS bundle | ≤ 300 KB | Block deploy |
| Font files | ≤ 100 KB | Warning |
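Given a manifest of entry names and byte sizes (ZIP parsing itself is elided), the gate logic reduces to a table lookup. A hedged sketch — the limits mirror the table above, but the rule shapes, file-extension patterns, and helper names are illustrative assumptions:

```javascript
// Size budgets mirroring the table above; extension patterns are
// illustrative. '__total__' is a synthetic entry for the whole bundle.
const LIMITS = [
  { match: (e) => e.name === '__total__', max: 2 * 1024 * 1024, level: 'block' },
  { match: (e) => /\.(png|jpe?g|webp|ktx2?)$/i.test(e.name), max: 512 * 1024, level: 'block' },
  { match: (e) => /\.js$/i.test(e.name), max: 300 * 1024, level: 'block' },
  { match: (e) => /\.(woff2?|ttf|otf)$/i.test(e.name), max: 100 * 1024, level: 'warn' },
];

// entries: [{ name, bytes }]; returns blockers (fail the PR) and warnings.
function checkBundle(entries) {
  const total = entries.reduce((sum, e) => sum + e.bytes, 0);
  const all = [...entries, { name: '__total__', bytes: total }];
  const blockers = [];
  const warnings = [];
  for (const entry of all) {
    for (const rule of LIMITS) {
      if (rule.match(entry) && entry.bytes > rule.max) {
        const msg = `${entry.name}: ${entry.bytes} bytes exceeds ${rule.max}`;
        (rule.level === 'block' ? blockers : warnings).push(msg);
      }
    }
  }
  return { blockers, warnings };
}
```

Returning blockers and warnings separately lets the orchestrator fail the check for the former while only annotating the PR comment for the latter.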

Step 3: Automated Deployment to Staging Environments

Once a creative passes linting and rendering validation, the pipeline deploys it to Cloudflare Pages. Every creative gets a unique preview URL:

```
https://preview--{branch}-{creative-id}.playableadstudio.pages.dev
```

The deployment bundles the creative files, adds `X-Robots-Tag: noindex` headers to prevent staging from appearing in search results, and injects a debug overlay toggled via `?debug=true` showing bundle size, lint score, and render diff percentage.
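On Cloudflare Pages, the noindex header can be declared with a `_headers` file placed at the root of the deployed `dist/` directory — a minimal sketch of that one piece (the debug-overlay injection step is elided):

```
/*
  X-Robots-Tag: noindex
```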

```yaml
- name: Deploy to Cloudflare Pages
  uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CF_API_TOKEN }}
    accountId: ${{ secrets.CF_ACCOUNT_ID }}
    command: pages deploy dist/ --project-name=playableadstudio --branch=${{ github.head_ref }}
```

The staging URL is shareable. Media buyers can tap through the playable on their phones before it hits the ad network.

Results

We deployed this pipeline for PlayableAdStudio's team and measured results across 240+ creatives over 3 months:

| Metric | Before Pipeline | After Pipeline | Improvement |
|--------|-----------------|----------------|-------------|
| QA cycle time per creative | 18 min | 1.5 min auto + 3 min spot-check | **75% faster** |
| Bugs reaching production | 23 per month | 2 per month | **91% reduction** |
| Broken textures detected | User reports only | Caught at PR time | **100% coverage** |
| Oversized bundles shipped | 8% of creatives | 0% | **100% eliminated** |
| QA team throughput | 120 creatives/week | 400+ creatives/week | **3.3× increase** |

**75% faster QA cycles.** Automated checks handle linting, rendering, and size validation in 90 seconds. The human spot-check shrinks from 18 minutes to 3 minutes, focused only on creative judgment the pipeline can't make — brand tone, animation feel, design quality.

**91% fewer production bugs.** Rendering validation caught 37 visual regressions in month one, every one of which would have shipped under manual QA. Bundle size checks eliminated the recurring problem of designers accidentally dropping uncompressed assets into final builds.

**3.3× throughput without adding headcount.** The same QA team now handles 400+ creatives per week with better outcomes. The bottleneck shifted from QA approval to creative production — exactly where you want it.

Key Takeaways

**Automate the deterministic checks, keep the human ones.** A serverless pipeline handles everything predictable — linting, size limits, rendering diffs — but creative judgment stays with humans. The pipeline removes 80% of QA time so reviewers focus on the 20% that matters.

**Serverless fits creative CI naturally.** Playable ads are stateless bundles — no database, no session, no auth. A Worker lints a creative in 200 ms; a Puppeteer Worker renders it in 2–3 seconds. Scaling to 100 concurrent validations costs pennies, not servers.

**Fail fast, fail early.** Three sequential gates (lint → render → deploy) mean there's no point rendering a creative with invalid HTML. Each gate reports exactly what went wrong so developers fix issues immediately.

**Instrument everything.** Every lint error, render diff, and bundle size goes into D1 alongside creative metadata. After 3 months you can query which teams produce the most lint errors and which formats are most validation-prone — data that drives process improvements beyond the pipeline itself.

**Start simple, extend as needed.** The initial pipeline had only linting and size checks. Rendering validation and staging came in month two. Ship the linter this week, add rendering next sprint, and let data tell you what to build next.

PlayableAdStudio's serverless CI pipeline turned their biggest operational bottleneck into a competitive advantage. Every creative that ships is validated, every PR gets a preview URL, and the QA team spends their time on work that actually needs human eyes.