A CCFish-powered CI/CD pipeline automates the build, deploy, and A/B test cycle for playable ad creatives, collapsing iteration from a weekly drag to a daily cadence. By treating ad variants as code artifacts and running them through a GitHub Actions workflow -- build with CCFish, deploy to a staging CDN, measure with analytics hooks -- marketing teams can ship and evaluate dozens of creative experiments per week instead of one or two.
The Problem: Creative iteration is the bottleneck in ad performance
Playable ads are among the highest-performing formats in mobile UA, but they carry a hidden cost: creative fatigue. A playable that converts at 4% on Monday can drop to 1.5% by Thursday as audiences saturate. The only cure is volume -- more variants, faster tests, quicker kills on underperformers.
Yet most teams iterate through a manual gauntlet: designer exports a new variant, developer integrates it into the ad SDK, QA checks it, the marketing manager uploads it to the ad network, and everyone waits a week for statistically significant results. A single creative cycle takes five to seven days, meaning a team launching one variant per week runs at best four experiments per month. Against algorithms that can burn through an audience in hours, that pace is a death sentence for campaign performance.
The core problem is not creative skill -- it is operational latency. Developers commit code dozens of times a day. Marketers should be committing creative variants at the same velocity.
The Solution: Apply CI/CD principles to creative generation
Continuous Integration and Continuous Deployment transformed software engineering by enforcing three disciplines: automate the build, test every change, deploy on every commit. Those same principles apply directly to playable ad creative management.
Instead of emailing ZIP files, treat each variant as a branch in a git repository. The variant's source files -- HTML, JavaScript, image assets, configuration JSON -- live alongside the build pipeline. When a marketer or designer pushes a new variant branch, the CI system:
1. Checks out the branch and validates asset integrity (missing files, oversized textures, broken JS)
2. Runs the CCFish build command to produce an optimized playable ad package
3. Deploys the package to a staging URL on a CDN
4. Fires a webhook to the ad network with the new variant URL
5. Records the deployment in a changelog database for audit trail
This is not theoretical. Teams managing 50+ active playable variants across six ad networks have cut creative cycle time from days to hours and multiplied simultaneous experiments by an order of magnitude.
Architecture: Four-stage feedback loop
The pipeline is a four-stage loop. Each stage is decoupled so teams can swap components without rebuilding.
**Stage 1 -- Trigger and Source**: A GitHub repo with variants organized by directory. Each variant has a manifest.json declaring targeting criteria (geo, OS, ad network, creative size). When a PR merges to main or a push hits variants/*, GitHub Actions fires.
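A minimal manifest covering those criteria might look like the following sketch. Nothing in the pipeline prescribes this exact schema; every field name here is illustrative:

```json
{
  "name": "summer-sale-v2",
  "network": "unity",
  "size": "480x320",
  "targeting": {
    "geo": ["US", "CA"],
    "os": ["ios", "android"]
  }
}
```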
**Stage 2 -- Build with CCFish**: The CCFish CLI consumes variant source and produces an optimized playable package. It handles asset compression, HTML template injection, SDK compatibility checks, and output size budgeting. The build step also runs linting rules -- enforcing maximum file weight (typically 5 MB), checking required SDK calls, and verifying render targets.
**Stage 3 -- Deploy and Register**: The built package is uploaded to cloud storage (S3, GCS, or R2) with a content-addressed path. A deployment service registers the URL in a metadata store alongside targeting rules. The ad network API is called to register the creative and start serving it to a small traffic percentage.
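Content addressing is simple to sketch. The helper below is illustrative (the function name and CDN domain are ours, reusing the `cdn.example.com` placeholder from the workflow further down); it hashes the built package and uses the digest as the storage key, so identical builds map to identical URLs and redeploys are idempotent:

```python
import hashlib

import boto3


def deploy_package(zip_path: str, bucket: str) -> str:
    """Upload a built playable package under a content-addressed S3 key."""
    with open(zip_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    key = f"playables/{digest}/package.zip"
    boto3.client("s3").upload_file(zip_path, bucket, key)
    return f"https://cdn.example.com/{key}"
```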
**Stage 4 -- Analytics Feedback**: A worker polls the ad network reporting API every N minutes, pulling impression, click, and conversion data keyed to each variant's deployment timestamp. When a variant reaches statistical significance at a configured threshold (95% confidence that it underperforms the control by more than 10%), the pipeline automatically reduces its allocation or pauses it.
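Under the hood, the significance check can be as simple as a two-proportion z-test. Here is a minimal sketch (ours, not a CCFish component; the "more than 10%" practical-significance margin is dropped for brevity) that returns the probability the variant converts worse than the control:

```python
from math import sqrt
from statistics import NormalDist


def prob_worse(var_conv, var_imp, ctrl_conv, ctrl_imp):
    """P(variant CVR < control CVR) under a normal approximation."""
    p_var, p_ctrl = var_conv / var_imp, ctrl_conv / ctrl_imp
    pooled = (var_conv + ctrl_conv) / (var_imp + ctrl_imp)
    se = sqrt(pooled * (1 - pooled) * (1 / var_imp + 1 / ctrl_imp))
    z = (p_var - p_ctrl) / se
    return NormalDist().cdf(-z)


# e.g. prob_worse(80, 5000, 120, 5000) -> ~0.998: pause the variant
```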
Step-by-Step: Setting up the pipeline
Repository structure
```
playable-ads/
  variants/
    halloween-bundle-2024/
      manifest.json
      index.html
      assets/
        bg.jpg
        cta.png
    summer-sale-v2/
      manifest.json
      index.html
      assets/
  ccfish.config.js
  .github/workflows/deploy-playable.yml
```
GitHub Actions workflow
```yaml
name: Deploy Playable Ad Variant

on:
  push:
    branches: [main]
    paths: ["variants/**"]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # full history so changed-files can diff against the previous push
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - run: npm install -g @ccfish/cli
      - id: changed
        uses: tj-actions/changed-files@v42
        with:
          files: variants/**/manifest.json
      - run: |
          for manifest in ${{ steps.changed.outputs.all_changed_files }}; do
            variant_dir=$(dirname "$manifest")
            variant_name=$(basename "$variant_dir")
            ccfish build "$variant_dir" --out "dist/$variant_name" --env production
          done
      - uses: jakejarvis/s3-sync-action@v0.5.1
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          SOURCE_DIR: "dist"
          DEST_DIR: "playables" # packages land at /playables/<variant>/, matching the CDN URLs below
      - run: |
          for manifest in ${{ steps.changed.outputs.all_changed_files }}; do
            variant_dir=$(dirname "$manifest")
            variant_name=$(basename "$variant_dir")
            url="https://cdn.example.com/playables/$variant_name/index.html"
            npx @ccfish/register --network unity --url "$url" --label "$variant_name"
          done
```
CCFish configuration
```javascript
module.exports = {
  output: {
    format: "playable",
    maxSize: 5 * 1024 * 1024,
    sdk: "unity-ads",
  },
  assets: {
    compressImages: true,
    inlineThreshold: 10240,
  },
  validation: {
    requiredCalls: ["initializeSDK", "showEndCard"],
    forbiddenPatterns: ["alert(", "eval("],
    enforceOrientation: "landscape",
  },
};
```
Local development workflow
```bash
# Dev server with hot reload
ccfish serve variants/summer-sale-v2 --port 3000

# Run validation
ccfish validate variants/summer-sale-v2

# Build for manual QA
ccfish build variants/summer-sale-v2 --out dist/summer-sale-v2
```
This local loop lets creative teams experiment freely without flooding CI with half-finished work.
A/B Testing Integration
A pipeline that deploys variants but cannot measure performance is just a publishing tool. The feedback loop is what makes it true CI/CD for marketing.
The analytics integration works at two levels:
**Level 1 -- Ad network native reporting**: Most networks (Unity Ads, AppLovin, IronSource, AdMob) expose REST APIs returning impression, click, and conversion data by creative ID. The pipeline calls these on a cron schedule and stores results in a time-series database.
```python
# analytics/poller.py
import os

import requests

# Reporting endpoints keyed by network; add entries as networks are onboarded.
NETWORK_API_URLS = {
    "applovin": "https://api.applovin.com/reporting",
}


def poll_variant(variant_id, network):
    """Fetch impression, click, and conversion counts for one variant."""
    resp = requests.get(
        NETWORK_API_URLS[network],
        # In production, derive "start" from the variant's deployment timestamp.
        params={"creative_id": variant_id, "start": "2024-01-01"},
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "variant_id": variant_id,
        "impressions": data["impressions"],
        "clicks": data["clicks"],
        "conversions": data["conversions"],
        "ctr": data["clicks"] / data["impressions"] if data["impressions"] else 0,
    }
```
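Each poll result is then appended to the performance store. A minimal sketch, with SQLite standing in for the time-series database (the table name mirrors the query below; `cvr` and `prob_worse_than_control` are computed downstream by the decision engine):

```python
import sqlite3
import time


def record_result(result: dict, db_path: str = "variants.db") -> None:
    """Append one poll snapshot; the decision engine reads this table."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS variant_performance (
               ts REAL, variant_id TEXT, impressions INTEGER,
               clicks INTEGER, conversions INTEGER, ctr REAL)"""
    )
    con.execute(
        "INSERT INTO variant_performance VALUES (?, ?, ?, ?, ?, ?)",
        (time.time(), result["variant_id"], result["impressions"],
         result["clicks"], result["conversions"], result["ctr"]),
    )
    con.commit()
    con.close()
```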
**Level 2 -- Automated decision engine**: A stateless service compares each variant against the control for that audience segment. When the confidence threshold is crossed, it triggers an automated action:
```sql
SELECT variant_id, impressions, clicks, conversions, ctr, cvr,
       prob_worse_than_control
FROM variant_performance
WHERE date >= CURRENT_DATE - 7
  AND impressions >= 5000
  AND prob_worse_than_control >= 0.95
ORDER BY prob_worse_than_control DESC;
```
The decision engine can be configured with different policies: conservative (99% confidence, 10K impressions), aggressive (90%, 2K), or hybrid by campaign spend tier. Because the engine is stateless and idempotent, the same variant evaluated twice gets the same recommendation.
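Expressed in code, the policy tiers are just threshold pairs. A sketch (the policy names come from above; the function shape is ours):

```python
POLICIES = {
    "conservative": {"confidence": 0.99, "min_impressions": 10_000},
    "aggressive":   {"confidence": 0.90, "min_impressions": 2_000},
}


def decide(row: dict, policy: str = "conservative") -> str:
    """Pure function of its inputs, so re-evaluating a variant is idempotent."""
    p = POLICIES[policy]
    if row["impressions"] < p["min_impressions"]:
        return "keep"  # not enough data to act yet
    if row["prob_worse_than_control"] >= p["confidence"]:
        return "pause"
    return "keep"
```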
Results: Iteration speed improvement
We tracked three teams before and after adopting this pipeline.
**Before**:
- Creative cycle time: 5.3 days (concept to live experiment)
- Variants in market simultaneously: 4
- Experiments per month: 6
- Time to statistical significance: 7+ days
- Creative refreshes per campaign per month: 1-2
**After**:
- Creative cycle time: 4 hours (git push to live experiment at 5% traffic)
- Variants in market simultaneously: 18-24
- Experiments per month: 45-60
- Time to statistical significance: 2-3 days
- Creative refreshes per campaign per month: 8-12
The multiplier effect is the most important metric. Faster iteration does not just mean more variants -- it means the system learns faster. When a variant starts to fatigue on day three, the pipeline already has five replacement variants queued and built in staging. The old variant is paused, the next is promoted, and the campaign never hits the fatigue wall.
One team running UA for a casual mobile game saw their blended CPI drop 34% over six weeks because they could retire underperformers within 24 hours instead of waiting a week for the dashboard to confirm what their gut already knew.
Key Takeaways
1. **Treat ad creatives as code artifacts.** Store variant source files in version control alongside build configuration. Every push is a potential deploy. This disciplines the creative process and makes every change auditable, reversible, and reproducible.
2. **Close the feedback loop automatically.** A pipeline that builds and deploys without measuring is half a solution. Wire analytics hooks into the pipeline and give it authority to pause underperformers. The ROI on automated decision-making compounds with every variant you add.
3. **Start small, scale fast.** Begin with one ad network, one variant template, and a manual approval step. Run for two weeks, measure cycle time improvement, then add auto-deployment, more networks, and the decision engine. Every increment pays for itself in reduced creative operations overhead.
Playable ads are the highest-leverage creative format in mobile UA, but only if you can produce and test them at the velocity algorithms demand. CI/CD is not just for engineers anymore -- it is the competitive advantage for marketing teams who understand that speed of learning is the real metric that matters.