The Problem

In-app purchases drive most of CCFish's revenue, but optimizing pricing and offers has traditionally required manual analysis of spreadsheets exported from Firebase. By the time marketing acts on the data, player behavior has already shifted. The result: an estimated 30-40% of potential IAP revenue left on the table through missed timing and misdirected audience targeting.

The Solution

CCFish built an automated IAP optimization engine on Cloudflare D1 that analyzes purchase data in real time, segments players by spending behavior, predicts which players are most likely to buy, and surfaces personalized offers at the optimal moment -- all without manual intervention.

Architecture Overview

The system connects five serverless components:

- **Purchase Event Pipeline:** Every in-app purchase flows through a Cloudflare Worker that records the transaction to D1 within milliseconds. The schema captures player ID, item purchased, price, currency, timestamp, and session context (level, game mode, time played). A minimal handler sketch appears after this list.

- **Spending Segmentation Engine:** A scheduled Worker runs every 4 hours to classify players into spending tiers based on D1 queries. Tiers include Whales (top 5% by LTV), Dolphins (next 15%), Minnows (remaining spenders), and F2P (no purchases in 30+ days). Classification takes ~200ms for 10,000 players.

- **Purchase Intent Predictor:** Using a lightweight scoring model stored as a D1 lookup table, the system scores each active player's purchase likelihood based on behavioral signals: session frequency, level progression rate, ad-watch count, and social sharing activity. Players crossing the calibrated 70-point threshold (see Step 2) trigger an immediate offer evaluation.

- **Dynamic Offer Engine:** When a player is flagged as high-intent, the engine selects the optimal offer from a ranked catalog. Whales get premium bundles, Dolphins get mid-tier value packs, and Minnows get first-purchase incentives. The offer is injected into the player's next game session.

- **Revenue Analytics Dashboard:** Campaign managers see live metrics in a Cloudflare Pages dashboard connected to D1. The dashboard shows offer acceptance rates, revenue by segment, and A/B test results for pricing experiments.
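
To make the write path concrete, here is a minimal sketch of the ingest Worker. The table columns, the `DB` binding name, and the event shape are illustrative assumptions, not CCFish's production schema:

```typescript
// Hypothetical ingest Worker -- names and schema are assumptions for illustration.
export interface Env {
  DB: D1Database; // D1 binding configured in wrangler.toml
}

interface PurchaseEvent {
  playerId: string;
  itemId: string;
  priceUsd: number;
  currency: string;
  level: number;
  gameMode: string;
  secondsPlayed: number;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("method not allowed", { status: 405 });
    }
    const evt = await request.json<PurchaseEvent>();

    // Parameterized insert; D1 speaks SQLite, so datetime('now') stamps the row.
    await env.DB.prepare(
      `INSERT INTO purchases
         (player_id, item_id, amount_usd, currency, level, game_mode, seconds_played, created_at)
       VALUES (?, ?, ?, ?, ?, ?, ?, datetime('now'))`
    )
      .bind(evt.playerId, evt.itemId, evt.priceUsd, evt.currency,
            evt.level, evt.gameMode, evt.secondsPlayed)
      .run();

    return new Response(null, { status: 204 });
  },
};
```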

Step 1: Player Segmentation with D1

The segmentation query runs every 4 hours via a cron-triggered Worker:

```sql
SELECT player_id,
       SUM(amount_usd) AS total_spent,
       COUNT(*) AS purchase_count,
       MAX(created_at) AS last_purchase
FROM purchases
WHERE created_at > datetime('now', '-90 days')
GROUP BY player_id
ORDER BY total_spent DESC
```

Cumulative percentiles are computed in the Worker after fetching results. Players in the top 5% are tagged as Whales in a separate `player_segments` table. The entire 10K-player segmentation completes in under 300ms on D1.
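
A sketch of how that cron Worker might turn the ordered result into segment tags. Rows arrive sorted by spend, so percentile rank is just position divided by count; the `player_segments` upsert assumes `player_id` is a unique key, and all names here are illustrative:

```typescript
// Hypothetical segmentation cron -- table and binding names are assumptions.
export interface Env {
  DB: D1Database;
}

type SpendRow = { player_id: string; total_spent: number };

// Rows are ordered by total_spent DESC, so rank / total gives the cumulative percentile.
function segmentFor(rank: number, total: number): string {
  const pct = rank / total;
  if (pct < 0.05) return "whale";   // top 5% by LTV
  if (pct < 0.20) return "dolphin"; // next 15%
  return "minnow";                  // remaining spenders; F2P never appear in this query
}

export default {
  async scheduled(_event: ScheduledEvent, env: Env): Promise<void> {
    const { results } = await env.DB.prepare(
      `SELECT player_id, SUM(amount_usd) AS total_spent
       FROM purchases
       WHERE created_at > datetime('now', '-90 days')
       GROUP BY player_id
       ORDER BY total_spent DESC`
    ).all<SpendRow>();

    // One batched round trip; chunk this for player bases much larger than 10K.
    const upsert = env.DB.prepare(
      `INSERT INTO player_segments (player_id, segment) VALUES (?, ?)
       ON CONFLICT(player_id) DO UPDATE SET segment = excluded.segment`
    );
    await env.DB.batch(
      results.map((r, i) => upsert.bind(r.player_id, segmentFor(i, results.length)))
    );
  },
};
```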

Step 2: Purchase Intent Score Calculation

The intent scorer runs a multi-factor model every 30 minutes. Each factor contributes weighted points:

- Session frequency (3+ sessions today = +30 points)

- Level progression (leveled up in last 24h = +25 points)

- Ad watch count (5+ ads today = +20 points, indicates engagement without purchase)

- Time since last purchase (14-21 days = +15 points -- the sweet spot for re-purchase)

- Social shares (shared a screenshot = +10 points, indicates high engagement)

Players scoring 70+ points are served a personalized offer. The threshold was calibrated from 4 weeks of historical data, which showed that players above 70 points converted at 2.8x the median rate.
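
The weights translate directly into a scoring function. A minimal sketch, assuming the behavioral signals have already been fetched for the player (field names are illustrative):

```typescript
// Hypothetical intent scorer -- a weighted sum of the five signals listed above.
interface PlayerSignals {
  sessionsToday: number;
  leveledUpLast24h: boolean;
  adsWatchedToday: number;
  daysSinceLastPurchase: number | null; // null = never purchased
  sharedScreenshot: boolean;
}

const OFFER_THRESHOLD = 70; // calibrated from 4 weeks of historical data

function intentScore(s: PlayerSignals): number {
  let score = 0;
  if (s.sessionsToday >= 3) score += 30;          // session frequency
  if (s.leveledUpLast24h) score += 25;            // level progression
  if (s.adsWatchedToday >= 5) score += 20;        // engaged but not paying
  if (s.daysSinceLastPurchase !== null &&
      s.daysSinceLastPurchase >= 14 &&
      s.daysSinceLastPurchase <= 21) score += 15; // re-purchase sweet spot
  if (s.sharedScreenshot) score += 10;            // high engagement
  return score;                                   // maximum possible: 100
}

const shouldEvaluateOffer = (s: PlayerSignals): boolean =>
  intentScore(s) >= OFFER_THRESHOLD;
```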

Step 3: Dynamic Offer Personalization

Once intent is scored, the offer engine selects from tiered pricing:

- **Whales** (Top 5%): Premium bundle at $19.99 -- exclusive cosmetics + 5000 coins. Conversion rate: 22%.

- **Dolphins** (Next 15%): Mid-tier value pack at $4.99 -- 1200 coins + rare lure. Conversion rate: 31%.

- **Minnows** (Next 30%): First-purchase starter at $1.99 -- 500 coins + beginner rod. Conversion rate: 47%.

- **F2P** (Remaining 50%): No offer -- focus on engagement events instead.
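
Selection then reduces to a lookup keyed by segment. A sketch with hypothetical SKU names:

```typescript
// Hypothetical offer catalog -- SKUs and contents mirror the tiers above.
type Segment = "whale" | "dolphin" | "minnow" | "f2p";

interface Offer {
  sku: string;
  priceUsd: number;
  contents: string;
}

const OFFER_BY_SEGMENT: Record<Segment, Offer | null> = {
  whale:   { sku: "premium_bundle", priceUsd: 19.99, contents: "exclusive cosmetics + 5000 coins" },
  dolphin: { sku: "value_pack",     priceUsd: 4.99,  contents: "1200 coins + rare lure" },
  minnow:  { sku: "starter_pack",   priceUsd: 1.99,  contents: "500 coins + beginner rod" },
  f2p:     null, // no offer -- route these players to engagement events instead
};

const selectOffer = (segment: Segment): Offer | null => OFFER_BY_SEGMENT[segment];
```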

Step 4: A/B Testing Framework

Every offer is served within an automated A/B test framework. Each player is randomly assigned to a control or treatment group at session start. The control gets the generic offer ($4.99 for 1000 coins), while the treatment gets the segment-specific personalized offer. D1 stores test assignments and outcomes, enabling the system to auto-promote winning variants after 500 impressions with 95% statistical confidence. This framework has run 34 concurrent A/B tests, identifying 12 winning offer variants.
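
As a sketch of what the auto-promotion rule could look like, here is a two-proportion z-test gated on the 500-impression floor. The exact statistical test CCFish uses is not specified, so treat this as one reasonable reading of "95% confidence":

```typescript
// Hypothetical promotion check: does the treatment arm significantly beat control?
interface ArmStats {
  impressions: number;
  conversions: number;
}

const MIN_IMPRESSIONS = 500;
const Z_CRITICAL_95 = 1.96;

function shouldPromote(control: ArmStats, treatment: ArmStats): boolean {
  if (control.impressions < MIN_IMPRESSIONS || treatment.impressions < MIN_IMPRESSIONS) {
    return false; // not enough data yet
  }
  const p1 = control.conversions / control.impressions;
  const p2 = treatment.conversions / treatment.impressions;
  const pooled = (control.conversions + treatment.conversions) /
                 (control.impressions + treatment.impressions);
  const se = Math.sqrt(pooled * (1 - pooled) *
                       (1 / control.impressions + 1 / treatment.impressions));
  return (p2 - p1) / se > Z_CRITICAL_95;
}

// Random 50/50 assignment at session start.
const assignGroup = (): "control" | "treatment" =>
  Math.random() < 0.5 ? "control" : "treatment";
```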

Step 5: Real-Time Dashboard with D1

The dashboard queries D1 directly from a Cloudflare Pages application. Key metrics are pre-aggregated into summary tables that a cron Worker refreshes every 15 minutes (D1 has no native materialized views, so these are plain tables rebuilt on a schedule). Total query time for a full dashboard load: 180ms. Critical views include revenue per segment, offer acceptance rates by tier, and the purchase intent score distribution histogram, which helps marketers identify optimal threshold adjustments.
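
One dashboard endpoint as a Pages Function, reading a pre-aggregated summary table. The `revenue_by_segment` table name and `DB` binding are assumptions:

```typescript
// Hypothetical Pages Function (functions/api/revenue.ts) -- serves one dashboard panel.
// It reads a summary table refreshed by the 15-minute cron, so the query is a cheap scan.
interface Env {
  DB: D1Database;
}

export const onRequestGet: PagesFunction<Env> = async ({ env }) => {
  const { results } = await env.DB.prepare(
    `SELECT segment, revenue_usd, offers_shown, offers_accepted
     FROM revenue_by_segment
     ORDER BY revenue_usd DESC`
  ).all();

  return Response.json({ refreshedEvery: "15m", rows: results });
};
```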

Results

After 6 weeks of automated IAP optimization:

- **62% increase** in IAP revenue from segmented targeting

- **47% conversion rate** on first-purchase offers to minnows (up from 18% with generic offers)

- **$0 infrastructure cost** -- all processing fits within D1's free tier (5M rows read/day)

- **3.4x higher LTV** for players who received personalized offers vs. control group

- **Zero engineering hours** spent on campaign management after initial build

- **34 concurrent A/B tests** running, auto-promoting winners at 95% confidence

Key Takeaways

- D1 is fast enough for real-time player segmentation at CCFish's scale (10K+ DAU). Sub-300ms queries make in-session personalization feasible.

- A simple scoring model (5 weighted signals + D1 lookup) outperformed complex ML in this context. The transparency of rule-based scoring made it easy for non-technical team members to tune.

- The biggest revenue impact came not from better offers but from better timing. Serving the right offer at the right moment (identified by behavioral signals) tripled conversion vs. generic popups.

- Start with one segment (e.g., Minnows at risk of lapsing to F2P) and expand. The pipeline architecture makes adding new segments a D1 query change, not a code change.

- The A/B testing framework automated what was previously a manual data science workflow, collapsing analysis cycles from 2 weeks to 2 hours.