When you’re building a Telegram-based prediction game on Solana, you quickly realize that on-chain data doesn’t arrive in neat RESTful packages. Every crash round generates events — bets placed, multipliers hit, payouts settled — all scattered across Solana transaction logs. DeFiKit Crash needed to track user behavior, game performance, and revenue metrics in near real-time without a traditional backend. The solution: a serverless event pipeline using Cloudflare Workers for ingestion and D1 for storage, serving a live dashboard with sub-200ms query latencies at near-zero cost.

The Problem

DeFiKit Crash runs on Solana (devnet, moving to mainnet). Each game round follows a familiar pattern: players place bets, a crash multiplier climbs, and players try to cash out before it crashes. But tracking these events for analytics is surprisingly hard. Solana transactions contain program instructions, logs, and account changes — not clean JSON payloads.

We needed to answer questions like:

- How many unique players participated today?

- What’s the average bet size per round?

- Which referral sources drive the most engaged users?

- What’s the win rate and house edge in practice?

A traditional approach might spin up a Postgres instance on a VPS, run a web server, and poll Solana RPC nodes. That means:

- **Infrastructure overhead**: managing a database server, handling backups, scaling compute

- **Cold-start latency**: users wait for database connections to warm up

- **Cost inefficiency**: a fixed monthly bill whether you handle 100 or 100,000 requests

- **Operational burden**: patching, monitoring, alerting — all for an analytics dashboard

We wanted something that scaled to zero when idle and scaled instantly when a viral Telegram post brought 1,000 new users.

The Solution

Cloudflare Workers gave us global edge compute with instant cold starts (sub-5ms). D1 — Cloudflare’s serverless SQLite database — provided familiar SQL semantics with edge read-replicas. The combination let us build an event ingestion pipeline:

**Solana Transaction → Webhook Forwarder → Cloudflare Worker → D1 Database → Analytics Dashboard**

Every bet, cash-out, and referral event from the Telegram bot triggers a POST to our Worker endpoint. The Worker validates the payload, enriches it with computed fields (e.g., profit/loss per bet), and writes it to D1. The dashboard queries D1 on the read side, hitting edge-cached replicas for sub-200ms responses.

Architecture Overview

Here’s the full data flow:

1. **Game Round Completes** — The Telegram bot (running grammY/Telegraf) captures the round’s result: crash multiplier, player bets, final payouts.

2. **Event Forwarding** — The bot’s game logic calls a `reportAnalytics()` function that fires a POST to our Worker URL with the event payload (a sketch of this function follows the list).

3. **Cloudflare Worker Ingestion** — The Worker validates the signature (HMAC-SHA256), enriches the data (player win/loss, house edge), and batches it into a D1 write.

4. **D1 Storage** — Events land in `game_events`. A separate `hourly_stats` aggregation table is updated via scheduled Workers to pre-compute dashboard metrics.

5. **Dashboard Queries** — The frontend (HTML+Chart.js served from a Worker static asset) queries D1 directly via the Worker API, returning JSON aggregates.
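
For reference, here’s a minimal sketch of what the bot-side `reportAnalytics()` forwarder can look like. It assumes a Node 18+ runtime with global `fetch`; `WORKER_URL` is an illustrative environment variable, and `SECRET_KEY` must be the same secret the Worker verifies against:

```javascript
import { createHmac } from "node:crypto";

// Sign the exact bytes we send, so the Worker can verify the raw body.
async function reportAnalytics(event) {
  const body = JSON.stringify(event);
  const signature = createHmac("sha256", process.env.SECRET_KEY)
    .update(body)
    .digest("hex");

  await fetch(`${process.env.WORKER_URL}/api/events`, {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-Signature": signature },
    body,
  });
}

// Example: report a settled cash-out after a round completes.
await reportAnalytics({
  event_type: "cash_out",
  user_id: "tg:123456",   // hypothetical Telegram user id
  round_id: "round_789",  // hypothetical round id
  bet_amount: 50_000_000, // lamports (0.05 SOL)
  payout: 120_000_000,    // lamports (0.12 SOL)
  timestamp: Date.now(),
});
```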

The entire stack runs on Cloudflare’s edge network. No VPS, no Kubernetes, no managed database service beyond D1.

Implementation

Worker Event Ingestion Endpoint

The Worker exposes a single `/api/events` route that accepts POST requests. Here’s the core fetch handler, along with the small `hexToBytes` helper it uses:

```javascript
// Hex-decode the signature header into bytes for crypto.subtle.verify.
function hexToBytes(hex) {
  const bytes = new Uint8Array(hex.length / 2);
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] = parseInt(hex.substr(i * 2, 2), 16);
  }
  return bytes;
}

export default {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }

    // Read the raw body so the HMAC covers the exact bytes the bot signed,
    // rather than a re-serialized (and potentially different) JSON string.
    const bodyText = await request.text();
    const signature = request.headers.get("X-Signature");
    if (!signature) {
      return new Response("Unauthorized", { status: 401 });
    }

    // Verify HMAC-SHA256 signature
    const encoder = new TextEncoder();
    const key = await crypto.subtle.importKey(
      "raw",
      encoder.encode(env.SECRET_KEY),
      { name: "HMAC", hash: "SHA-256" },
      false,
      ["verify"]
    );
    const valid = await crypto.subtle.verify(
      "HMAC",
      key,
      hexToBytes(signature),
      encoder.encode(bodyText)
    );
    if (!valid) {
      return new Response("Unauthorized", { status: 401 });
    }

    const payload = JSON.parse(bodyText);
    const { event_type, user_id, round_id, bet_amount, payout, timestamp } = payload;
    const profit_loss = (payout || 0) - bet_amount; // payout is absent on plain bets

    try {
      await env.DB.prepare(
        `INSERT INTO game_events (event_type, user_id, round_id, bet_amount, payout, profit_loss, created_at)
         VALUES (?, ?, ?, ?, ?, ?, ?)`
      )
        .bind(event_type, user_id, round_id, bet_amount, payout || 0, profit_loss, new Date(timestamp).toISOString())
        .run();

      return new Response(JSON.stringify({ ok: true }), {
        headers: { "Content-Type": "application/json" },
      });
    } catch (err) {
      return new Response(JSON.stringify({ ok: false, error: err.message }), { status: 500 });
    }
  },
};
```

D1 Schema

The `game_events` table is the core data store; note the `referrer_source` column, which the referral queries below rely on. We also maintain an `hourly_stats` materialized aggregation for fast dashboard queries:

```sql
-- Core event store
CREATE TABLE game_events (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  event_type TEXT NOT NULL,       -- 'bet', 'cash_out', 'referral'
  user_id TEXT NOT NULL,
  round_id TEXT NOT NULL,
  bet_amount INTEGER NOT NULL,    -- in lamports
  payout INTEGER DEFAULT 0,       -- 0 for uncashed bets
  profit_loss INTEGER DEFAULT 0,
  referrer_source TEXT,           -- set when the player arrived via a referral link
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX idx_events_created ON game_events(created_at);
CREATE INDEX idx_events_type ON game_events(event_type);
CREATE INDEX idx_events_user ON game_events(user_id);

-- Pre-aggregated hourly stats for dashboard speed
CREATE TABLE hourly_stats (
  hour TEXT PRIMARY KEY,          -- ISO 8601 hour bucket
  total_bets INTEGER DEFAULT 0,
  unique_users INTEGER DEFAULT 0,
  total_volume_sol REAL DEFAULT 0,
  total_payouts_sol REAL DEFAULT 0,
  house_edge_sol REAL DEFAULT 0
);
```
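
Step 4 of the architecture keeps `hourly_stats` fresh via a scheduled Worker. Here’s a minimal sketch of that roll-up, assuming an hourly cron trigger; the upsert makes re-runs idempotent, though the exact aggregation logic in production may differ:

```javascript
// Scheduled handler, triggered by a cron rule such as "0 * * * *".
export default {
  async scheduled(event, env) {
    await env.DB.prepare(
      `INSERT INTO hourly_stats
         (hour, total_bets, unique_users, total_volume_sol, total_payouts_sol, house_edge_sol)
       SELECT
         strftime('%Y-%m-%dT%H:00:00Z', created_at) AS hour,
         SUM(CASE WHEN event_type = 'bet' THEN 1 ELSE 0 END),
         COUNT(DISTINCT user_id),
         SUM(CASE WHEN event_type = 'bet' THEN bet_amount ELSE 0 END) / 1e9,
         SUM(payout) / 1e9,
         SUM(profit_loss) / 1e9
       FROM game_events
       WHERE created_at >= datetime('now', '-2 hours') -- re-roll the trailing buckets
       GROUP BY hour
       ON CONFLICT(hour) DO UPDATE SET
         total_bets        = excluded.total_bets,
         unique_users      = excluded.unique_users,
         total_volume_sol  = excluded.total_volume_sol,
         total_payouts_sol = excluded.total_payouts_sol,
         house_edge_sol    = excluded.house_edge_sol`
    ).run();
  },
};
```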

Dashboard Query Patterns

The dashboard calls `/api/dashboard` for daily stats:

```javascript
async function getDailyStats(env) {
  const { results } = await env.DB.prepare(
    `SELECT
       DATE(created_at) AS day,
       COUNT(*) AS total_events,
       COUNT(DISTINCT user_id) AS unique_users,
       SUM(CASE WHEN event_type = 'bet' THEN bet_amount ELSE 0 END) / 1e9 AS volume_sol,
       SUM(profit_loss) / 1e9 AS house_edge_sol
     FROM game_events
     WHERE created_at >= datetime('now', '-7 days')
     GROUP BY DATE(created_at)
     ORDER BY day DESC`
  ).all();
  return results;
}
```

A second query powers the referral dashboard:

```javascript
async function getReferralStats(env) {
  const { results } = await env.DB.prepare(
    `SELECT
       referrer_source,
       COUNT(DISTINCT user_id) AS referred_users,
       SUM(bet_amount) / 1e9 AS referral_volume_sol
     FROM game_events
     WHERE event_type = 'bet'
       AND referrer_source IS NOT NULL
     GROUP BY referrer_source
     ORDER BY referral_volume_sol DESC
     LIMIT 10`
  ).all();
  return results;
}
```
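
The frontend simply calls Worker routes that wrap these helpers. Here’s a minimal sketch of the read side; in the real Worker this branch sits alongside the POST `/api/events` handler shown earlier, and the `Cache-Control` header is our own assumption rather than part of the original setup:

```javascript
// Illustrative GET route that serves the daily stats as JSON.
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (request.method === "GET" && url.pathname === "/api/dashboard") {
      const stats = await getDailyStats(env);
      return new Response(JSON.stringify(stats), {
        headers: {
          "Content-Type": "application/json",
          "Cache-Control": "public, max-age=30", // brief caching for repeat loads
        },
      });
    }
    return new Response("Not found", { status: 404 });
  },
};
```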

Results

Here’s what we measured after several weeks on devnet:

Query Latency

| Query Type | P50 Latency | P95 Latency |
| :-- | :-- | :-- |
| Single event insert | 28ms | 72ms |
| Dashboard daily stats (7 days) | 95ms | 153ms |
| Referral stats | 82ms | 134ms |
| Latest 50 events | 18ms | 41ms |

D1’s read-after-write consistency meant the dashboard showed the latest event within ~100ms — effectively instant for real-time use.

Cost Comparison vs Traditional Backend

| Metric | Traditional (VPS + Postgres) | Cloudflare Workers + D1 |
| :-- | :-- | :-- |
| Monthly base cost | $10–$20 (small VPS) | $0 (Workers free tier) |
| Database storage (1 GB) | Included in VPS | $0.75/month (D1) |
| Writes (1M rows/month) | Free (self-hosted) | $0.80 (D1 writes) |
| Reads (5M queries/month) | Free (self-hosted) | $0.50 (D1 reads) |
| Cold start latency | ~1–2s (DB reconnect) | <5ms (Worker edge) |
| Maintenance overhead | Patches, backups, monitoring | Zero (managed) |

**Total estimated monthly cost: ~$2.05 vs $10–$20+.** And we never think about scaling, patching, or uptime.

Performance Characteristics

- **Worker cold starts**: negligible, typically under 5ms on the first request after idle

- **D1 write throughput**: handled bursts of ~200 events/second from a viral Telegram campaign

- **Dashboard page load**: full render in under 200ms including Chart.js and two D1 queries

- **Cache hit ratio**: D1’s edge read-replicas served ~85% of dashboard queries from cache

Key Takeaways

Building DeFiKit’s analytics dashboard on Cloudflare Workers and D1 taught us several things worth sharing:

1. **Start with the query, not the schema.** We designed the aggregation tables first, based on the dashboard charts we wanted. The raw event table came second. This prevented over-engineering.

2. **Sign every webhook.** Without request validation, anyone can POST fake events to your endpoint. Our HMAC-SHA256 signature check takes <1ms but prevents entire classes of attack.

3. **Batch where it matters.** D1 handles single-row inserts well up to ~100/sec. For higher throughput during game peaks, we batch events into arrays of 25 and use D1’s batch API to insert them in a single transaction (see the sketch after this list).

4. **Pre-aggregate religiously.** Querying raw events for a “last 30 days” chart would scan hundreds of thousands of rows. The `hourly_stats` table keeps dashboard queries sub-200ms regardless of data size.

5. **D1 is not Postgres.** There’s no `pg_stat_statements`, no `EXPLAIN ANALYZE`, and concurrent write throughput tops out around 1,000 writes/second per database. For our use case it’s perfect, but you wouldn’t run a high-frequency trading exchange on it.
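
Here’s a minimal sketch of the batching path from takeaway 3. It assumes an in-memory `pending` array of events collected during a peak (an illustrative name); `DB.batch()` sends the statements in one round trip and runs them atomically:

```javascript
// Flush queued events in a single D1 batch (one implicit transaction).
async function flushEvents(env, pending) {
  const stmt = env.DB.prepare(
    `INSERT INTO game_events
       (event_type, user_id, round_id, bet_amount, payout, profit_loss, created_at)
     VALUES (?, ?, ?, ?, ?, ?, ?)`
  );
  const statements = pending.map((e) =>
    stmt.bind(
      e.event_type, e.user_id, e.round_id, e.bet_amount, e.payout || 0,
      (e.payout || 0) - e.bet_amount, new Date(e.timestamp).toISOString()
    )
  );
  await env.DB.batch(statements);
}
```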

The bottom line: we built a production-quality analytics dashboard using serverless edge compute and D1. It costs about the same as a cup of coffee per month, responds faster than most dedicated servers, and requires zero infrastructure management.

If you’re building on Solana (or any blockchain) and need real-time event tracking, skip the VPS. Reach for Workers and D1. Your future self — and your wallet — will thank you.