The Problem
Content calendars are usually managed in spreadsheets. Someone writes topics for the month, someone else drafts posts, an editor reviews them, and finally a publisher clicks deploy. For a single site publishing 3+ posts per week, this workflow consumes 10-15 hours of human time weekly. AIKit needed a system that could schedule blog posts, generate them, and publish across multiple channels -- all without a human in the loop.
The core question was simple: Can a serverless cron pipeline replace an entire editorial team for a technical blog? The answer turned out to be yes, with the right architecture. The key insight was separating concerns: a scheduling layer (cron), a generation layer (LLM with structured prompts), a storage layer (JSON queue files), and a delivery layer (D1 database inserts). Each layer is independently testable and replaceable.
Traditional CMS solutions like WordPress or Ghost require a browser-based admin panel for publishing. Even with REST APIs, the workflow involves multiple HTTP calls, authentication tokens, and error handling for transient failures. AIKit's approach bypasses all of this by writing directly to the database from a cron job running on the same serverless edge.
The Architecture
AIKit's content pipeline has four layers, each running on Cloudflare's serverless stack:
**Layer 1: Content Calendar (Markdown)**
A flat markdown file at `~/cmo/content/content-calendar.md` serves as the editorial plan. Each entry maps to a post number, title, slug, category, and status. The calendar is updated programmatically after every publishing run -- new sections are appended, completed posts are marked live. This file is git-tracked, giving full revision history of the editorial plan.
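For illustration, a single calendar entry might look like the following. The column layout here is hypothetical, since the real file's format is not shown; what matters is that each line carries the post number, title, slug, category, and status described above:

```
## February 2026
- 239 | Scaling LLM Prompts Across Projects | scaling-llm-prompts | Marketing Automation | queued
- 240 | Affiliate Pricing Experiments | affiliate-pricing | Sales Channel | live
```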
**Layer 2: Queue System (JSON Files)**
Generated posts land in `~/cmo/content/queue/` as JSON files. Each file contains the title, body_text (800-1500 word markdown), excerpt, category, and tags. The queue-publisher script picks the first file, publishes it via the blog-publisher.py script, then archives it to `published/` with a timestamp prefix. File ordering is alphabetical, so prefix numbers (239-, 240-) determine publish order. JSON files were chosen over a database queue for simplicity: no migrations, no connection pools, no dead letter queues. A failed publish just leaves the file in place.
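A minimal sketch of that pick-publish-archive loop, assuming the directory layout described above. The invocation of blog-publisher.py is simplified, and its real argument handling may differ:

```python
import json
import shutil
import subprocess
import sys
from datetime import datetime, timezone
from pathlib import Path

QUEUE = Path.home() / "cmo" / "content" / "queue"
PUBLISHED = QUEUE.parent / "published"

def next_queued() -> Path | None:
    # Alphabetical ordering means numeric prefixes (239-, 240-) set the sequence.
    files = sorted(QUEUE.glob("*.json"))
    return files[0] if files else None

def publish_one() -> None:
    path = next_queued()
    if path is None:
        print("queue empty, nothing to publish")
        return
    post = json.loads(path.read_text())
    print(f"publishing {post['title']!r}")  # expects title, body_text, excerpt, category, tags
    subprocess.run(
        [sys.executable, "blog-publisher.py", str(path)],
        check=True,  # a failed publish raises here, leaving the file in the queue
    )
    # Archive with a timestamp prefix only after a successful publish.
    PUBLISHED.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    shutil.move(str(path), str(PUBLISHED / f"{stamp}-{path.name}"))

if __name__ == "__main__":
    publish_one()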
**Layer 3: Publishing Engine (blog-publisher.py)**
This script converts queue JSON into D1 database inserts. It generates ULIDs for post and revision IDs, converts markdown body_text to Sanity Portable Text JSON, and executes four sequential D1 queries: INSERT into ec_posts, INSERT into revisions, UPDATE ec_posts with revision refs, and INSERT into _emdash_seo. The circular foreign key between ec_posts and revisions requires this specific insert order -- the post must exist before the revision can reference it, and the revision must exist before the post can reference it back.
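The insert sequence can be sketched as below. The table names come from the text; the column names, database name, and value handling are assumptions, and the real script converts body_text to Portable Text before this point:

```python
import subprocess

DB = "emdash-db"  # hypothetical database name

def d1(sql: str) -> None:
    # Shell out to the wrangler CLI; --remote targets the production D1 instance.
    subprocess.run(
        ["wrangler", "d1", "execute", DB, "--remote", "--command", sql],
        check=True,
    )

def insert_post(post_id: str, rev_id: str, title: str, slug: str, body_json: str) -> None:
    # Values are interpolated here for brevity; the real script must escape them.
    # Step 1: the post row first, with no revision reference yet.
    d1(f"INSERT INTO ec_posts (id, title, status) VALUES ('{post_id}', '{title}', 'published')")
    # Step 2: the revision can now point at the existing post.
    d1(f"INSERT INTO revisions (id, post_id, body) VALUES ('{rev_id}', '{post_id}', '{body_json}')")
    # Step 3: close the circular FK by pointing the post back at its revision.
    d1(f"UPDATE ec_posts SET current_revision = '{rev_id}' WHERE id = '{post_id}'")
    # Step 4: SEO metadata lives in its own table.
    d1(f"INSERT INTO _emdash_seo (post_id, slug) VALUES ('{post_id}', '{slug}')")
```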
The script must run from the EmDash project directory with CLOUDFLARE_ACCOUNT_ID exported. Without it, wrangler can authenticate against the wrong account -- a common pain point when the default account of a multi-account login differs from the one that owns the target zone.
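A guard at the top of the script makes that failure mode explicit instead of letting wrangler guess (a hedged sketch):

```python
import os
import sys

# Pin the account before any wrangler call; on multi-account logins the
# default account can differ from the one owning the target D1 database.
if not os.environ.get("CLOUDFLARE_ACCOUNT_ID"):
    sys.exit("CLOUDFLARE_ACCOUNT_ID is not set; export it before publishing")
```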
**Layer 4: Delivery (Multi-Channel)**
After D1 insert, the pipeline cross-posts to Telegram (via send_message), updates the content calendar, and logs the result. The blog URL is live immediately -- no build step needed because EmDash renders content from D1 at request time via Astro SSR. The dynamic sitemap at /sitemap.xml automatically picks up new posts within seconds.
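The Telegram leg can be a single Bot API call. This sketch uses the standard sendMessage endpoint; the environment variable names are placeholders:

```python
import os
import requests  # third-party: pip install requests

def send_message(text: str) -> None:
    # Standard Telegram Bot API call; token and chat ID come from the environment.
    token = os.environ["TELEGRAM_BOT_TOKEN"]
    chat_id = os.environ["TELEGRAM_CHAT_ID"]
    resp = requests.post(
        f"https://api.telegram.org/bot{token}/sendMessage",
        json={"chat_id": chat_id, "text": text},
        timeout=10,
    )
    resp.raise_for_status()

# After a successful D1 insert:
# send_message(f"New post live: https://ai-kit.net/blog/{slug}")
```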
The Cron Schedule
Three cron jobs drive the pipeline on a Mon/Wed/Fri 6AM CT schedule:
| Job | Schedule | Purpose |
|---|---|---|
| Queue Publisher | 0 6 * * 1,3,5 | Publish next post in queue |
| Content Generator | 0 6 * * 1,3,5 | Refill queue if <= 1 post |
| Channel Conqueror | 0 6 * * 1,3,5 | Cross-post to all channels |
The jobs share state through the queue directory and the D1 database. Each run checks the published post count via `SELECT COUNT(*) FROM ec_posts WHERE status='published'` to determine the next post number. This prevents collisions between concurrent runs -- a real problem when all three jobs fire at the same minute.
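A sketch of that shared-state check, using wrangler's --json output. The parsed shape below is illustrative and can vary across wrangler versions, and the database name is an assumption:

```python
import json
import subprocess
from pathlib import Path

QUEUE = Path.home() / "cmo" / "content" / "queue"
DB = "emdash-db"  # hypothetical database name

def published_count() -> int:
    # --json makes wrangler emit machine-readable query results.
    out = subprocess.run(
        ["wrangler", "d1", "execute", DB, "--remote", "--json", "--command",
         "SELECT COUNT(*) AS n FROM ec_posts WHERE status='published'"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(json.loads(out)[0]["results"][0]["n"])

next_post_number = published_count() + 1  # derived from D1, not from local state
queue_depth = len(list(QUEUE.glob("*.json")))
if queue_depth <= 1:
    print(f"queue low ({queue_depth} left), generate post #{next_post_number}")
```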
Theme Rotation for Consistent Variety
To avoid publishing repetitive content, the pipeline uses a theme rotation based on DAY_OF_YEAR modulo 4:
- 0 = Content/Growth: SEO trends, content strategy, viral hooks
- 1 = Marketing Automation: Pipeline architecture, LLM scaling, cron workflows
- 2 = Sales Channel: Affiliate strategies, pricing, CRO
- 3 = Hybrid Dev+Marketing: Tool building, analytics, data-driven decisions
Project focus rotates hourly with HOUR modulo 5 across AIKit, CCFish, AiSalonHub, PlayableAdStudio, and DeFiKit. This ensures coverage across all projects while maintaining topic diversity. The rotation prevents stale topic repetition -- even with 238+ posts, each article covers a different angle.
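The whole rotation fits in a few lines. This sketch assumes DAY_OF_YEAR and HOUR are read from the run's clock:

```python
from datetime import datetime

THEMES = [
    "Content/Growth",        # 0: SEO trends, content strategy, viral hooks
    "Marketing Automation",  # 1: pipeline architecture, LLM scaling, cron workflows
    "Sales Channel",         # 2: affiliate strategies, pricing, CRO
    "Hybrid Dev+Marketing",  # 3: tool building, analytics, data-driven decisions
]
PROJECTS = ["AIKit", "CCFish", "AiSalonHub", "PlayableAdStudio", "DeFiKit"]

def rotation(now: datetime) -> tuple[str, str]:
    theme = THEMES[now.timetuple().tm_yday % 4]  # DAY_OF_YEAR modulo 4
    project = PROJECTS[now.hour % 5]             # HOUR modulo 5
    return theme, project

# rotation(datetime(2026, 2, 2, 6)) -> ("Marketing Automation", "CCFish")
# (day 33 of the year: 33 % 4 == 1; hour 6: 6 % 5 == 1)
```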
Results After 238 Posts
The pipeline has published 238 posts to ai-kit.net with zero human editing. Average time from topic generation to live URL: under 2 minutes. The blog receives consistent daily traffic from organic search, and the dynamic sitemap automatically indexes every new post. Cross-posting to Telegram reaches the community within seconds of publication.
The total infrastructure cost is near-zero: Cloudflare Workers' free tier handles the cron execution, D1's 5GB storage allowance covers all content, and the wrangler CLI costs nothing. No database servers, no CI/CD builds, no deployment pipelines, no CMS hosting fees.
Key Takeaways
- A content calendar as markdown is surprisingly effective: it is human-readable, git-trackable, and easy to manipulate programmatically without a database schema migration
- D1's write-then-read consistency is good enough: posts appear on the live site within seconds of INSERT, no cache invalidation needed
- JSON queue files are simpler than a database queue: no migrations, no connection pools, no dead letter queues, and files can be inspected with any text editor
- The theme rotation keeps content fresh without manual curation -- the modulo arithmetic keeps the theme mix evenly distributed across 238+ posts
- Multi-channel distribution multiplies reach with near-zero marginal effort per channel once the pipeline is built
- Start with one channel (blog), prove the pipeline, then add channels incrementally