Imagine a blog that never sleeps. A content engine that wakes up at 6 AM, scans your product changelog, crafts a 1200-word deep-dive, optimizes it for SEO, and publishes to production — all before your marketing team has finished their first coffee. That's not a roadmap fantasy. It's running today on AIKit.

At the core of AIKit's Auto Blog & SEO Plugin is a sophisticated LLM orchestration pipeline that stitches together OpenRouter-backed model calls, prompt chains, content validation gates, and Cloudflare D1 persistence into a single autonomous workflow. Here's how it works under the hood.

The Problem — Content Creation Bottleneck

Marketing teams at B2B SaaS companies face a brutal equation. You need 12-16 blog posts per month to maintain SEO momentum. Each post requires: topic research, outlining, drafting, editing, SEO optimization, formatting, review, and publishing — roughly 6-8 hours per post, totaling 72-128 hours monthly. That's a full-time role just for blog production.

The result? Most teams compromise. They publish less frequently, rely on expensive agencies, or burn out their writers. Consistency slips, organic traffic plateaus, and the content gap widens against competitors who've automated. AIKit's Auto Blog plugin was built to solve precisely this equation.

The Solution — LLM Orchestration Inside EmDash CMS

Rather than bolt an AI feature onto an existing CMS, the Auto Blog plugin lives as a first-class EmDash plugin — the same architecture powering AIKit's entire content platform. This gives it direct access to the CMS's content model, storage layer, and publishing pipeline.

The plugin wraps three layers of LLM interaction:

1. **Generation Layer** — Prompt templates fed to OpenRouter models for article drafting

2. **Validation Layer** — A chain of LLM calls that fact-check, tone-audit, and SEO-score the output

3. **Transformation Layer** — Portable Text conversion and metadata extraction for D1 storage

Each layer is provider-agnostic. The plugin routes through OpenRouter, enabling fallback chains, model-specific routing (Claude for creative drafts, GPT-4o for structured SEO output), and cost optimization via smaller models for simpler tasks.
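
To make that concrete, here is a minimal sketch of task-based routing against OpenRouter's OpenAI-compatible chat completions endpoint. The task names and route table are illustrative, not the plugin's actual source:

```typescript
// Illustrative task-based routing table; the model slugs are real OpenRouter
// identifiers, but the tasks and pairings are examples, not AIKit's config.
type Task = "draft" | "seo_metadata" | "summary";

const MODEL_ROUTES: Record<Task, string[]> = {
  draft: ["anthropic/claude-3.5-sonnet", "openai/gpt-4o"],
  seo_metadata: ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"],
  summary: ["meta-llama/llama-3.1-70b-instruct", "openai/gpt-4o-mini"],
};

// OpenRouter speaks the OpenAI chat completions dialect; its `models` array
// asks the gateway to fall through the list if the first model is unavailable.
async function callModel(task: Task, prompt: string, apiKey: string): Promise<string> {
  const models = MODEL_ROUTES[task];
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: models[0],
      models,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`OpenRouter error ${res.status}`);
  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content;
}
```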

Architecture Overview — Three Runtime Environments

The pipeline spans Cloudflare Workers, D1, and OpenRouter:

Cloudflare Workers (Edge Compute)

The plugin's orchestrator runs as a Cloudflare Worker, triggered by a cron schedule (configurable per site — daily, hourly, or on-demand via webhook). The Worker coordinates the entire pipeline from topic selection to publication, keeping all LLM calls and database writes within the same edge region for minimal latency.
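
A stripped-down sketch of what those entry points look like in a module Worker, assuming standard bindings (the Env fields and runPipeline are illustrative names, not the plugin's actual exports):

```typescript
// Sketch of the orchestrator's shape: one cron entry point, one webhook.
export interface Env {
  DB: D1Database;       // D1 binding: posts, scores, generation logs
  CONFIG: KVNamespace;  // KV binding: OpenRouter configuration
}

export default {
  // Cron trigger, per the schedule configured for the site.
  async scheduled(event: ScheduledController, env: Env, ctx: ExecutionContext) {
    ctx.waitUntil(runPipeline(env));
  },

  // On-demand trigger via webhook.
  async fetch(req: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    if (req.method === "POST" && new URL(req.url).pathname === "/trigger") {
      ctx.waitUntil(runPipeline(env));
      return new Response("pipeline started", { status: 202 });
    }
    return new Response("not found", { status: 404 });
  },
};

async function runPipeline(env: Env): Promise<void> {
  // topic selection → generation → validation gates → conversion → publication
}
```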

D1 (SQLite at the Edge)

All generated content, metadata, SEO scores, and generation logs persist in Cloudflare D1. This gives the pipeline transactional guarantees — if generation fails midway, partial artifacts are rolled back. D1 also enables rich analytics: which topics perform best, which models generate the highest-scoring drafts, and which prompt templates drive the most engagement.
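
An atomic write like this can lean on D1's batch(), which runs its statements as a single implicit transaction that rolls back together on failure. The table and column names below are hypothetical:

```typescript
// Illustrative atomic write: both inserts succeed or neither does.
interface GeneratedPost {
  slug: string;
  title: string;
  body: string;
  model: string;
  seoScore: number;
}

async function persistDraft(db: D1Database, post: GeneratedPost): Promise<void> {
  await db.batch([
    db.prepare(
      "INSERT INTO posts (slug, title, body, status) VALUES (?, ?, ?, 'draft')"
    ).bind(post.slug, post.title, post.body),
    db.prepare(
      "INSERT INTO generation_logs (slug, model, seo_score) VALUES (?, ?, ?)"
    ).bind(post.slug, post.model, post.seoScore),
  ]);
}
```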

OpenRouter API (LLM Gateway)

Every LLM call goes through OpenRouter, which provides model fallback chains, unified billing, per-model cost tracking, and streaming support. The plugin stores its OpenRouter configuration in a KV namespace:

```json
{
  "openrouter_api_key": "sk-or-v1-...",
  "default_model": "anthropic/claude-3.5-sonnet",
  "fallback_models": ["openai/gpt-4o", "meta-llama/llama-3.1-70b-instruct"],
  "max_tokens_per_call": 4096,
  "temperature": 0.7,
  "rate_limit_rpm": 30
}
```
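
Loading that config from KV is then a one-liner. This sketch assumes a binding named CONFIG and a key named openrouter_config, both illustrative:

```typescript
// Illustrative loader; the binding and key names are assumptions,
// not the plugin's actual configuration.
interface OpenRouterConfig {
  openrouter_api_key: string;
  default_model: string;
  fallback_models: string[];
  max_tokens_per_call: number;
  temperature: number;
  rate_limit_rpm: number;
}

async function loadConfig(kv: KVNamespace): Promise<OpenRouterConfig> {
  const cfg = await kv.get<OpenRouterConfig>("openrouter_config", "json");
  if (!cfg) throw new Error("OpenRouter config missing from KV");
  return cfg;
}
```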

Implementation Deep-Dive

1. Prompt Templates

Each blog post type has a dedicated prompt template stored as a Portable Text document in D1. Templates use Handlebars-style variables:

```
You are an expert B2B SaaS content writer. Write a blog post with:

Title: {{title}}
Target Keywords: {{keywords}}
Tone: {{tone}}
Length: {{word_count}} words
Structure: hook opening, H2 sections, examples, actionable takeaways.
Brand voice: {{brand_voice}}
Products: {{products}}
```

Templates are editable through the EmDash admin UI, so marketing teams can iterate on prompts without touching code.
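
The {{variable}} substitution itself is simple string templating. A minimal sketch, assuming templates arrive from D1 as plain strings; failing loudly on an unresolved placeholder means Gate 1 (which checks for leftover variables) should never see one:

```typescript
// Minimal {{variable}} renderer; a production version would be more defensive.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => {
    const value = vars[name];
    if (value === undefined) {
      throw new Error(`Unresolved template variable: ${name}`);
    }
    return value;
  });
}

// renderTemplate(draftTemplate, { title: "...", keywords: "...", tone: "friendly" })
```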

2. LLM Provider Abstraction

The plugin defines a clean provider interface:

```typescript
interface LLMProvider {
  generate(config: GenerationConfig): Promise<LLMResponse>
  validate(response: LLMResponse): ValidationResult
  estimateCost(config: GenerationConfig): CostEstimate
}
```

Implemented providers include OpenRouterProvider (with fallback support), AnthropicDirectProvider, and OpenAIProvider. Switching providers is a config change, not a code change.
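
Provider selection can then be a small factory keyed off stored configuration. This sketch assumes the three classes above implement LLMProvider and take an API key in their constructors; "a config change, not a code change" then means editing the provider name string in settings:

```typescript
// Hypothetical factory; the provider names come from the text above,
// the constructor signatures are assumptions.
function createProvider(name: string, apiKey: string): LLMProvider {
  switch (name) {
    case "openrouter":
      return new OpenRouterProvider(apiKey);
    case "anthropic":
      return new AnthropicDirectProvider(apiKey);
    case "openai":
      return new OpenAIProvider(apiKey);
    default:
      throw new Error(`Unknown LLM provider: ${name}`);
  }
}
```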

3. Content Validation Pipeline

Before any generated content touches the database, it passes through three validation gates, each powered by a separate LLM call:

**Gate 1 — Structural Integrity** — Checks H1/H2 hierarchy, word count compliance, and resolved placeholder variables.

**Gate 2 — Tone & Brand Consistency** — Checks brand voice alignment, inappropriate idioms, promotional language, and off-topic tangents.

**Gate 3 — SEO Score** — Evaluates keyword density, heading structure, meta description quality, readability, and internal linking opportunities.

Each gate returns a score (0-100) and a list of issues. If any gate scores below its threshold, the pipeline triggers a regeneration cycle with the gate's feedback appended to the prompt. After 3 failed attempts, the post is flagged for human review.
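
Here is a rough sketch of that retry loop. The gate and result types are illustrative; only the threshold check, the feedback-appending regeneration, and the three-attempt cap come from the behavior described above:

```typescript
// Illustrative retry loop: run every gate, collect feedback from those
// below threshold, regenerate with that feedback, flag after max attempts.
interface GateResult { score: number; issues: string[] }
interface Gate { name: string; threshold: number; run(draft: string): Promise<GateResult> }

async function generateWithGates(
  basePrompt: string,
  gates: Gate[],
  generate: (prompt: string) => Promise<string>,
  maxAttempts = 3
): Promise<{ draft: string; flagged: boolean }> {
  let prompt = basePrompt;
  let draft = "";
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    draft = await generate(prompt);
    const feedback: string[] = [];
    for (const gate of gates) {
      const result = await gate.run(draft);
      if (result.score < gate.threshold) {
        feedback.push(`${gate.name}: ${result.issues.join("; ")}`);
      }
    }
    if (feedback.length === 0) return { draft, flagged: false };
    // Regeneration cycle: append the failing gates' feedback to the prompt.
    prompt = `${basePrompt}\n\nFix these issues from the previous draft:\n${feedback.join("\n")}`;
  }
  return { draft, flagged: true }; // hand off to human review
}
```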

4. Portable Text Conversion

AIKit's content model uses Portable Text, the structured JSON format pioneered by Sanity. The plugin converts LLM markdown output to Portable Text blocks: markdown headers become block objects with style levels, inline formatting becomes mark annotations, and code blocks extract as separate types. This conversion is the trickiest part of the pipeline — LLMs are inconsistent with markdown, and the converter must handle nested formatting, malformed links, and escaped characters.
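
A heavily simplified sketch of the conversion, handling only headings and paragraphs; the production converter also has to deal with inline marks, links, code blocks, and escaped characters. The block shape mirrors the standard Portable Text structure:

```typescript
// Simplified markdown → Portable Text: headings and paragraphs only.
interface PortableTextBlock {
  _type: "block";
  style: string; // "normal", "h1", "h2", ...
  children: { _type: "span"; text: string; marks: string[] }[];
  markDefs: unknown[];
}

function markdownToPortableText(markdown: string): PortableTextBlock[] {
  return markdown
    .split(/\n{2,}/)                       // blank lines separate blocks
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0)
    .map((chunk) => {
      const heading = chunk.match(/^(#{1,6})\s+(.+)$/);
      return {
        _type: "block" as const,
        style: heading ? `h${heading[1].length}` : "normal",
        children: [
          { _type: "span" as const, text: heading ? heading[2] : chunk, marks: [] },
        ],
        markDefs: [],
      };
    });
}
```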

5. Publication Flow

Once a post is validated and converted, publication proceeds in five steps:

1. A draft record is inserted into D1 with status="draft".

2. The LLM generates the excerpt, slug, and SEO metadata.

3. The post is queued for its scheduled publication slot.

4. If enabled, Open Graph image generation is triggered.

5. The sitemap is invalidated, the RSS feed is updated, and a Telegram announcement is dispatched.
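
As a sketch of steps 1 and 3, assuming hypothetical table and column names (posts, publish_at) rather than the plugin's actual schema:

```typescript
// Insert the converted draft and queue it for its publication slot.
interface ConvertedPost {
  slug: string;
  title: string;
  blocks: unknown[]; // Portable Text blocks
  excerpt: string;
}

async function insertAndSchedule(db: D1Database, post: ConvertedPost, publishAt: Date) {
  await db
    .prepare(
      `INSERT INTO posts (slug, title, body_portable_text, excerpt, status, publish_at)
       VALUES (?, ?, ?, ?, 'draft', ?)`
    )
    .bind(post.slug, post.title, JSON.stringify(post.blocks), post.excerpt, publishAt.toISOString())
    .run();
}
```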

Results

Since deploying on AIKit's own marketing site:

- **12x publishing frequency** — From 4 posts/month to 48+ posts/month

- **83% reduction** in per-post creation time (8 hours → ~80 minutes of human review)

- **94% acceptance rate** — Posts pass validation on first or second attempt

- **2.4x organic traffic growth** over 90 days from consistent publishing cadence

- **$4,200/month savings** vs. the previous agency retainer

About 6% of posts fail validation and get flagged. Even these failures are valuable — they identify prompt templates needing tuning or topics lacking source material.

Key Takeaways

- **Abstraction is everything.** The provider-agnostic LLM layer means the pipeline doesn't degrade when a model changes pricing or deprecates. Swap models with a config change.

- **Validation gates prevent garbage.** Raw LLM output is impressive but occasionally deranged. Three validation passes — structure, tone, SEO — catch edge cases before production.

- **D1 enables the transactional content pipeline.** SQLite at the edge means rollback, analytics, and draft management without a separate database layer.

- **Prompts are product — iterate on them.** The templated prompt system lets non-engineers tune generation quality. Our best prompts came from the content team, not engineering.

What's Next

The next iteration introduces multi-post campaigns with editorial calendars (pillar page + clusters), RAG-based topic sourcing from changelogs and support tickets, and A/B testing of prompt variants. The goal remains: make content production as automated and reliable as a CI/CD pipeline. Because for modern marketing teams, content IS code.