One Queue, Four Channels
AIKit publishes content once and distributes it to four platforms automatically. A single JSON queue file flows into the blog (D1), cross-posts to Dev.to via API, broadcasts a summary to Telegram, and prepares a thread for X/Twitter. The pipeline runs on cron with no manual copy-paste, no formatting drift. Here is how it works.
The Problem
Every content team hits the same wall. You write a solid technical post — architecture deep-dive, tutorial, case study — and publish it on your blog. Then you think: "I should post this on Dev.to for the developer audience." So you copy the markdown, adjust formatting for Dev.to's flavor, and hit publish. Then Telegram: write a summary, paste the link, send it. Then X/Twitter: draft a thread, break the post into tweet-sized chunks, schedule it. By the time you have distributed one post to four channels, you have spent 30-45 minutes on mechanical work that adds zero value.
This does not scale. At 12+ posts per week — AIKit's current cadence — manual distribution would consume 6-9 hours per week. It also creates inconsistency: the Telegram summary gets rushed, Dev.to omits code blocks, the X thread drops key points.
For developer tools especially, this is a self-inflicted wound. Developers live across platforms: blogs via RSS and Google, Dev.to for long-form tutorials, Telegram for real-time updates, X for quick signals. A single-platform strategy reaches only a fraction of the audience.
The Solution
AIKit solves this with a unified queue pipeline. Content is authored once as a structured JSON file in a filesystem queue directory. A set of publisher scripts reads that file and distributes it through channel-specific adapters:
- **blog-publisher.py** — Inserts into Cloudflare D1 (the blog database), making the post live on ai-kit.net with proper SEO metadata
- **devto-publisher.py** — Posts to Dev.to via API with draft mode for editorial review before publishing
- **telegram-publisher.py** — Sends a clean summary with link to the AIKit Telegram channel
- **x-publisher.py** — Generates a thread outline saved to a file for manual review and posting
Each adapter reads the same source JSON but formats the output for its platform's conventions. The content stays centralized; the distribution becomes atomic and reliable.
Architecture Overview
The pipeline runs as a cron-triggered workflow. Here is the data flow:
```
┌──────────────┐     ┌──────────────────┐     ┌───────────────────┐
│  Queue File  │────▶│  Queue Manager   │────▶│ Channel Adapters  │
│   (JSON)     │     │  (publisher.py)  │     │                   │
└──────────────┘     └──────────────────┘     └───────────────────┘
                                                       │
                                   ┌───────┬────────┬──▼───────┬────────┐
                                   │ Blog  │ Dev.to │ Telegram │   X    │
                                   │  D1   │  API   │ channel  │ thread │
                                   └───────┴────────┴──────────┴────────┘
```
Queue Manager
The queue manager (`publisher.py`) orchestrates the workflow: reads the next file, validates JSON, calls each adapter, moves the file to `published/`, and reports results.
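The loop described above can be sketched as follows. This is a simplified illustration, not the actual `publisher.py`: the directory names, adapter signature, and error handling are assumptions.

```python
# Hypothetical sketch of the queue-manager loop; directory names and the
# adapter interface are assumptions, not AIKit's actual implementation.
import json
import shutil
from pathlib import Path

QUEUE_DIR = Path("queue")          # assumed location of pending JSON files
PUBLISHED_DIR = Path("published")  # processed files are archived here

REQUIRED_KEYS = ("title", "body_text", "excerpt", "category", "tags")

def run_queue(adapters: list) -> list:
    """Process every queued file: validate, fan out to adapters, archive."""
    results = []
    for queue_file in sorted(QUEUE_DIR.glob("*.json")):
        item = json.loads(queue_file.read_text())
        # Validate the schema before touching any channel.
        missing = [k for k in REQUIRED_KEYS if k not in item]
        if missing:
            results.append({"file": queue_file.name, "status": "invalid", "missing": missing})
            continue
        # Fan out to every channel adapter and collect per-channel status.
        statuses = [adapter(item) for adapter in adapters]
        PUBLISHED_DIR.mkdir(exist_ok=True)
        shutil.move(str(queue_file), PUBLISHED_DIR / queue_file.name)
        results.append({"file": queue_file.name, "channels": statuses})
    return results
```

Because each adapter returns a status dict rather than raising, one failing channel does not block the others, and the cron run can report partial successes.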
blog-publisher.py
This adapter converts markdown `body_text` into Portable Text format (Sanity's block specification) and inserts it into Cloudflare D1 via wrangler CLI. It handles markdown parsing, canonical URL slug generation, category mapping, and excerpt extraction for meta descriptions.
```python
# Simplified adapter logic
import subprocess

def publish_to_blog(queue_item: dict) -> dict:
    portable_text = md_to_portable_text(queue_item["body_text"])
    slug = slugify(queue_item["title"])
    result = subprocess.run([
        "wrangler", "d1", "execute", "AIKIT_DB",
        "--command", build_insert_sql(slug, portable_text, queue_item)
    ], capture_output=True, text=True)
    return {"channel": "blog", "status": "ok" if result.returncode == 0 else "error"}
```
devto-publisher.py
The Dev.to adapter uses the [Dev.to API](https://developers.forem.com/api) to create articles programmatically. Key design decisions:
- **Draft mode by default** — Articles are created with `published: false` so a human can review formatting. Dev.to's markdown renderer differs subtly from the blog's Portable Text renderer, and code blocks especially need eyeballs.
- **Canonical URL passthrough** — The API accepts a `canonical_url` parameter. AIKit passes the blog's permalink, ensuring Dev.to readers who click through land on the original and Google sees the blog as authoritative.
- **Tag alignment** — Queue tags are mapped to Dev.to's predefined tag taxonomy via fuzzy matching.
- **Rate limiting** — Dev.to allows 5 requests/minute for free tiers. The adapter batches posts and sleeps between them.
```python
import requests

def publish_to_devto(queue_item: dict, api_key: str) -> dict:
    slug = slugify(queue_item["title"])
    canonical_url = f"https://ai-kit.net/blog/{slug}"
    payload = {
        "article": {
            "title": queue_item["title"],
            "body_markdown": queue_item["body_text"],
            "published": False,  # draft mode for editorial review
            "canonical_url": canonical_url,
            "tags": map_tags(queue_item["tags"]),
            "description": queue_item["excerpt"]
        }
    }
    resp = requests.post(
        "https://dev.to/api/articles",
        json=payload,
        headers={"api-key": api_key}
    )
    return {
        "channel": "devto",
        "status": "ok" if resp.ok else "error",
        "id": resp.json().get("id") if resp.ok else None
    }
```
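The `map_tags` call above aligns queue tags with Dev.to's taxonomy. A minimal sketch of that step, using stdlib fuzzy matching — the allowed-tag list, threshold, and normalization are assumptions, not Dev.to's actual taxonomy:

```python
# Hypothetical tag-mapping sketch; DEVTO_TAGS and the 0.6 cutoff are
# illustrative assumptions.
from difflib import get_close_matches

DEVTO_TAGS = ["python", "devops", "tutorial", "automation", "webdev", "api"]

def map_tags(tags: list, limit: int = 4) -> list:
    mapped = []
    for tag in tags:
        # Normalize, then fuzzy-match against the known Dev.to tag list.
        normalized = tag.lower().replace("-", "")
        candidate = get_close_matches(normalized, DEVTO_TAGS, n=1, cutoff=0.6)
        if candidate and candidate[0] not in mapped:
            mapped.append(candidate[0])
    return mapped[:limit]  # Dev.to articles accept at most four tags
```

Tags with no close match are dropped rather than guessed, so a typo in the queue file never publishes a nonsense tag.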
telegram-publisher.py
Telegram requires a different format. A full 1000-word post as a single message would be overwhelming. The adapter: extracts the first 200-300 words as a preview, adds 3-5 bullet-point takeaways, appends the canonical URL and relevant hashtags, and uses Telegram's `send_message` API with MarkdownV2 formatting. The message stays scannable — a developer assesses relevance in 3 seconds and clicks through if interested.
```python
def escape_telegram_md(text: str) -> str:
    special_chars = ["_", "*", "[", "]", "(", ")", "~", "`", ">", "#", "+", "-", "=", "|", "{", "}", ".", "!"]
    for char in special_chars:
        text = text.replace(char, f"\\{char}")
    return text
```
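The send step itself can be sketched like this. The `sendMessage` endpoint and `MarkdownV2` parse mode are Telegram Bot API features; the environment-variable names and summary logic are assumptions:

```python
# Hypothetical sketch of the Telegram send step; env var names and the
# 250-word preview cutoff are assumptions.
import os
import requests

def build_telegram_message(queue_item: dict, canonical_url: str) -> str:
    # Preview: first ~250 words of the body; real use would run the text
    # through escape_telegram_md() before sending as MarkdownV2.
    preview = " ".join(queue_item["body_text"].split()[:250])
    return f"*{queue_item['title']}*\n\n{preview}\n\n{canonical_url}"

def publish_to_telegram(queue_item: dict, canonical_url: str) -> dict:
    token = os.environ["TELEGRAM_BOT_TOKEN"]   # assumed variable name
    chat_id = os.environ["TELEGRAM_CHAT_ID"]   # assumed variable name
    resp = requests.post(
        f"https://api.telegram.org/bot{token}/sendMessage",
        json={
            "chat_id": chat_id,
            "text": build_telegram_message(queue_item, canonical_url),
            "parse_mode": "MarkdownV2",
        },
    )
    return {"channel": "telegram", "status": "ok" if resp.ok else "error"}
```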
x-publisher.py
X/Twitter is the trickiest channel. Long-form content does not fit in 280-character tweets. The adapter: parses `body_text` by `## ` headings, generates a thread outline with a hook tweet plus section summaries, and saves it as a `.thread` file for manual review. The heavy lifting of breaking 1000 words into tweet-sized chunks is done automatically; the human adjusts the hook and adds media.
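The heading-splitting step can be sketched as follows. This is an illustrative outline builder, not the actual adapter; the hook format and per-tweet character budget are assumptions:

```python
# Hypothetical thread-outline sketch; the 240-character summary budget
# (leaving room for numbering/media) is an assumption.
def build_thread(queue_item: dict, canonical_url: str) -> list:
    # Hook tweet: title plus the SEO excerpt.
    tweets = [f"{queue_item['title']}\n\n{queue_item['excerpt']}"]
    # One tweet per "## " section: heading plus a truncated summary.
    for section in ("\n" + queue_item["body_text"]).split("\n## ")[1:]:
        heading, _, body = section.partition("\n")
        summary = " ".join(body.split())[:240]
        tweets.append(f"{heading.strip()}\n{summary}")
    tweets.append(f"Full post: {canonical_url}")
    return tweets
```

Writing the resulting list to a `.thread` file for human review keeps the hook adjustment and media choices in the editor's hands.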
Implementation Details
Queue File Format
Every queue file follows a strict schema with `title`, `body_text` (markdown), `excerpt`, `category`, and `tags`:
```json
{
  "title": "Post Title",
  "body_text": "## Section One\n\nContent here...",
  "excerpt": "SEO excerpt...",
  "category": "Tutorials",
  "tags": ["aikit", "content-pipeline", "telegram"]
}
```
Canonical URLs and SEO
Cross-posting creates a canonicalization problem. Without explicit canonical URLs, Google might index the Dev.to version instead of the original, diluting SEO authority. The pipeline handles this: blog is the original (no canonical needed), Dev.to passes `canonical_url` to the blog's permalink, Telegram and X send links only. Search equity accumulates on the blog while other channels serve as distribution conduits.
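Both the blog and Dev.to adapters derive the permalink from the title via `slugify`, which the snippets above call but do not define. A minimal sketch — the exact normalization rules are an assumption:

```python
# Hypothetical slug/canonical-URL helper; the normalization rules are
# assumptions, not AIKit's actual implementation.
import re

def slugify(title: str) -> str:
    slug = title.lower()
    # Collapse any run of punctuation or whitespace into a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

def canonical_url(title: str) -> str:
    return f"https://ai-kit.net/blog/{slugify(title)}"
```

Because every channel computes the slug from the same title, the canonical URL is identical everywhere without any coordination between adapters.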
Draft Mode on Dev.to
Dev.to's draft mode is deliberate. The blog publisher goes straight to production because the D1 pipeline is battle-tested across 400+ posts. Dev.to is a secondary channel with a different rendering engine. Draft mode catches formatting issues — code blocks without syntax highlighting, broken relative links — before developers see them.
Results
The multi-channel pipeline has been in production for over 3 months, processing 400+ posts:
| Channel | Posts | Avg. Views/Post | Pattern |
|---------|-------|-----------------|--------|
| Blog (ai-kit.net) | 400+ | 850 | SEO organic |
| Dev.to | 80+ | 2,100 | Developer referral |
| Telegram | 200+ | 35% CTR | Direct engagement |
| X/Twitter | 150+ threads | Varies | Brand awareness |
Dev.to has been the strongest secondary channel. Developer-focused posts routinely get 2x-5x more views on Dev.to than the blog because of Dev.to's built-in discovery feed. A post about content pipeline architecture that might languish in Google's long tail reaches thousands of developers on Dev.to within hours.
Telegram drives the highest click-through rate (35%) of any channel. Each notification lands in front of a warm, engaged audience. The channel has grown to 2,400+ subscribers through organic cross-promotion.
Key Takeaways
1. **One source of truth** — A single JSON queue file is the canonical content source. Every channel adapter reads from it. No copy-paste, no version drift, no "did I update Telegram?" uncertainty.
2. **Channel-specific adapters, not templates** — Each channel gets separate adapter code tuned to its platform's API and formatting rules. A shared template with platform switches would be too brittle.
3. **Draft mode on secondary platforms** — The blog goes live immediately; secondary platforms use draft workflows to catch rendering issues without slowing the primary channel.
4. **Canonical URLs protect SEO** — Without them, cross-posting can hurt search rankings. Dev.to's `canonical_url` parameter is essential.
5. **Automate the mechanical, keep the strategic** — The pipeline handles formatting, API calls, and file management. Strategic decisions — which posts to promote, thread hooks — stay human.
6. **Measure channel ROI separately** — Dev.to drives views but weaker signup conversion. Telegram drives high-intent clicks but smaller volume. Each channel earns its keep on different metrics.
A multi-channel content pipeline turns one piece of content into four distribution events with zero additional writing. For any developer tool or SaaS with a technical audience, it is the highest-ROI automation investment after the content creation pipeline itself.