> Every blog post you publish on ai-kit.net is automatically listed in /llms.txt and /llms-full.txt within seconds. No rebuild, no deploy, no extra work. Here’s why that matters and how it works.

## The Problem: AI-Invisible Content

Traditional SEO optimized for human readers and Google bots. But 2026 has a new class of crawler—AI agents reading your site to feed RAG systems, answer user queries, and generate citations.

If your blog isn't structured for LLM consumption, you're invisible to:

- **Claude, ChatGPT, Gemini** — When users ask about tools in your category

- **GitHub Copilot, Cursor, Zed** — When developers search for documentation

- **Custom RAG pipelines** — When teams index your site for internal knowledge bases

- **Crawlers** — Like OpenAI's GPTBot, Google-Extended, Anthropic's Claude-Web

## The Solution: /llms.txt + /llms-full.txt

The llms.txt standard (proposed by Jeremy Howard of Answer.AI) defines two files:

| File | Purpose | Content |
|------|---------|---------|
| `/llms.txt` | Summary index | URL + short excerpt for each page |
| `/llms-full.txt` | Full content dump | Complete text of all pages, newline-separated |

AI agents check these files first. If they exist, the agent reads the full content without crawling, parsing, or rendering HTML.
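To make the format concrete, here is what a `/llms.txt` in this shape might look like (the post slug, title, and excerpt are hypothetical, not real AIKit entries):

```text
# AIKit Blog
> Collection of articles about EmDash, SEO, AI content generation, and plugin development

https://ai-kit.net/blog/example-post: Example Post Title
- A one-sentence excerpt that summarizes the post.
```

An agent can fetch this single file, pick the URLs it cares about, and skip HTML entirely.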

## How AIKit Implements This

Both files are **dynamic server-side routes**, not static files:

```typescript
// src/pages/llms.txt.ts
import type { APIRoute } from "astro";
import { env } from "cloudflare:workers";

export const GET: APIRoute = async () => {
  const db: D1Database = (env as any).DB;
  const result = await db
    .prepare(
      "SELECT slug, title, excerpt, published_at FROM ec_posts WHERE status='published' ORDER BY published_at DESC"
    )
    .all();

  let text = "# AIKit Blog\n";
  text += "> Collection of articles about EmDash, SEO, AI content generation, and plugin development\n\n";

  for (const post of result.results || []) {
    if (post.slug && post.title) {
      text += `https://ai-kit.net/blog/${post.slug}: ${post.title}\n`;
      if (post.excerpt) text += `- ${post.excerpt}\n`;
      text += "\n";
    }
  }

  return new Response(text, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
};
```

The key insight: **every new D1 insert automatically extends both files**. No build step, no CI pipeline, no manual regeneration.
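The "insert → visible" property is easiest to see if the text-building logic is isolated as a pure function. `buildLlmsText` and the sample rows below are illustrative, not part of the AIKit codebase; the function simply mirrors the loop in the route:

```typescript
interface PostRow {
  slug: string;
  title: string;
  excerpt?: string;
}

// Hypothetical helper: builds the /llms.txt body from query rows,
// mirroring the loop in the route sketched earlier.
function buildLlmsText(posts: PostRow[]): string {
  let text = "# AIKit Blog\n";
  text += "> Collection of articles about EmDash, SEO, AI content generation, and plugin development\n\n";
  for (const post of posts) {
    if (post.slug && post.title) {
      text += `https://ai-kit.net/blog/${post.slug}: ${post.title}\n`;
      if (post.excerpt) text += `- ${post.excerpt}\n`;
      text += "\n";
    }
  }
  return text;
}

// A newly inserted row appears on the very next request—no rebuild:
const before = buildLlmsText([{ slug: "first-post", title: "First Post" }]);
const after = buildLlmsText([
  { slug: "first-post", title: "First Post" },
  { slug: "new-post", title: "New Post", excerpt: "Just published." },
]);
console.log(after.includes("https://ai-kit.net/blog/new-post: New Post")); // true
console.log(before.includes("new-post")); // false
```

Because the function only depends on the rows passed in, the freshness of the file is exactly the freshness of the SELECT.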

## The Technical Foundation

### D1 Query at Runtime

Both `/llms.txt` and `/llms-full.txt` are Astro SSR routes that run a D1 SELECT query at request time:

```sql
SELECT slug, title, excerpt, published_at
FROM ec_posts
WHERE status = 'published'
ORDER BY published_at DESC;
```

For `/llms-full.txt`, the full `content` JSON is parsed back to plain text. Portable Text blocks like `h2`, `h3`, and `normal` are converted back to markdown:

```typescript
function portableTextToPlain(blocks: any[]): string {
  return blocks
    .map((block: any) => {
      const text = block.children?.map((c: any) => c.text).join("") || "";
      if (block.style === "h2") return `## ${text}`;
      if (block.style === "h3") return `### ${text}`;
      return text;
    })
    .join("\n\n");
}
```
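A quick sanity check of the converter with a hand-made Portable Text sample (the block shapes are assumed from the code above; the function is repeated so the snippet runs standalone):

```typescript
function portableTextToPlain(blocks: any[]): string {
  return blocks
    .map((block: any) => {
      const text = block.children?.map((c: any) => c.text).join("") || "";
      if (block.style === "h2") return `## ${text}`;
      if (block.style === "h3") return `### ${text}`;
      return text;
    })
    .join("\n\n");
}

// Minimal sample: one heading block, one paragraph split across two spans.
const sample = [
  { style: "h2", children: [{ text: "Setup" }] },
  { style: "normal", children: [{ text: "Install the plugin " }, { text: "first." }] },
];

console.log(portableTextToPlain(sample));
// ## Setup
//
// Install the plugin first.
```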

### Performance

D1 queries on the `ec_posts` table with a small dataset (138 rows) complete in under 2ms. Even at 10,000 posts, the query would take < 20ms—no pagination needed.
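If the table ever does grow large, a composite index covering both the filter and the sort keeps the query on the fast path. This is a hypothetical DDL sketch, assuming no such index exists yet:

```sql
-- Hypothetical index: lets D1 (SQLite) satisfy WHERE status = 'published'
-- and ORDER BY published_at DESC directly from the index.
CREATE INDEX IF NOT EXISTS idx_ec_posts_status_published
  ON ec_posts (status, published_at DESC);
```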

## Why This Matters for Your Content Strategy

| Traditional SEO | AI-Discoverable Content |
|-----------------|-------------------------|
| Optimized for Google keyword ranking | Optimized for LLM context windows |
| Requires HTML parsing, JS rendering | Direct text access via /llms.txt |
| Takes weeks to index | Instant visibility |
| Depends on backlinks | Depends on structured, answer-first content |
| Title tags + meta descriptions | Full content in agent context |

AIKit blog posts use the **answer-first** format—the first 2-3 sentences answer the core question directly. This matches how LLMs process information: they read the top of the document and decide whether to continue.

## Key Takeaways

1. **/llms.txt is not optional in 2026.** AI agents check it first. If you don't have one, you're invisible to LLM-based discovery.

2. **Dynamic > static.** A D1-backed /llms.txt auto-updates with every new post. No rebuild workflow needed.

3. **Structure matters for agents.** Clear headings (`##`, `###`), code blocks, and bullet points help LLMs parse your content efficiently.

4. **Zero extra effort with D1.** Every post you publish automatically feeds both /llms.txt and /llms-full.txt.

Check it yourself: [ai-kit.net/llms.txt](https://ai-kit.net/llms.txt) should list every published post with its excerpt.