- AEO (Answer Engine Optimization): Focused subset of GEO for short, direct-answer extraction and citation.
- Entity density: Number of named entities (products, standards, dates, version numbers) per relevant passage.
- Factual density: Ratio of verifiable claims and statistics to total content.
- /llms.txt: Community proposal for a machine-readable Markdown file at site root to point LLMs to canonical docs and clean .md artifacts (see https://llmstxt.org/). The spec expects a single H1 project/site name, an optional blockquote summary, and ordered sections that point to canonical Markdown files. Projects are encouraged to provide plain .md mirrors at the same URL with a .md suffix for deterministic ingestion.
- Structured answer: A self-contained paragraph (lead sentence + evidence + link) that directly answers a single question.
- Attribution signal: Any element that helps models credit content (clear authorship, publication dates, canonical URLs, organizational identity, and primary-source links).
### Step 1: Implement /llms.txt (recommended but optional)
- Publish a /llms.txt at the site root following the community proposal (https://llmstxt.org/). The file should be plain Markdown, human- and machine-readable, and point to canonical docs, API references, changelogs, and highest-value long-form pages.
- The spec notes a minimal required structure: an H1 with the project/site name; a short blockquote summary is recommended; follow with ordered sections (priority docs, APIs, changelogs, etc.). See the official spec for exact formatting guidance.
- Optionally publish clean Markdown versions of those pages at the same URL with .md appended (page.html.md or index.html.md). These make text-only ingestion easier and support the deterministic context extraction used by retrieval pipelines.
- Practical minimal /llms.txt contents: a one-line site description, a prioritized docs list, and explicit canonical URLs. Treat /llms.txt as additive metadata — it helps discovery but does not replace good structured content.
Example head of /llms.txt:
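The snippet below is an illustrative sketch only; the project name, summary text, section names, and URLs are hypothetical placeholders following the structure described above (H1 name, blockquote summary, ordered sections linking to .md mirrors):

```markdown
# ExampleProject

> One-line description of what ExampleProject does and who it is for.

## Docs

- [Quickstart](https://example.com/docs/quickstart.html.md): install and first run
- [API reference](https://example.com/docs/api.html.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.html.md): release history
```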
### Step 5: Platform-specific optimization (updated and source-backed)
- OpenAI (Responses API & agent runtimes): OpenAI announced agent-capable Responses API functionality and a container/shell environment for agent runtimes (see OpenAI docs: "Migrate to the Responses API" and related Agents/Tools guides). Operational test: run a grounding flow using the Responses API/Agents SDK and the shell/tooling environment to verify your canonical pages and .md artifacts are reachable from the runtime and included in the agent's context (developers.openai.com guides).
- Google / Gemini: Google introduced Flex and Priority inference tiers (Apr 2, 2026). You can set the service_tier parameter (e.g., "Flex" or "Priority") when calling the Gemini GenerateContent endpoints, and the API response will indicate which tier served the request. Operational guidance: record the service_tier and response metadata for each test query — interactive/priority queries may surface different grounding choices than cost-optimized/background (Flex) runs (Google AI Blog: "Introducing Flex and Priority Inference").
- Citation-first systems (Perplexity and similar): systems that place a premium on explicit source links will prefer pages with multiple short structured answers and clear primary-source hyperlinks. Supply inline links and short evidence snippets to increase probability of attribution.
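To make the tier-recording guidance above concrete, here is a hypothetical GenerateContent request body with service_tier set. The field name and values follow this section's description; the exact endpoint shape and field placement are assumptions, not confirmed API documentation:

```json
{
  "contents": [
    { "role": "user", "parts": [{ "text": "Which page documents X? Cite your sources." }] }
  ],
  "service_tier": "Priority"
}
```

Log the tier echoed in the response metadata alongside each citation score so tier effects can be separated from content changes.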
Platform behaviors diverge rapidly. Use the platform docs cited above as the authoritative sources for expected behavior and test across model versions and inference tiers.
### Step 6: Measure AI citability (concrete)
- Run a controlled set of target queries across AI providers and models (include model name / inference tier when available). Record whether your page is directly cited (3), paraphrased (1), or absent (0). Compute citation rate = total score / (queries × 3).
- For Gemini tests, include the service_tier value and any response metadata indicating tier fallback or downgrade.
- For Responses API/agent tests, record whether the agent runtime successfully fetched .md artifacts or files from your canonical URLs in the shell/container environment.
- Track results over time and annotate tests with provider release notes or Search Central updates to identify correlations.
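The Step 6 scoring can be sketched in a few lines of Python. The query labels are hypothetical; the weights (3/1/0) and the rate formula follow the step above:

```python
# Scores per Step 6: direct citation = 3, paraphrase = 1, absent = 0.
SCORE = {"cited": 3, "paraphrased": 1, "absent": 0}

def citation_rate(results):
    """Compute citation rate = total score / (number of queries * 3).

    `results` maps a target query string to one of
    "cited", "paraphrased", or "absent".
    """
    if not results:
        return 0.0
    total = sum(SCORE[outcome] for outcome in results.values())
    return total / (len(results) * 3)

# Hypothetical test run across four target queries.
run = {
    "how do I configure X": "cited",
    "what is X's rate limit": "paraphrased",
    "X vs Y comparison": "absent",
    "X changelog for v2": "cited",
}
print(citation_rate(run))  # (3 + 1 + 0 + 3) / 12 ≈ 0.583
```

Annotating each run with model name and inference tier, as recommended above, lets you compare rates per tier rather than only per provider.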
## Examples (concise)