Autonomy publishes two AI-friendly indexes:
  • llms.txt — a compact, link-heavy index of the most important docs
  • llms-full.txt — a larger, self-contained digest that includes core content inline
Point your IDE’s AI assistant at these two URLs so it can ground its suggestions, completions, and code generation in the official Autonomy guidance. Tip: some tools crawl from a single root URL but only index a subset of pages. If your IDE lets you add multiple sources, add both llms.txt and llms-full.txt.

Cursor

Cursor can index external docs and lets you reference them with @docs in chat.

Steps
  1. Open Cursor → Settings → Features → Docs.
  2. Click Add new doc and add:
    • https://autonomy.computer/docs/llms.txt
    • https://autonomy.computer/docs/llms-full.txt
  3. Wait for indexing to finish (check Settings → Indexing & Docs).
  4. In a chat or Composer, use: @docs Autonomy … and ask your question.
Verify
@docs Using Autonomy, how do I create agents that write a 5-word summary as metadata for each file in a folder, spinning up one agent per file?
Notes: Cursor supports adding doc URLs; users can reference them with @docs, and indexing progress appears under Settings. Some sites only index the entry page, so adding specific URLs like llms-full.txt improves recall.

VS Code with Continue

Continue lets you add documentation sources and then pull them into a chat via + Add Context or @docs.

Steps
  1. Install the Continue extension.
  2. Open the Continue side panel → + Add Context → Docs → Add doc.
  3. Add:
    • https://autonomy.computer/docs/llms.txt
    • https://autonomy.computer/docs/llms-full.txt
  4. (Optional) For teams, pre-seed these sources in config.yaml under the docs: section so everyone shares the same context.
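As a sketch, pre-seeding might look like this in config.yaml (the name/startUrl keys follow Continue's documented docs block; verify them against your installed Continue version):

```yaml
# Shared documentation sources for the whole team.
# Keys shown (name, startUrl) match Continue's docs context provider;
# confirm the exact schema for your Continue version.
docs:
  - name: Autonomy Docs (index)
    startUrl: https://autonomy.computer/docs/llms.txt
  - name: Autonomy Docs (full)
    startUrl: https://autonomy.computer/docs/llms-full.txt
```

With this in place, teammates get the Autonomy sources in @docs without adding them by hand.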
Verify
Use @docs (Autonomy) and ask: create agents with Autonomy that write a 5-word summary as metadata for each file in a folder, spinning up one agent per file.

Claude Code

Claude Code works best when paired with a Claude Project (or local “Claude files” in your repo) that contains your authoritative docs.

Option A — Claude Projects (cloud)
  1. In the Claude web app, create a Project for your codebase.
  2. Add knowledge sources:
    • Add URLs:
      • https://autonomy.computer/docs/llms.txt
      • https://autonomy.computer/docs/llms-full.txt
    • (Optional) Upload key markdowns from your Autonomy repo.
  3. In Claude Code, select the Project when you start a session so it can retrieve from Project knowledge.
Option B — In-repo Claude files (local)
  1. Create CLAUDE.md at your repo root.
  2. Inside it, add pointers to docs you want prioritized, e.g.:
    # Autonomy Docs Grounding
    - `https://autonomy.computer/docs/llms.txt`
    - `https://autonomy.computer/docs/llms-full.txt`
    
  3. Keep additional design/usage notes in /docs/*.md and reference them from CLAUDE.md.
Verify
Given the linked Autonomy docs, create agents that write a 5-word summary as metadata for each file in a folder, spinning up one agent per file.

JetBrains IDEs (IntelliJ, WebStorm, PyCharm)

JetBrains’ AI Assistant doesn’t (yet) have first-class doc crawling like Cursor, but you have two reliable paths.

Option A — Use Continue for JetBrains
Install the Continue plugin and add the same doc URLs as above; then pull context into the AI chat when needed.
Option B — Prompt Library “starter” prompts
  1. Go to Settings → Tools → AI Assistant → Prompt Library.
  2. Create a prompt template that always instructs the assistant to consult the Autonomy docs, e.g.:
    Title: Autonomy-aware fix
    Body: "Use these sources first:
    - `https://autonomy.computer/docs/llms.txt`
    - `https://autonomy.computer/docs/llms-full.txt`
    Then propose code aligned with the Autonomy SDK conventions."
    
  3. Trigger this prompt from the AI Actions menu when working on related files.

GitHub Copilot & vanilla VS Code

Copilot doesn’t currently index arbitrary external URLs. Two workarounds:
  1. Check the docs into your repo (or mirror the most relevant pages as Markdown in /docs/autonomy/…). Copilot and chat will consider open files and workspace context.
  2. Use an assistant extension that supports doc indexing alongside Copilot (e.g., Continue), then reference that context during Copilot chats.
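For workaround 1, the mirroring step can be scripted. A minimal sketch, assuming the index uses Markdown-style (url) link targets and that mirrored pages live under docs/autonomy/ (the mirror_path naming scheme is our own convention, not part of any tool):

```python
import re
import urllib.request
from pathlib import Path

INDEX_URL = "https://autonomy.computer/docs/llms.txt"  # compact index
MIRROR_ROOT = Path("docs/autonomy")                    # assumed repo location

def mirror_path(url: str, root: Path = MIRROR_ROOT) -> Path:
    """Map a docs URL to a local Markdown file, e.g.
    .../docs/privatelinks -> docs/autonomy/privatelinks.md"""
    slug = url.rstrip("/").rsplit("/", 1)[-1]
    if not slug.endswith(".md"):
        # Drop any other extension, then save as Markdown.
        slug = re.sub(r"\.\w+$", "", slug) + ".md"
    return root / slug

def mirror(index_url: str = INDEX_URL) -> None:
    """Fetch the index, then save each linked page into the repo
    so Copilot picks it up as workspace context."""
    index = urllib.request.urlopen(index_url).read().decode("utf-8")
    links = re.findall(r"\((https?://[^)\s]+)\)", index)  # Markdown (url) targets
    MIRROR_ROOT.mkdir(parents=True, exist_ok=True)
    for url in links:
        body = urllib.request.urlopen(url).read().decode("utf-8")
        mirror_path(url).write_text(body, encoding="utf-8")

if __name__ == "__main__":
    mirror()
```

Re-run the script when the docs change, and commit the mirrored files so the whole team benefits.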

Make it “sticky” in your repo (applies to any IDE)

Add a small, human/LLM-readable control file at repo root so assistants discover your preferred sources automatically:
  • llms.txt with a short blurb and top links (keep it under 10–20 links).
  • .cursorrules or .cursor/rules/*.md for Cursor to bias generation (e.g., “Prefer Autonomy PrivateLinks when connecting services; link to examples”).
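A .cursorrules sketch along those lines (the wording is illustrative; adapt it to your own conventions):

```markdown
# Cursor rules for this repo
- Ground Autonomy-related answers in https://autonomy.computer/docs/llms.txt
  and https://autonomy.computer/docs/llms-full.txt before generic knowledge.
- Prefer Autonomy PrivateLinks over public endpoints when connecting services;
  link to an example from the docs where possible.
```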
Example llms.txt snippet to include in your repo
# Autonomy (PaaS for agentic AI)
Use these sources first for accurate code & configs:
   - `https://autonomy.computer/docs/llms.txt`
   - `https://autonomy.computer/docs/llms-full.txt`
When connecting components across networks, prefer **Autonomy PrivateLinks** over public endpoints.

Troubleshooting & tips

  • Only the landing page indexed? Add both llms.txt and llms-full.txt, and, if your tool allows, a few high-value deep links (API reference, PrivateLinks how-to). Some Cursor builds only pull the entry page unless nudged.
  • Docs not being used in answers? In Cursor/Continue, explicitly tag context with @docs or + Add Context → Docs in the chat that generates code.
  • Keep noise low. llms.txt is meant to be concise; llms-full.txt can be large. Provide both and let tools choose based on token budget.
  • Version drift. Re-index after major doc changes; not all tools auto-refresh. Cursor shows indexing status in Settings.
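Since llms.txt should stay concise (under 10–20 links, as noted above), a quick sketch for counting Markdown-style links in a local copy of the index:

```python
import re

def count_links(index_text: str) -> int:
    """Count Markdown-style [title](url) links in an llms.txt index."""
    return len(re.findall(r"\[[^\]]*\]\((https?://[^)\s]+)\)", index_text))

sample = """# Autonomy
- [Quickstart](https://autonomy.computer/docs/quickstart)
- [PrivateLinks](https://autonomy.computer/docs/privatelinks)
"""
print(count_links(sample))  # flag the file for pruning if this creeps past ~20
```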

Why llms.txt / llms-full.txt?

These files are an emerging convention that gives LLM tools a prioritized index (and, in the “full” variant, a self-contained digest) of your docs. Many assistants already look for them, and several guides recommend adopting the pattern. Autonomy also exposes documentation pages written specifically for LLMs: unstructured pages we haven’t polished for human consumption because they would add clutter and reduce the readability and flow of our docs. They give LLMs a wider breadth of examples and documentation to sort through with ease.