© 2026 Loop · Operator desk for agent skills

Operational guide for building MCP servers (spec 2025-11-25); confirms transports, reference servers, security guidance, and signals to watch (Apr 2026).

Verified · 8 sources · Updated Apr 19, 2026

Anthropic MCP Development

Build Model Context Protocol (MCP) servers that expose tools and resources to AI agents, and clients that consume them across transport layers.

Authoritative spec: Model Context Protocol — Version 2025-11-25 (https://modelcontextprotocol.io/specification/2025-11-25). Note: the project has not published a newer spec version since 2025-11-25 (latest checks: Apr 2026); ongoing work is coordinated through SEPs and Working Groups (see Roadmap and Blog). Reference implementations: modelcontextprotocol/servers (https://github.com/modelcontextprotocol/servers) — actively maintained (commits visible Apr 2026).

When to use

  • Exposing an API, database, or service to AI agents through a standardized protocol
  • Building IDE integrations (Cursor, VS Code, Zed) that need tool access
  • Creating reusable tool servers that work across multiple AI clients
  • Connecting AI agents to internal systems (CRMs, ticketing, monitoring)
  • You need a protocol-level contract between tool providers and AI consumers
  • You want to leverage existing reference servers and the MCP spec (see links above)

When NOT to use

  • The tool is only used by one agent and will never be reused — just define it inline
  • You need a tiny one-off automation where the protocol overhead outweighs value
  • The integration is a simple REST API call that doesn't benefit from protocol abstraction
  • You're building a one-off script, not a reusable server
  • The target client doesn't support MCP (check compatibility first)

    Core concepts

    Overview (authoritative)

    MCP (Model Context Protocol) is an open protocol to connect LLM applications (hosts) with connectors (clients) and external services (servers). The specification (2025-11-25) defines the protocol schema and behavior; MCP communications use JSON-RPC 2.0 messages as the canonical message format — implementors MUST follow the spec for conformance. The project uses Working Groups and Spec Enhancement Proposals (SEPs) to evolve the protocol; check the project blog, Roadmap, and SEP list before assuming new features are in the spec.
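Since the spec mandates JSON-RPC 2.0 as the message format, every exchange reduces to a small envelope. A minimal sketch of those shapes (dependency-free and illustrative; the official SDKs build these messages for you, and the tool name `search_tickets` is made up):

```typescript
// Minimal JSON-RPC 2.0 shapes as used by MCP. The method name "tools/call"
// follows the 2025-11-25 schema; check the spec before relying on exact names.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number | string;
  method: string;
  params?: Record<string, unknown>;
};

type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number | string;
  result?: unknown;
  error?: { code: number; message: string; data?: unknown };
};

// Build a tools/call request, the message a client sends to invoke a tool.
function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): JsonRpcRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

const req = makeToolCall(1, "search_tickets", { query: "open incidents" });
```

Conformance lives at this layer: whatever transport carries the bytes, the payload must stay a valid JSON-RPC 2.0 message.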

    Architecture

    (architecture diagram preserved)

    MCP primitives

    • Tools: functions the AI can call (search, create, update)
    • Resources: read-only or dynamic data the AI can fetch (configs, schemas, docs)
    • Prompts: pre-built prompt templates the AI can use
    • Notifications: server → client updates for progress and state changes
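The primitives a server implements surface in its capabilities declaration during the initialize handshake. A sketch of that declaration (field names mirror the spec's capability objects, but treat the exact shape as illustrative and defer to the spec schema):

```typescript
// Declare only the primitives the server actually implements; clients use
// this to decide which list/read/call requests are legal.
type ServerCapabilities = {
  tools?: { listChanged?: boolean };       // server exposes callable tools
  resources?: { subscribe?: boolean; listChanged?: boolean }; // readable data
  prompts?: { listChanged?: boolean };     // prompt templates
};

// This hypothetical server has tools and resources but no prompts, so the
// prompts capability is simply omitted rather than declared empty.
const capabilities: ServerCapabilities = {
  tools: { listChanged: true },
  resources: { subscribe: false },
};
```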

    Transport and message layer

    • Message layer: JSON-RPC 2.0 (spec required)
    • Common transports: stdio (local / CLI / IDE) and Streamable HTTP (remote). The MCP specification (2025-11-25) defines stdio and Streamable HTTP as the standard transports; prefer these reference transports for compatibility. See the transports section of the spec for details: https://modelcontextprotocol.io/specification/2025-11-25/basic/transports
    • Custom transports: The spec allows custom transports to be implemented in a pluggable fashion; when adding a custom transport, ensure it follows the session, resumability, and message-delimiting rules in the spec.

    Note: earlier community examples used HTTP + Server-Sent Events (SSE). The spec standardizes on "Streamable HTTP" as the canonical HTTP-based transport; when migrating or implementing new servers prefer Streamable HTTP per the spec and reference implementations.
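For stdio, message delimiting is the part implementors most often get wrong: the spec frames each JSON-RPC message as a single newline-terminated line, so serialized messages must not contain embedded newlines. A dependency-free sketch of that framing rule:

```typescript
// stdio framing sketch: one JSON-RPC message per newline-delimited line.
// Illustrative only; the SDK transports implement this for you.
function encodeStdioMessage(msg: object): string {
  const line = JSON.stringify(msg);
  if (line.includes("\n")) {
    throw new Error("embedded newline would corrupt stdio framing");
  }
  return line + "\n";
}

// Split a received chunk back into individual messages, skipping blank lines.
function decodeStdioChunk(chunk: string): object[] {
  return chunk
    .split("\n")
    .filter((l) => l.trim().length > 0)
    .map((l) => JSON.parse(l));
}
```

This is also why stdout buffering (see the edge cases section) matters: a partially flushed line is an unparseable message.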

    Workflow

    Step 1: Scaffold the server

    (Scaffolding example preserved)
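A minimal package manifest for a TypeScript server, as a hedged sketch: the project name `mcp-ticket-server`, the `dist/server.js` layout, and the version ranges are illustrative; `@modelcontextprotocol/sdk` is the published TypeScript SDK package, but pin versions against its release notes.

```json
{
  "name": "mcp-ticket-server",
  "type": "module",
  "bin": { "mcp-ticket-server": "dist/server.js" },
  "scripts": {
    "build": "tsc",
    "start": "node dist/server.js"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0",
    "zod": "^3.23.0"
  }
}
```

The `mcp-` prefix follows the package-naming guidance in the edge cases section.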

    Step 2: Define tools with typed schemas

    (Tool definition examples preserved)
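To make the typed-schema contract concrete without pulling in dependencies, here is a hand-rolled sketch of a tool definition and a strict, fail-closed argument validator. In a real server you would express `inputSchema` with Zod (adding `.describe()` to every field, per the edge cases section); the tool `search_tickets` and its fields are made up:

```typescript
type Field = { type: "string" | "number"; description: string; required: boolean };

type ToolDef = {
  name: string; // snake_case, short and descriptive
  description: string;
  inputSchema: Record<string, Field>;
  handler: (args: Record<string, unknown>) => unknown;
};

// Strict server-side validation: reject missing required fields, wrong types,
// and any field not declared in the schema (fail closed on unexpected input).
function validateArgs(def: ToolDef, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [key, field] of Object.entries(def.inputSchema)) {
    const value = args[key];
    if (value === undefined) {
      if (field.required) errors.push(`missing required field: ${key}`);
    } else if (typeof value !== field.type) {
      errors.push(`field ${key} must be a ${field.type}`);
    }
  }
  for (const key of Object.keys(args)) {
    if (!(key in def.inputSchema)) errors.push(`unexpected field: ${key}`);
  }
  return errors;
}

const searchTickets: ToolDef = {
  name: "search_tickets",
  description: "Search open tickets by free-text query",
  inputSchema: {
    query: { type: "string", description: "Search query", required: true },
    limit: { type: "number", description: "Max results", required: false },
  },
  handler: (args) => ({ matches: [], query: args.query }),
};
```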

    Step 3: Expose resources

    (Resource examples preserved)
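The resource contract can be modeled as a URI-addressed, read-on-demand map: `resources/list` returns metadata only, while `resources/read` fetches content at call time (which is why the freshness caveat in the edge cases section exists). A dependency-free sketch; the `config://app/settings` URI and payload are made up:

```typescript
type Resource = { uri: string; name: string; mimeType: string; read: () => string };

const resources = new Map<string, Resource>([
  ["config://app/settings", {
    uri: "config://app/settings",
    name: "Application settings",
    mimeType: "application/json",
    // Read on demand so callers always see current state.
    read: () => JSON.stringify({ retries: 3, staleAfterSeconds: 60 }),
  }],
]);

// resources/list equivalent: metadata only, no content.
function listResources() {
  return [...resources.values()].map(({ uri, name, mimeType }) => ({ uri, name, mimeType }));
}

// resources/read equivalent: fetch content for one URI, failing loudly.
function readResource(uri: string): string {
  const res = resources.get(uri);
  if (!res) throw new Error(`unknown resource: ${uri}`);
  return res.read();
}
```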

    Step 4: Wire up transport and start

    (Transport wiring examples preserved)
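Conceptually, wiring a transport means connecting a byte stream to a dispatcher that routes `tools/call` requests to handlers and wraps results in the `isError` convention used throughout this guide. The SDK transports do this for you; this transport-agnostic sketch (with a made-up `echo` tool) only makes the flow visible:

```typescript
type Handler = (args: Record<string, unknown>) => unknown;

const handlers = new Map<string, Handler>();
handlers.set("echo", (args) => ({ echoed: args.text }));

// Take one raw JSON-RPC message off the transport, route it, and return the
// serialized response. Unknown tools get a JSON-RPC error; handler failures
// become structured tool results with isError: true.
function dispatch(raw: string): string {
  const msg = JSON.parse(raw);
  const name = msg?.params?.name;
  const handler = handlers.get(name);
  if (!handler) {
    return JSON.stringify({
      jsonrpc: "2.0",
      id: msg.id,
      error: { code: -32601, message: `unknown tool: ${name}` },
    });
  }
  try {
    const result = handler(msg.params.arguments ?? {});
    return JSON.stringify({
      jsonrpc: "2.0",
      id: msg.id,
      result: { content: result, isError: false },
    });
  } catch (err) {
    return JSON.stringify({
      jsonrpc: "2.0",
      id: msg.id,
      result: { content: String(err), isError: true },
    });
  }
}
```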

    Notes:

    • Exact SDK class and import paths may differ by language and package version; prefer the reference implementations in the servers repo when copying code.
    • Streamable HTTP handles resumability and message redelivery per the spec; follow the session and protocol headers documented in the specification.

    Step 5: Configure client integration

    (Configuration example preserved)
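Client configuration typically means adding the server under an `mcpServers` map in the client's config file, with credentials passed via the `env` block rather than hardcoded. The exact file name and keys vary by client (check the docs for Cursor, Claude Desktop, etc.); the server name, command, and env var below are made up:

```json
{
  "mcpServers": {
    "ticket-server": {
      "command": "node",
      "args": ["dist/server.js"],
      "env": {
        "TICKETS_API_TOKEN": "${TICKETS_API_TOKEN}"
      }
    }
  }
}
```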

    Security and agent safety (updated)

    MCP servers often expose sensitive operations and data. Design for defense-in-depth and assume adversarial inputs. Practical, concrete recommendations (aligned with recent industry guidance and agent runtime updates):

    • Minimize privileges: give each session and tool the least privilege required. Implement role-based checks and narrowly scoped tokens; prefer separate service accounts for high-risk tools.
    • Constrain risky actions: treat network-accessing, filesystem, and destructive tools as high-risk. Require explicit session-level approval for such tools (incremental consent) and require confirmation steps for irreversible actions.
    • Validate inputs and outputs: perform strict schema validation server-side and sanitize outputs before returning to the client. Log validation failures and surface them as structured errors. Use strong runtime validators (Zod, JSON Schema validators) and fail closed on unexpected input.
    • Limit sensitive data exposure: redact or truncate secrets in resources and notifications; never embed credentials in tool descriptions or resource payloads.
    • Use session consent and audit logs: record which agent/session invoked which tool, parameter values (redacted), and outputs returned. Store immutable audit events for compliance and post-hoc review.
    • Progress and timeouts: long-running or stateful operations should report progress via notifications and allow cancellation tokens; do not block JSON-RPC request threads for extended durations.
    • Prompt-injection mitigation: prefer deterministic, validated tool APIs over free-text execution. Never interpret tool descriptions or resource content as executable instructions without strict validation. Design the client-server contract so a successful prompt injection yields only limited impact (e.g., read-only view with minimal derivable secrets).
    • Runtime sandboxing and isolation: follow emerging agent runtime patterns (see OpenAI Agents SDK updates) to sandbox tool execution where possible — run untrusted tool invocations in isolated processes, containers, or language sandboxes and enforce strict I/O policies.
    • Incremental consent and WWW-Authenticate flows: implement the spec's incremental consent pattern for high-risk tools and expose clear UI affordances for granting per-session scopes; record consent artifacts in audit logs.
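The audit-log recommendation above can be sketched as a structured event that hashes parameters instead of storing raw values, so invocations stay attributable without leaking secrets. The field names are illustrative, not from the spec:

```typescript
import { createHash } from "node:crypto";

type AuditEvent = {
  timestamp: string;
  sessionId: string;
  toolName: string;
  scopes: string[];
  paramHash: string; // sha256 of params: auditable without exposing values
  outcome: "success" | "denied" | "error";
};

// Emit one immutable event per tool invocation; append these to a
// write-once store for compliance and post-hoc review.
function recordInvocation(
  sessionId: string,
  toolName: string,
  scopes: string[],
  params: object,
  outcome: AuditEvent["outcome"]
): AuditEvent {
  return {
    timestamp: new Date().toISOString(),
    sessionId,
    toolName,
    scopes,
    paramHash: createHash("sha256").update(JSON.stringify(params)).digest("hex"),
    outcome,
  };
}
```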

    References for these patterns include industry guidance on prompt injection and agent runtimes (see OpenAI links in References) and the MCP changelog and servers repo for implementation examples.

    Additional MCP-specific considerations (from the 2025-11-25 changelog):

    • Authorization discovery: the spec adds support for OpenID Connect Discovery 1.0 — implementors should support OIDC discovery endpoints for authorization server configuration where applicable.
    • Incremental scope consent: the spec and SEPs enable incremental scope consent patterns using WWW-Authenticate headers; treat high-risk tools as requiring explicit incremental consent flows and record consent events in audit logs.
    • Icons and metadata: servers MAY expose icon metadata for tools, resources, and prompts; clients can use this metadata to improve UX (spec changelog and SEP references documented in the spec).
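On the incremental-consent point: per OAuth 2.0 bearer-token usage (RFC 6750), a server can answer an under-scoped request with a WWW-Authenticate challenge naming the scope it needs, and the client then asks the user to grant exactly that scope. A simplified, illustrative sketch of extracting the requested scopes from such a challenge:

```typescript
// Parse the scope parameter out of a WWW-Authenticate challenge value.
// Simplified parsing for illustration; a real client should use a proper
// auth-header parser and follow the spec's consent flow.
function requiredScopes(wwwAuthenticate: string): string[] {
  const match = wwwAuthenticate.match(/scope="([^"]*)"/);
  return match ? match[1].split(" ").filter((s) => s.length > 0) : [];
}

const challenge = 'Bearer error="insufficient_scope", scope="tickets:write audit:read"';
```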

    Examples

    (Examples preserved — unchanged operational code for DB explorer, GitHub integration, and prompt templates.)

    Decision tree

    (Decision tree preserved; see original content)

    Edge cases and gotchas (updated)

    • Schema validation: Zod schemas in tool definitions are your contract — be strict with types and add .describe() to every field.
    • Error responses: Return structured error objects and include a boolean isError flag in tool results so the AI can detect failures reliably.
    • Timeout handling: Long-running tools should report progress via notifications and provide cancellation endpoints.
    • stdio buffering: When using stdio transport, ensure your process doesn't buffer stdout — use process.stdout.write or disable buffering.
    • Resource freshness: Resources are read on demand — if data changes frequently, document the staleness window and consider using resource versioning metadata.
    • Tool naming: Use snake_case for tool names, keep them short and descriptive — the AI reads the name for routing.
    • Capability negotiation: Declare only the capabilities your server supports — don't claim tools if you only have resources.
    • Session management: Streamable HTTP and stdio transports require session tracking — use the sessionId header/query parameter provided by the transport for routing and resume behavior.
    • Auth in env vars: Never hardcode credentials — always pass via the env block in MCP config.
    • Package naming: For npm-published servers, prefix with mcp- for discoverability.
    • Spec evolution: The MCP project uses SEPs and Working Groups for changes; don't assume features discussed in the roadmap are in the spec until a new spec version or SEP lands. The project blog and servers repo show active work on transports and release workflows (see references).
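The isError and structured-error gotchas above can be captured in a small result envelope that forces every handler outcome into one of two machine-readable shapes. A sketch (the error fields `code` and `retryable` are illustrative conventions, not SDK types):

```typescript
// Discriminated union: callers branch on isError instead of parsing prose.
type ToolResult =
  | { isError: false; content: unknown }
  | { isError: true; content: { code: string; message: string; retryable: boolean } };

function ok(content: unknown): ToolResult {
  return { isError: false, content };
}

function fail(code: string, message: string, retryable = false): ToolResult {
  return { isError: true, content: { code, message, retryable } };
}

// A handler failure becomes an actionable, structured error.
const notFound = fail("TICKET_NOT_FOUND", "No ticket with id 42", false);
```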

    Reference servers (clarified)

    The reference servers repository contains multiple maintained examples that implement MCP features and SDK usage. As of Apr 2026 the repository includes, among others: a general-purpose example server (Everything), a web-content Fetch server, a Filesystem server, a Git server, a Memory server, a Sequential Thinking server, and a Time server. These reference servers are intended for educational and testing purposes; evaluate and harden them before using in production.

    Evaluation criteria

    • Tool coverage: % of API surface exposed as MCP tools
    • Schema quality: all parameters have types, descriptions, and validation
    • Error handling: tool failures return isError with actionable messages
    • Transport compatibility: works on both stdio and Streamable HTTP transports
    • Response latency: tool calls complete in < 2s for interactive use
    • Resource freshness: resources reflect current state within documented SLA
    • Client compatibility: tested with at least 2 MCP clients (Cursor + Claude)

    Stay current (new)

    • Watch the MCP blog and Roadmap for Working Group announcements and SEP calls: https://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/ and https://modelcontextprotocol.io/development/roadmap
    • Monitor the MCP SEPs listing and GitHub Discussions for active proposals:
      • SEPs: https://modelcontextprotocol.io/seps
      • Discussions / Maintainer notes: https://github.com/modelcontextprotocol/modelcontextprotocol/discussions
    • Check the reference servers repository for transport implementations, examples, and release notes — the repo is actively maintained (see commits visible Apr 2026): https://github.com/modelcontextprotocol/servers
    • Changelog and maintainer updates: the project maintains a spec changelog and has published a maintainer team update (Apr 2026) on the project blog — watch these for governance and release signals: https://modelcontextprotocol.io/specification/2025-11-25/changelog and https://blog.modelcontextprotocol.io/posts/2026-04-08-maintainer-update/
    • Monitor agent runtime and tool-protocol convergence signals from platform vendors. Relevant recent posts include:
      • OpenAI — The next evolution of the Agents SDK (Apr 15, 2026): https://openai.com/index/the-next-evolution-of-the-agents-sdk
      • OpenAI — Designing AI agents to resist prompt injection (Mar 11, 2026): https://openai.com/index/designing-agents-to-resist-prompt-injection
      • OpenAI — From model to agent: Equipping the Responses API with a computer environment (Mar 11, 2026): https://openai.com/index/equip-responses-api-computer-environment
    • Subscribe to the project's blog and GitHub repo notifications (releases, discussions, and SEP updates) if you rely on future transport or enterprise features.

    Research-backed changes

    • Confirmed from tracked sources: No new official MCP spec version has been published since 2025-11-25 (checked Apr 2026); the project evolves through SEPs and Working Groups (MCP specification and Roadmap).
    • Confirmed from tracked sources: Reference servers repo is actively maintained with commits and maintainer activity visible Apr 2026 (modelcontextprotocol/servers on GitHub). The servers repo includes multiple reference servers such as Everything, Fetch, Filesystem, Git, Memory, Sequential Thinking, and Time (see repo README and file list).
    • Confirmed from tracked sources: The spec documents stdio and Streamable HTTP as standard transports and directs implementors to prefer Streamable HTTP for HTTP-based deployments (see Transports page in the spec).
    • Confirmed from tracked sources (spec changelog): Spec additions include OpenID Connect Discovery support, incremental scope consent via WWW-Authenticate, and icon metadata for tools/resources/prompts (see changelog: https://modelcontextprotocol.io/specification/2025-11-25/changelog).
    • Added: Security and agent-safety guidance aligned with industry posts on prompt injection and agent runtimes (OpenAI, Mar–Apr 2026).

    References

    • MCP specification (canonical): https://modelcontextprotocol.io/specification/2025-11-25
    • MCP transports (canonical): https://modelcontextprotocol.io/specification/2025-11-25/basic/transports
    • MCP changelog (key changes): https://modelcontextprotocol.io/specification/2025-11-25/changelog
    • MCP blog / roadmap: https://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/
    • Reference servers repo: https://github.com/modelcontextprotocol/servers (active commits Apr 2026)
    • MCP maintainer update (Apr 2026): https://blog.modelcontextprotocol.io/posts/2026-04-08-maintainer-update/
    • OpenAI — The next evolution of the Agents SDK (Apr 15, 2026): https://openai.com/index/the-next-evolution-of-the-agents-sdk
    • OpenAI — Designing AI agents to resist prompt injection (Mar 11, 2026): https://openai.com/index/designing-agents-to-resist-prompt-injection
    • OpenAI — From model to agent: Equipping the Responses API with a computer environment (Mar 11, 2026): https://openai.com/index/equip-responses-api-computer-environment

    Automation & run history

    Automation status and run history; only the owner can trigger runs or edit the schedule.

    Schedule: Daily · 9:00 AM · 8 sources · 30 runs this month · latest outcome v15 · next run in 1h
    Automation brief

    Check MCP specification repo for protocol version bumps, new transport types, and capability changes. Scan the MCP servers repo for new reference implementations. Monitor Anthropic and OpenAI blogs for tool-protocol convergence signals. Update server scaffolding templates and auth patterns.

    Latest refresh trace

    Apr 26, 2026 · 9:42 AM · trigger: Automation · editor: openai/gpt-5-mini · duration: 88.3s · status: success · revision: v15

    Refresh Apr 26, 2026: confirmed spec 2025-11-25, reinforced security guidance (OIDC discovery, JWKS rotation, sandboxing), and refreshed reference servers snapshot.

    • Updated the authoritative check date to Apr 26, 2026.
    • Expanded Security and agent safety with OIDC/JWKS and sandboxing guidance.
    • Clarified the reference servers snapshot and repo metadata.
    • Tightened Stay current and References to authoritative MCP pages and recent OpenAI guidance.

    The agent scanned 8 sources; fresh signals came from Anthropic News, OpenAI News, the MCP servers repo on GitHub, and the MCP specification.


    Research engine

    Anthropic MCP Development now combines 5 tracked sources with 2 trusted upstream skill packs. Instead of waiting on a single fixed link, it tracks canonical feeds, discovers new docs from index-like surfaces, and folds those deltas into sandbox-usable guidance.

    8 sources · 4 track · 4 discover · 5 official · 3 community · rank 6 · quality 94
    Why this is featured

    Directly tied to MCP adoption and a current source of real product pain, so surfacing it is not optional.

    Discovery process
    1. Track canonical signals

    Monitor 4 feed-like sources for release notes, changelog entries, and durable upstream deltas.

    2. Discover net-new docs and leads

    Scan 1 discovery-oriented source, such as docs indexes and sitemaps, then rank extracted links against explicit query hints instead of trusting nav order.

    3. Transplant from trusted upstreams

    Fold implementation patterns from Vercel API and OpenAI Docs so the skill inherits a real operating model instead of boilerplate prose.

    4. Keep the sandbox honest

    Ship prompts, MCP recommendations, and automation language that can actually be executed in Loop's sandbox instead of abstract advice theater.

    Query hints
    mcp spec releases · mcp · protocol · mcp servers repo · servers · claude · tool use · computer use
    Trusted upstreams
    OpenAI Docs

    Official OpenAI docs workflow for model selection, API changes, and canonical guidance.

    OpenAI · Docs · API · Security
    Vercel API

    Live platform access for deployments, env vars, docs, and operational tooling.

    Vercel · MCP · Operations · SEO + GEO

    Sources

    8 tracked

    • MCP Spec Releases (mcp · protocol)
    • MCP Servers Repo (mcp · servers)
    • Anthropic News (anthropic · claude · llm)
    • OpenAI News (openai · llm · agents)
    • Vercel AI SDK Releases (vercel · ai-sdk · agents)
    • Model Context Protocol - GitHub Servers (mcp · servers · reference)
    • Model Context Protocol Blog (mcp · roadmap · working-groups)
    • MCP Specification (mcp · specification · transports)

    Send this prompt to your agent to install the skill

    Agent prompt
    Use the skill at https://loooooop.vercel.app/api/skills/mcp-development/raw

    Versions

    v15 · 1d ago
    v14 · 3d ago
    v13 · 5d ago
    v12 · Apr 19, 2026
    v11 · Apr 18, 2026
    v10 · Apr 16, 2026
    v9 · Apr 14, 2026
    v8 · Apr 13, 2026
    v7 · Apr 11, 2026
    v6 · Apr 9, 2026
    v5 · Apr 7, 2026
    v4 · Apr 5, 2026
    v3 · Apr 3, 2026
    v2 · Apr 1, 2026
    v1 · Mar 29, 2026
    Included files: 1 (SKILL.md)

    Latest refresh

    Apr 19, 2026

    Added clarifications about reference servers present in the servers repo, tightened security guidance to reference agent runtime sandboxing and incremental consent flows, and added explicit monitoring guidance for platform agent SDK updates and active SEPs.

    What changed:
    • Security and agent safety: expanded guidance to include runtime sandboxing and explicit consent flows.
    • Stay current: added vendor agent SDK monitoring (OpenAI) and clarified where to watch SEPs and repo activity.
    • References and Research-backed changes: added repo-derived list of reference servers and confirmed no new spec version since 2025-11-25.
    • Minor clarifications across edge cases and reference servers sections.

    Update history (8 entries)

    Apr 19, 2026 · 4 sources

    Added clarifications about reference servers present in the servers repo, tightened security guidance to reference agent runtime sandboxing and incremental consent flows, and added explicit monitoring guidance for platform agent SDK updates and active SEPs.

    Apr 18, 2026 · 4 sources

    Updated security guidance, clarified transport recommendations (Streamable HTTP), and reinforced where the spec changed (OIDC discovery, incremental consent, icon metadata). Sourced from tracked MCP spec and servers repo signals, and aligned safety guidance with recent OpenAI posts.

    Apr 16, 2026 · 4 sources

    Small update: added MCP-specific security guidance reflecting 2025-11-25 changelog items (OpenID Connect Discovery, incremental scope consent via WWW-Authenticate, and icons metadata) and reinforced references to the authoritative spec and reference servers. No spec version bump detected (Apr 2026).

    Apr 14, 2026 · 4 sources

    Minor refresh: added maintainer update and changelog links, tightened security guidance with OpenAI references, and confirmed no new official spec version since 2025-11-25 (checked Apr 2026).

    Apr 13, 2026 · 4 sources

    Small updates to keep the MCP guidance current: clarified the spec is still 2025-11-25 (checked 2026-04-13), bumped reference-repo activity date to Apr 2026, added the MCP Roadmap/blog links and subscription guidance, and reinforced Streamable HTTP as the canonical HTTP transport.

    Apr 11, 2026 · 4 sources

    Updated transport guidance to reflect the MCP 2025-11-25 specification (Streamable HTTP as the canonical HTTP transport), refreshed code examples to prefer Streamable HTTP over SSE, and clarified session-management and server compatibility notes. Also added precise references to the spec transports page and the active servers repo.

    Apr 9, 2026 · 4 sources

    Refresh: clarify that the canonical MCP spec remains 2025-11-25, call out active maintenance in the reference servers repo (commits in Mar 2026), surface the 2026 roadmap/working-group model, and add an explicit Security and agent-safety section aligned with OpenAI guidance on prompt injection and agent runtimes.

    Apr 7, 2026 · 4 sources

    This update pins the authoritative MCP specification (2025-11-25), cites Anthropic's announcement, and points developers to the active GitHub reference-servers repository. It preserves examples and operational guidance while clarifying the message layer (JSON-RPC 2.0) and where to find canonical implementations.
