© 2026 Loop · Operator desk for agent skills


Tool Use & Function Calling


Operational guide (Apr 2026) for function calling and tool use across OpenAI Responses API, OpenAI Agents SDK, Anthropic programmatic tool calling (PTC; code_execution_20260120), and Vercel AI SDK v6 Beta.


Implement function calling and tool-use patterns for OpenAI and Anthropic models — from single tool calls to composable multi-tool chains with error recovery and front-end wiring (Vercel AI SDK).

When to use

  • The model must interact with external systems (APIs, DBs, files, shells).
  • You need structured, validated output instead of free-form text.
  • Building an agent that decides which tools to call and in what order.
  • Implementing human-in-the-loop workflows where the model proposes actions.
  • Extracting structured data reliably from unstructured inputs.

When NOT to use

  • Pure text generation with no external data.
  • A fixed sequence of API calls (hard-coded calls are simpler and faster).
  • Only classification/routing is required — tool calling adds latency.
  • You have fewer than two tools and they have no side effects — use structured outputs instead.

Core concepts (updated April 2026)

  • Workspace agents (Apr 22, 2026): OpenAI announced "workspace agents" in ChatGPT — cloud-run, Codex-powered agents designed to automate repeatable team workflows, connect tools, and scale work across systems. Treat workspace agents as a product-level offering that complements the Responses API + Agents SDK for embedding agentic functionality into team workflows.

    • Announcement: https://openai.com/academy/workspace-agents
  • OpenAI Responses API + shell/container (Mar 11, 2026): OpenAI equips the Responses API with a shell tool and a hosted container workspace (filesystem, optional structured storage like SQLite, restricted networking). Use the hosted container when long-running or stateful steps need artifact storage; use the shell tool for broad OS-level tasks (grep, curl, compilers). See the engineering post and Agents SDK docs for implementation details.

    • Engineering post: https://openai.com/index/equip-responses-api-computer-environment
    • Agents SDK docs: https://developers.openai.com/api/docs/guides/agents
  • OpenAI Agents SDK (Apr 15, 2026): the Agents SDK introduces a model-native harness and native sandbox execution. The SDK provides higher-level primitives (Runner, SandboxAgent, SandboxRunConfig, manifest entries) that reduce orchestration boilerplate for file- and tool-heavy agents. Prefer Agents SDK primitives for production agents when they match your threat model and deployment constraints; validate sandbox policies before relying on provider-run execution for sensitive data.

  • Anthropic Programmatic Tool Calling (PTC): Anthropic supports programmatic tool calling where Claude can write and execute code inside a sandboxed code-execution container. PTC reduces round trips and tokens by letting the model run multiple tool calls inside the sandbox and surface tool_use events that your orchestrator fulfills with matching tool_result objects.

    • Required tool version & model compatibility: PTC requires code_execution_20260120. Compatible models (April 2026) include: claude-opus-4-7, claude-opus-4-6, claude-sonnet-4-6, claude-opus-4-5 (20251101), and claude-sonnet-4-5 (20250929). Check the code execution compatibility table in Anthropic docs before deploying PTC.
    • Retention & compliance note: Programmatic tool calling is not eligible for Zero Data Retention (ZDR). Data processed by PTC follows the feature's standard retention policy — verify this against your compliance needs before sending sensitive data into the sandbox.
    • Docs: https://platform.claude.com/docs/en/agents-and-tools/tool-use/programmatic-tool-calling
  • Vercel AI SDK (v6 Beta): AI SDK 6 introduces agent abstractions (ToolLoopAgent/Agent interface), tool-execution approval flows (human-in-the-loop), and stabilized structured output generation (inputSchema/outputSchema). Use the v6 agent abstraction when you want built-in loop control, tool approval UI, and schema wiring between model outputs and front-end forms. Pin to specific beta package versions while the API stabilizes.

    • Announcement & beta docs: https://ai-sdk.dev/v5/docs/announcing-ai-sdk-6-beta
  • Cross-provider abstraction points

    • Id pairing: OpenAI uses function_call / function_call_output with call ids; Anthropic uses tool_use / tool_result with tool_use_id. Preserve and round-trip these ids exactly in your orchestrator.
    • Schema enforcement: Use Zod or JSON Schema to validate inputs/outputs. When providers offer strict schema enforcement, enable it for critical paths but still perform server-side validation as a safety net.
    • Provider-run vs client-run tools: Provider-run tools execute on provider infrastructure and have different retention/PII policies—document these differences and choose client-run for sensitive data unless explicit contractual guarantees exist.
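The id-pairing rules above can be sketched as a small normalization layer. The payload shapes below are simplified stand-ins for the real OpenAI and Anthropic event objects, not their actual SDK types:

```typescript
// Provider-agnostic tool-call record (sketch only).
type ToolCall = { provider: 'openai' | 'anthropic'; id: string; name: string; args: unknown };

// Normalize an OpenAI function_call item (hypothetical minimal shape).
function fromOpenAI(item: { call_id: string; name: string; arguments: string }): ToolCall {
  return { provider: 'openai', id: item.call_id, name: item.name, args: JSON.parse(item.arguments) };
}

// Normalize an Anthropic tool_use block (hypothetical minimal shape).
function fromAnthropic(block: { id: string; name: string; input: unknown }): ToolCall {
  return { provider: 'anthropic', id: block.id, name: block.name, args: block.input };
}

// Produce the paired result message, round-tripping the id exactly.
function toResult(call: ToolCall, output: unknown): Record<string, string> {
  return call.provider === 'openai'
    ? { type: 'function_call_output', call_id: call.id, output: JSON.stringify(output) }
    : { type: 'tool_result', tool_use_id: call.id, content: JSON.stringify(output) };
}
```

Keeping one internal record means the rest of the orchestrator never branches on provider-specific field names.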
  • Workflow

    Step 1: Define tool schemas

    • Declare explicit input and output schemas (Zod or JSON Schema). Schemas reduce argument coercion, simplify validation, and make UI wiring predictable.

    Example (Zod):

    const getWeatherSchema = z.object({ location: z.string(), units: z.enum(['celsius','fahrenheit']).default('fahrenheit') });

    • Supply these schemas to Vercel AI SDK inputSchema/outputSchema so object-generation UI and agent harness components can render forms and validate outputs before execution.

    Step 2: Implement deterministic tool handlers

    • Keep handlers small, deterministic, and idempotent. Wrap side effects with confirmations and rate-limited retries.
    • Validate inputs with your schema library server-side before executing. Return structured success/error objects (e.g., {status: 'ok', data: ...} or {status: 'error', code, message}).
    • For long-running or containerized tasks, return concise artifact references (file ids, short summaries) rather than inlining large outputs.
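A minimal handler following this contract might look like the sketch below; the validation is hand-rolled in place of Zod/JSON Schema, and the weather data is faked so the example stays self-contained:

```typescript
// Structured success/error contract for tool handlers (sketch; fake data).
type ToolResult =
  | { status: 'ok'; data: unknown }
  | { status: 'error'; code: string; message: string };

const FAKE_WEATHER: Record<string, number> = { Paris: 18, 'San Francisco': 15 };

function getWeather(args: unknown): ToolResult {
  // Server-side validation before any execution.
  if (typeof args !== 'object' || args === null || typeof (args as any).location !== 'string') {
    return { status: 'error', code: 'invalid_args', message: 'location must be a string' };
  }
  const { location, units = 'fahrenheit' } = args as { location: string; units?: string };
  if (units !== 'celsius' && units !== 'fahrenheit') {
    return { status: 'error', code: 'invalid_args', message: 'units must be celsius or fahrenheit' };
  }
  const celsius = FAKE_WEATHER[location];
  if (celsius === undefined) {
    return { status: 'error', code: 'not_found', message: `no data for ${location}` };
  }
  return {
    status: 'ok',
    data: { location, temperature: units === 'celsius' ? celsius : celsius * 9 / 5 + 32, units },
  };
}
```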

    Step 3: Orchestrator loop — OpenAI Responses (practical)

    • Send messages + tool definitions to Responses API. When the model returns a function_call, validate name and args server-side.
    • Execute the tool and immediately send back the paired function_call_output containing the same call id. The model continues after receiving that result.
    • Implementation checklist:
      • Validate tool name against a server-side allowlist.
      • Validate and coerce arguments using schema libraries (Zod/JSON Schema) before execution.
      • Preserve and return call id on the paired result.
      • Execute independent calls in parallel but tag results with ids; serialize dependent calls.
      • Summarize or persist large results and return artifact ids.

    Notes and best practices:

    • Use the OpenAI shell tool and hosted container workspace when your agent needs files, a filesystem, or restricted network access. The container solves common problems like where to place intermediate files and how to keep prompts compact. See the OpenAI engineering post for design rationale and caveats: https://openai.com/index/equip-responses-api-computer-environment
    • Prefer the Agents SDK higher-level primitives (Runner, SandboxAgent, Manifest entries) for production agents where available — they encapsulate sandboxing and harness patterns and reduce bespoke harness code.
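The loop and checklist above can be sketched provider-agnostically. Here the model's turns are pre-recorded stand-ins for repeated Responses API calls, so only the orchestration logic (allowlist check, id pairing, structured errors) is shown:

```typescript
// Stand-in for a model turn: either final text or a function_call.
type ModelTurn =
  | { kind: 'text'; text: string }
  | { kind: 'call'; call_id: string; name: string; arguments: string };

// Server-side allowlist/registry of executable tools (illustrative).
const REGISTRY: Record<string, (args: any) => unknown> = {
  get_weather: ({ location }) => ({ location, temp: 18 }),
};

function runLoop(turns: ModelTurn[]): { transcript: any[]; final?: string } {
  const transcript: any[] = [];
  for (const turn of turns) { // each iteration stands in for one Responses API round trip
    if (turn.kind === 'text') return { transcript, final: turn.text };
    const handler = REGISTRY[turn.name]; // refuse names outside the allowlist
    const output = handler
      ? { status: 'ok', data: handler(JSON.parse(turn.arguments)) }
      : { status: 'error', code: 'unknown_tool', message: turn.name };
    // Pair the result with the exact call id the model supplied.
    transcript.push({ type: 'function_call_output', call_id: turn.call_id, output: JSON.stringify(output) });
  }
  return { transcript };
}
```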

    Step 4: Orchestrator loop — Anthropic (practical)

    • When Claude emits tool_use blocks, execute each client-callable tool and return tool_result blocks that include the original tool_use_id so the model can continue.
    • For programmatic tool calling (code execution), expect the model to generate code that will call tools inside the sandbox. Those internal calls are surfaced as tool_use events to your orchestrator; fulfill them and return matching tool_result objects.
    • Implementation checklist:
      • Check the required code execution tool version (code_execution_20260120) and the model list that supports it before deploying PTC.
      • Validate and sanitize any inputs that will be interpolated into sandbox-executed code to reduce injection risk.
      • Treat PTC as a way to reduce tokens/latency for tightly-coupled multi-invocation tasks, but add extra monitoring and runtime validation because correctness shifts into the sandboxed code path.
      • Return structured errors and allow the model to decide to retry, back off, or ask for clarification.
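The tool_use → tool_result pairing can be sketched as below; block shapes are simplified stand-ins for the real Anthropic content blocks:

```typescript
// Simplified stand-in for an Anthropic tool_use content block.
type ToolUse = { type: 'tool_use'; id: string; name: string; input: any };

// Fulfill every tool_use block from one assistant turn and build the reply
// message of paired tool_result blocks.
function fulfill(blocks: ToolUse[], registry: Record<string, (input: any) => unknown>) {
  const content = blocks.map((b) => {
    const handler = registry[b.name];
    return {
      type: 'tool_result',
      tool_use_id: b.id, // must match the originating tool_use block exactly
      content: JSON.stringify(
        handler ? { status: 'ok', data: handler(b.input) } : { status: 'error', code: 'unknown_tool' },
      ),
      is_error: !handler,
    };
  });
  return { role: 'user', content };
}
```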

    Examples and patterns

    • Minimal OpenAI client example (conceptual):

    import OpenAI from 'openai';

    const client = new OpenAI();

    const tools = [{
      name: 'get_weather',
      description: 'Get current weather',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string' },
          units: { type: 'string', enum: ['celsius', 'fahrenheit'], default: 'fahrenheit' },
        },
        required: ['location'],
      },
    }];

    const res = await client.responses.create({
      model: 'gpt-5.4',
      tools,
      input: [{ role: 'user', content: "What's the weather in Paris?" }],
    });

    // If the response contains a function_call, validate and execute it, then send
    // back the paired function_call_output referencing the same call id.

    • Anthropic programmatic tool calling (concrete notes):

    Programmatic tool calling lets Claude execute code in a sandbox that runs multiple tool calls locally. The code execution feature must be enabled and compatible with the model version you choose. Data retention for PTC follows the feature's retention policy and is not ZDR by default; verify compliance for sensitive workloads. Docs: https://platform.claude.com/docs/en/agents-and-tools/tool-use/programmatic-tool-calling

    • Vercel AI SDK v6 (ToolLoopAgent example & install):

    // Install (beta): npm install ai@beta @ai-sdk/openai@beta @ai-sdk/react@beta
    // Pin to specific beta versions for production testing.

    import { ToolLoopAgent } from 'ai';
    import { weatherTool } from '@/tool/weather';

    export const weatherAgent = new ToolLoopAgent({
      model: 'anthropic/claude-sonnet-4.5',
      instructions: 'You are a helpful weather assistant.',
      tools: { weather: weatherTool },
    });

    const result = await weatherAgent.generate({ prompt: 'What is the weather in San Francisco?' });

    • Parallel tool execution pattern:

      • Identify independent tool calls (no shared side effects or data dependencies).
      • Execute them concurrently and return each result with the original call id so the model can match replies.
      • For dependent steps, serialize execution and return intermediate results in order.
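A sketch of the parallel pattern, using Promise.allSettled so one failing tool does not lose the others' results; the Call shape and registry are illustrative:

```typescript
type Call = { id: string; name: string; args: unknown };

async function executeParallel(
  calls: Call[],
  registry: Record<string, (args: unknown) => Promise<unknown>>,
): Promise<{ id: string; output: unknown }[]> {
  const settled = await Promise.allSettled(
    calls.map((c) =>
      registry[c.name] ? registry[c.name](c.args) : Promise.reject(new Error(`unknown tool: ${c.name}`)),
    ),
  );
  // Tag every result with its originating call id so the model can match replies.
  return settled.map((s, i) => ({
    id: calls[i].id,
    output:
      s.status === 'fulfilled'
        ? { status: 'ok', data: s.value }
        : { status: 'error', message: String(s.reason) },
  }));
}
```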
    • Multi-tool composition guidance (updated):

      • For short multi-step tasks, prefer the Responses API function_call loop (explicit function_call → execute → function_call_output) so the model and orchestrator remain synchronized.
      • For longer, tightly-coupled multi-invocation workflows, consider Anthropic PTC or provider-side programmatic execution (OpenAI Agents SDK / hosted container) to reduce round trips and token consumption — but add runtime monitoring, input/output validation, and explicit artifact summarization.
    • Error recovery: retries with exponential backoff for transient errors; for permanent failures, return structured errors so the model can adapt or ask for clarification.
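The retry policy can be sketched as below. The `transient` flag on errors is an assumption; real code would classify provider error codes or HTTP statuses:

```typescript
// Retry transient failures with exponential backoff; surface permanent
// failures as structured errors the model can act on.
async function withRetry<T>(
  fn: () => Promise<T>,
  opts = { attempts: 3, baseMs: 200 },
): Promise<{ status: 'ok'; data: T } | { status: 'error'; code: string; message: string }> {
  for (let i = 0; i < opts.attempts; i++) {
    try {
      return { status: 'ok', data: await fn() };
    } catch (err: any) {
      const transient = err?.transient === true; // assumed classification hook
      if (!transient || i === opts.attempts - 1) {
        return {
          status: 'error',
          code: transient ? 'retries_exhausted' : 'permanent',
          message: String(err?.message ?? err),
        };
      }
      await new Promise((r) => setTimeout(r, opts.baseMs * 2 ** i)); // 200ms, 400ms, ...
    }
  }
  return { status: 'error', code: 'unreachable', message: '' };
}
```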

    Edge cases and gotchas (updated)

    • Hallucinated tool names: always verify model-specified tool names against a server-side registry and refuse unknown names.
    • Id pairing and ordering: preserve exact id values (function_call call_id / Anthropic tool_use_id) and return paired results promptly. Mismatched ids or delayed replies are a leading source of orchestration bugs (OpenAI engineering posts).
    • Argument coercion: models may send strings for numeric fields. Use Zod/JSON Schema coercion or explicit parsing; if coercion fails, return a structured error the model can act on.
    • Streaming deltas: tool calls and outputs may arrive incrementally when streaming. Accumulate and validate final payloads before executing side effects, or use transactional semantics in your handlers.
    • Provider-run tool privacy: provider-run tools may retain or process results differently. Consult provider docs (OpenAI, Anthropic) and opt for client-run execution for sensitive inputs unless you have an explicit business agreement and documented retention policy. Treat provider-run tools as non-private by default.
    • Prompt injection and social engineering: follow latest provider guidance (OpenAI, Anthropic) to constrain risky actions, sanitize inputs used in shell/exec contexts, and implement command allowlists inside containers.
    • Infinite loops & safety: set iteration limits, track recent tool calls, and implement guardrails (timeouts, max steps, human review triggers).
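A dependency-free sketch of the coercion advice, standing in for Zod's z.coerce:

```typescript
// Accept "72" for a numeric field; fail with a structured error the model can
// act on. Hand-rolled for illustration only.
function coerceNumber(
  field: string,
  value: unknown,
): { ok: true; value: number } | { ok: false; error: { status: 'error'; code: 'invalid_argument'; message: string } } {
  const n = typeof value === 'number' ? value : typeof value === 'string' ? Number(value) : NaN;
  if (Number.isFinite(n)) return { ok: true, value: n };
  return {
    ok: false,
    error: {
      status: 'error',
      code: 'invalid_argument',
      message: `${field} must be a number, got ${JSON.stringify(value)}`,
    },
  };
}
```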

    Evaluation criteria

    • Tool selection accuracy: % correct tool chosen per intent.
    • Argument accuracy: % of tool calls with valid, complete arguments.
    • Execution success rate: % of tool calls that succeed on first attempt.
    • Loop efficiency: average tool calls per task.
    • Error recovery rate: % of failed calls where the model adapts successfully.
    • Latency per tool turn: time from model response to submitted tool result.
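Several of these metrics fall out of data the orchestrator already logs. A hypothetical log shape and two of the computations:

```typescript
// Hypothetical per-call log entry; real orchestrators would also record
// timestamps and chosen-vs-expected tools for the other metrics.
type LogEntry = { task: string; tool: string; ok: boolean };

function metrics(log: LogEntry[]) {
  const tasks = new Set(log.map((e) => e.task)).size;
  const firstTrySuccess = log.filter((e) => e.ok).length / log.length; // execution success rate (share of ok calls)
  const callsPerTask = log.length / tasks;                             // loop efficiency
  return { firstTrySuccess, callsPerTask };
}
```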

    Research-backed changes (sources)

    • OpenAI News (Apr 15, 2026): "The next evolution of the Agents SDK" — announcement describing model-native harness and native sandbox execution: https://openai.com/index/the-next-evolution-of-the-agents-sdk
    • OpenAI (Mar 11, 2026): "From model to agent: Equipping the Responses API with a computer environment" — shell tool and hosted container workspace: https://openai.com/index/equip-responses-api-computer-environment
    • OpenAI Developers Docs (Agents SDK): https://developers.openai.com/api/docs/guides/agents
    • Anthropic: Programmatic tool calling docs — feature description, compatibility, and retention notes (code_execution_20260120; compatible models listed): https://platform.claude.com/docs/en/agents-and-tools/tool-use/programmatic-tool-calling
    • Vercel AI SDK: AI SDK 6 Beta — agent abstraction, ToolLoopAgent, tool approval flows, and structured output guidance: https://ai-sdk.dev/v5/docs/announcing-ai-sdk-6-beta

    Send this prompt to your agent to install the skill

    Agent prompt
    Use the skill at https://loooooop.vercel.app/api/skills/tool-use-patterns/raw

    Versions

v15 · 1d ago
v14 · 3d ago
v13 · 5d ago
v12 · Apr 19, 2026
v11 · Apr 18, 2026
v10 · Apr 16, 2026
v9 · Apr 14, 2026
v8 · Apr 13, 2026
v7 · Apr 11, 2026
v6 · Apr 9, 2026
v5 · Apr 7, 2026
v4 · Apr 5, 2026
v3 · Apr 3, 2026
v2 · Apr 1, 2026
v1 · Mar 29, 2026
Included files: 1

SKILL.md

Automation: Active
schedule: Daily · 9:00 AM
sources: 18
next run: in 1h
last run: 1d ago

    Latest refresh

    3d ago

    Minor update: add Workspace agents and explicit links to OpenAI engineering post and Agents SDK docs; clarify Anthropic PTC compatibility and retention; confirm Vercel AI SDK v6 Beta details. Emphasize provider-run privacy defaults and validate schema/ID pairing for orchestrators.

    what changed

    Added Workspace agents note and link; expanded Core concepts with explicit OpenAI Agents SDK links; clarified Anthropic PTC compatibility and retention; updated Vercel AI SDK v6 Beta wording; tightened provider-run privacy guidance.

20 sources scanned · 171 signals found · 2 sources discovered

sections updated: Core concepts (updated April 2026), Workflow, Examples and patterns, Edge cases and gotchas (updated), Research-backed changes (sources)

status: success
trigger: Automation
editor: openai/gpt-5-mini
duration: 132.6s

Diff (+7 −9)
    +Generated: 2026-04-24T09:41:24.307Z
    +Summary: Minor update: add Workspace agents and explicit links to OpenAI engineering post and Agents SDK docs; clarify Anthropic PTC compatibility and retention; confirm Vercel AI SDK v6 Beta details. Emphasize provider-run privacy defaults and validate schema/ID pairing for orchestrators.
    −Generated: 2026-04-22T09:41:14.998Z
    +What changed: Added Workspace agents note and link; expanded Core concepts with explicit OpenAI Agents SDK links; clarified Anthropic PTC compatibility and retention; updated Vercel AI SDK v6 Beta wording; tightened provider-run privacy guidance.
    −Summary: This update verifies and tightens provider-specific guidance: confirms Anthropic PTC requires code_execution_20260120 and lists compatible models; emphasizes OpenAI Responses API shell and hosted container usage with a pointer to the Agents SDK docs; and aligns Vercel AI SDK guidance with the AI SDK 6 Beta announcement. It keeps prior orchestration guidance while adding specific doc links and experiment suggestions.
    −What changed: - Updated "Core concepts" with verified Anthropic PTC compatibility and retention note (code_execution_20260120 and model list).
    −- Clarified and linked OpenAI Responses API / Agents SDK guidance to developer docs.
    −- Updated Vercel AI SDK guidance to reflect AI SDK 6 Beta features and install guidance.
    −- Added explicit doc links in "Research-backed changes (sources)".
    Body changed: yes
    Editor: openai/gpt-5-mini
    −Changed sections: Core concepts (updated April 2026), Workflow, Examples and patterns, Research-backed changes (sources)
    +Changed sections: Core concepts (updated April 2026), Workflow, Examples and patterns, Edge cases and gotchas (updated), Research-backed changes (sources)
    Experiments:
    +- Benchmark Agents SDK sandbox vs Responses hosted container for typical file-heavy workflows (latency, token cost, security tradeoffs).
    +- Measure token & latency savings using Anthropic programmatic tool calling (code_execution_20260120) vs per-call round-trips on multi-invocation tasks.
    −- Track OpenAI Agents SDK changelog and sample manifests for at least two minor releases to confirm recommended sandbox APIs and default security settings.
    +- Prototype Vercel AI SDK v6 ToolLoopAgent with inputSchema/outputSchema UI for human-in-the-loop tool approval and measure developer friction and runtime errors.
    −- Prototype a small agent using Vercel AI SDK v6 ToolLoopAgent + Anthropic PTC to measure round-trip token and latency savings vs the Responses API loop.
    Signals:
    - News (Anthropic News)
    - Research (Anthropic News)
Update history (8)

3d ago · 4 sources

    Minor update: add Workspace agents and explicit links to OpenAI engineering post and Agents SDK docs; clarify Anthropic PTC compatibility and retention; confirm Vercel AI SDK v6 Beta details. Emphasize provider-run privacy defaults and validate schema/ID pairing for orchestrators.

5d ago · 4 sources

    This update verifies and tightens provider-specific guidance: confirms Anthropic PTC requires code_execution_20260120 and lists compatible models; emphasizes OpenAI Responses API shell and hosted container usage with a pointer to the Agents SDK docs; and aligns Vercel AI SDK guidance with the AI SDK 6 Beta announcement. It keeps prior orchestration guidance while adding specific doc links and experiment suggestions.

Apr 19, 2026 · 4 sources

    Made provider-specific details explicit: added Anthropic programmatic tool calling required tool version (code_execution_20260120), listed compatible Claude models, noted PTC is not eligible for ZDR; added Vercel AI SDK v6 Beta install and pinning guidance; reinforced Agents SDK recommendation and sandbox caution. Tightened examples and sources to point to canonical docs.

Apr 18, 2026 · 4 sources

    This update adds concrete, research-backed details: OpenAI Agents SDK sample version (openai-agents>=0.14.0), clarifies Responses API shell/container guidance, records Anthropic PTC model/tool-version compatibility and retention caveat, and incorporates Vercel AI SDK v6 agent/ToolLoopAgent patterns for front-end integration.

Apr 16, 2026 · 4 sources

    This update incorporates April 2026 provider signals: OpenAI’s Responses API shell and hosted container details (March 11), the Agents SDK native sandbox + harness (April 15), Anthropic’s programmatic tool calling guidance, and Vercel AI SDK v6 beta inputSchema changes. Changes clarify when to use provider-side programmatic execution vs function_call loops, tighten privacy recommendations for provider-run tools, and add multi-tool composition guidance and experiments for future validation.

Apr 14, 2026 · 4 sources

    Updated examples and guidance to reflect OpenAI's March 11, 2026 Responses API shell/container announcement, Anthropic's programmatic tool calling docs, and Vercel AI SDK 6 beta's schema-driven UI for agent front-ends. Emphasizes id round-tripping, schema enforcement, and provider-run privacy considerations.

Apr 13, 2026 · 4 sources

    Updated examples and operational guidance to reflect OpenAI Responses API shell/computer environment, strict id pairing (function_call ↔ function_call_output and tool_use ↔ tool_result), Anthropic programmatic tool calling, and Vercel AI SDK inputSchema/outputSchema wiring.

Apr 11, 2026 · 4 sources

    Clarified Responses API call_id pairing, reinforced Anthropic programmatic tool calling flow and allowed_callers, and updated Vercel AI SDK guidance to prefer v6 inputSchema/outputSchema wiring for front-end agent UIs. Added explicit orchestration checklists and research-backed source links.

Automations: 1 (1 active)

Usage
views: 0
copies: 0
refreshes: 14
saves: 0
api calls: 0