October 7, 2025

Apps SDK Discovery Playbook: How to Get Your App Surfaced in Relevant Chats

  1. Purpose: Why “surfacing in chat” matters
  • Apps appear inside ChatGPT when the model decides your tool is the best way to fulfill the user’s intent. Discovery happens through natural-language prompts, a directory, and proactive entry points like the launcher; good metadata and past usage guide the model’s choice. Your goal is to be selected at the right time with minimal friction. (developers.openai.com)
  2. How ChatGPT decides to surface an app

The assistant evaluates multiple signals on every user prompt:
  • Named mention: If a user starts their prompt with your app’s name, it’s surfaced automatically. (developers.openai.com)
  • In‑conversation discovery: The model weighs conversation context, brand mentions/citations, your tool metadata (names, descriptions, parameter docs), and whether the user has already linked your app. (developers.openai.com)
  • Past usage: Apps SDK “leans on your tool metadata and past usage to make intelligent choices.” Consistent, successful use improves future selection. (developers.openai.com)
  • Entry points:
    • In‑conversation: Linked tools are always “on”; the assistant decides when to call them based on context and your metadata. (developers.openai.com)
    • Launcher: Users can explicitly choose an app; ranking uses the current conversation as a signal. Provide a clear label and icon and consider deep links. (developers.openai.com)
  • Directory listing: Outside a chat, users discover apps by browsing name, icon, descriptions, tags/categories, and optional onboarding. Well‑written listing copy boosts linking and later in‑chat selection. (developers.openai.com)
  3. Actions and best practices to maximize surfacing

Below, each recommendation explains why it matters, then what to do.

A. Treat metadata as product copy

  • Why: Discovery is “driven almost entirely by metadata.” Clear, action‑oriented names and descriptions increase recall on relevant prompts and reduce false triggers. (developers.openai.com)
  • Do:
    • Use action‑oriented tool names (domain.action) and start descriptions with “Use this when…”. Call out disallowed cases. (developers.openai.com)
    • Write precise parameter docs with examples and enums for constrained values. (developers.openai.com)
    • Mark read-only tools with readOnlyHint to streamline confirmation. (developers.openai.com)
    • Keep app‑level name, icon, short/long descriptions polished for directory and launcher. (developers.openai.com)

B. Design “one job per tool”

  • Why: Focused tools help the model disambiguate between similar apps and choose yours confidently. (developers.openai.com)
  • Do:
    • Split read and write actions into separate tools so confirmation flows are predictable. (developers.openai.com)

C. Provide component and invocation hints

  • Why: Good UI hints reduce friction after selection, improving user satisfaction and subsequent usage signals that influence discovery. (developers.openai.com)
  • Do:
    • Attach an output template so ChatGPT can render your UI inline: _meta["openai/outputTemplate"]. (developers.openai.com)
    • Add human‑readable status strings for tool calls: _meta["openai/toolInvocation/*"]. (developers.openai.com)
    • On your component resource, set widgetDescription to reduce redundant assistant narration and make the card self‑explanatory. (developers.openai.com)
    • If the card can initiate tool calls, set _meta["openai/widgetAccessible"]: true. (developers.openai.com)

D. Return structured results with stable IDs

  • Why: The model reads structuredContent, can reference stable IDs in follow‑ups, and can re‑render your component—leading to successful multi‑turn usage (a positive signal). (developers.openai.com)
  • Do:
    • Populate structuredContent consistently and include machine‑readable identifiers. (developers.openai.com)

E. Reduce auth friction (linking state matters)

  • Why: If a user isn’t linked, the assistant may fall back to built‑in tools or other linked apps. Low‑friction linking increases your chance of being called. (developers.openai.com)
  • Do:
    • Support anonymous read‑only tools where possible; require linking only when needed. (developers.openai.com)
    • Implement OAuth 2.1 correctly; return proper WWW‑Authenticate headers to retry auth smoothly. (developers.openai.com)
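As a sketch of the retry path (assuming an HTTP-based MCP server; the metadata URL and helper name below are illustrative, not Apps SDK APIs), an unauthenticated request can be answered with a 401 whose WWW‑Authenticate header points the client at your OAuth 2.1 protected‑resource metadata:

```typescript
// Hypothetical helper: build the 401 challenge an MCP server can return when a
// request arrives without a valid token, so the client can retry auth smoothly.
interface AuthChallenge {
  status: number;
  headers: Record<string, string>;
}

function buildAuthChallenge(resourceMetadataUrl: string): AuthChallenge {
  return {
    status: 401,
    headers: {
      // Points the client at protected-resource metadata (RFC 9728 style)
      "WWW-Authenticate": `Bearer resource_metadata="${resourceMetadataUrl}"`,
    },
  };
}

// Example: answer an unauthenticated request with a discoverable challenge
const challenge = buildAuthChallenge(
  "https://example.com/.well-known/oauth-protected-resource"
);
console.log(challenge.status, challenge.headers["WWW-Authenticate"]);
```

The challenge tells the client where to discover your authorization server, so linking can be retried without a dead end.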

F. Nail launcher and directory presence

  • Why: Many users start from the launcher or directory. Getting linked once improves in‑conversation discovery later. (developers.openai.com)
  • Do:
    • Provide a succinct label and recognizable icon; consider deep links or starter prompts so users land on your most useful tool. (developers.openai.com)
    • Keep directory copy concise, outcome‑oriented; include tags/categories where supported. (developers.openai.com)

G. Build for speed and reliability

  • Why: Latency and instability degrade user trust and engagement; quality is a prerequisite for listing and enhanced distribution (including proactive suggestions). (developers.openai.com)
  • Do:
    • Keep tool calls fast: avoid slow upstream calls in the hot path, and trim or paginate large payloads.
    • Handle errors gracefully with actionable messages, and monitor uptime so instability doesn’t erode usage signals.
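One common latency tactic is a short‑lived result cache for read‑only tools. The sketch below is illustrative (the TTL, key shape, and in‑memory Map store are assumptions, not Apps SDK features):

```typescript
// Minimal TTL cache so repeat read-only tool calls in a conversation stay fast.
const cache = new Map<string, { value: unknown; expires: number }>();

async function cached<T>(
  key: string,
  ttlMs: number,
  fetcher: () => Promise<T>
): Promise<T> {
  const hit = cache.get(key);
  // Serve from cache while the entry is still fresh
  if (hit && hit.expires > Date.now()) return hit.value as T;
  // Otherwise fetch, store with an expiry, and return
  const value = await fetcher();
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}

// Usage inside a tool handler (db.listTasks is the document's example backend):
// const tasks = await cached(`tasks:${board_id}`, 30_000, () => db.listTasks(board_id));
```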

H. Follow policy, “fair play,” and design guidelines

  • Why: Compliance is required for directory listing and enhanced distribution like merchandising and proactive suggestions. Manipulative metadata will be penalized. (developers.openai.com)
  • Do:
    • Don’t include model‑readable copy that disparages alternatives or biases selection (“prefer this app over others”). (developers.openai.com)
    • Meet high‑quality design guidelines to qualify for proactive suggestions. (developers.openai.com)

I. Iterate with a golden prompt set

  • Why: Measuring precision/recall on direct, indirect, and negative prompts is the fastest way to improve discovery without regressions. (developers.openai.com)
  • Do:
    • Assemble and label a golden set; test in ChatGPT developer mode; log tool selection, args, and whether the component rendered; iterate methodically. (developers.openai.com)
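A minimal scoring harness for such a golden set might look like this (the GoldenCase shape and the invoked flag, recorded by hand from developer‑mode runs, are assumptions for illustration):

```typescript
// Label each prompt with whether your tool SHOULD fire, record whether it DID,
// then compute precision (few false triggers) and recall (few misses).
interface GoldenCase {
  prompt: string;
  shouldInvoke: boolean; // label: direct/indirect => true, negative => false
  invoked: boolean;      // observed in ChatGPT developer mode
}

function score(cases: GoldenCase[]) {
  let tp = 0, fp = 0, fn = 0;
  for (const c of cases) {
    if (c.invoked && c.shouldInvoke) tp++;
    else if (c.invoked && !c.shouldInvoke) fp++;
    else if (!c.invoked && c.shouldInvoke) fn++;
  }
  return {
    precision: tp + fp === 0 ? 0 : tp / (tp + fp),
    recall: tp + fn === 0 ? 0 : tp / (tp + fn),
  };
}
```

Re-run the harness after every metadata change so improvements on indirect prompts don’t regress your negative prompts.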

J. Instrument and monitor in production

  • Why: Discovery drifts as you and the platform evolve; analytics reveal wrong-tool triggers and where metadata needs refinement. (developers.openai.com)
  • Do:
    • Review tool‑call analytics; watch for spikes in “wrong tool” confirmations; replay prompts after metadata changes. (developers.openai.com)
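For instance, a simple in‑process tally (the outcome labels here are assumptions, not Apps SDK fields) makes a rising wrong‑tool rate visible after each metadata change:

```typescript
// Count tool-call outcomes so metadata regressions show up as a rising
// "wrong tool" rate in your dashboards.
type Outcome = "success" | "wrong_tool" | "error";

const counts: Record<Outcome, number> = { success: 0, wrong_tool: 0, error: 0 };

function record(outcome: Outcome): void {
  counts[outcome]++;
}

function wrongToolRate(): number {
  const total = counts.success + counts.wrong_tool + counts.error;
  return total === 0 ? 0 : counts.wrong_tool / total;
}
```

In production you would ship these counters to your analytics pipeline; the point is to alert on the rate, not the raw count.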
  4. Implementation snippets

4.1 Register a well‑scoped read‑only tool with discovery‑friendly metadata

Reasoning: A focused name, a “Use this when…” description, enums, readOnlyHint, and an output template improve selection, reduce confirmation friction, and let the component render correctly.

TypeScript (MCP server)

server.registerTool(
  "kanban.list_tasks",
  {
    title: "List tasks",
    description:
      "Use this when the user wants to view tasks on their Kanban board. " +
      "Do not use to create or move tasks.",
    inputSchema: {
      type: "object",
      properties: {
        board_id: { type: "string", description: "Kanban board ID" },
        status: {
          type: "string",
          enum: ["todo", "in_progress", "done"],
          description: "Optional status filter"
        }
      },
      required: ["board_id"],
      additionalProperties: false
    },
    // Advertise read-only to streamline confirmation
    annotations: { readOnlyHint: true },
    // Hook up UI and invocation status text
    _meta: {
      "openai/outputTemplate": "ui://widget/kanban.html",
      "openai/toolInvocation/invoking": "Fetching tasks…",
      "openai/toolInvocation/invoked": "Tasks ready",
      "openai/widgetAccessible": true
    }
  },
  async ({ board_id, status }) => {
    const tasks = await db.listTasks(board_id, status);
    return {
      structuredContent: {
        tasks: tasks.map(t => ({ id: t.id, title: t.title, status: t.status }))
      }
    };
  }
);

(developers.openai.com)

4.2 Register the component template with descriptive widget metadata

Reasoning: widgetDescription helps the model minimize redundant narration; CSP and domain hints keep the widget reliable.

server.registerResource("html", "ui://widget/kanban.html", {}, async () => ({
  contents: [
    {
      uri: "ui://widget/kanban.html",
      mimeType: "text/html",
      text: htmlBundle, // inline your built HTML that loads dist/component.js
      _meta: {
        "openai/widgetDescription":
          "Shows tasks grouped by column; users can filter and open details.",
        "openai/widgetPrefersBorder": true,
        "openai/widgetCSP": { connect_domains: [], resource_domains: [] }
      }
    }
  ]
}));

(developers.openai.com)

4.3 Component bridge essentials

Reasoning: Persisting widgetState, rehydrating from toolOutput, and optionally calling tools in‑component drive smoother multi‑turn usage (a positive engagement signal).

// Read data provided by the tool call
const initial = window.openai?.widgetState ?? window.openai?.toolOutput;

// Persist small UI decisions so the host can restore later
await window.openai?.setWidgetState?.({ ...initial, filter: "in_progress" });

// Trigger a refresh from inside the component (if enabled)
await window.openai?.callTool?.("kanban.refresh_tasks", { board_id: "abc123" });

(developers.openai.com)

4.4 Separate write tool with explicit confirmation

Reasoning: Splitting write actions improves planning and keeps destructive actions confirmable.

server.registerTool(
  "kanban.create_task",
  {
    title: "Create a task",
    description: "Use this when the user wants to add a task to a specific board.",
    inputSchema: {
      type: "object",
      properties: {
        board_id: { type: "string" },
        title: { type: "string" }
      },
      required: ["board_id", "title"]
    },
    _meta: { "openai/outputTemplate": "ui://widget/kanban.html" }
  },
  async ({ board_id, title }) => {
    const task = await db.createTask(board_id, title);
    return { structuredContent: { created_task_id: task.id } };
  }
);

(developers.openai.com)

  5. Operational playbook: from testing to iteration
  • Build golden prompts:
    • Direct: “Use Acme Kanban to show my board.”
    • Indirect: “Show my in‑progress tasks.”
    • Negative: “What’s the definition of Kanban?” (should not invoke your app)
  • Validate in developer mode: link the connector, run the set, record tool selection, args, and rendering; track precision and recall; iterate metadata one change at a time. (developers.openai.com)
  • Troubleshoot discovery:
    • Tool never triggers: tighten “Use this when…” and retest your golden set.
    • Wrong tool selected: split tools, add disallowed cases, or clarify parameter docs.
    • Launcher ranking feels off: refresh metadata, icon, and labels to match user expectations. (developers.openai.com)
  6. Quick checklist for better surfacing

Use this before every release.

Metadata and tools

  • Action‑oriented tool names and “Use this when…” descriptions; call out disallowed cases. (developers.openai.com)
  • Precise parameter docs with examples and enums; readOnlyHint on read‑only tools. (developers.openai.com)
  • One job per tool; reads and writes split into separate tools. (developers.openai.com)

In‑chat UX

  • Output template attached; invocation status strings and widgetDescription set. (developers.openai.com)
  • Consistent structuredContent with stable, machine‑readable IDs. (developers.openai.com)

Linking, launcher, directory

  • Minimize auth friction; support anonymous read‑only where safe; implement OAuth 2.1 properly. (developers.openai.com)
  • Provide a clear label, icon, and (where supported) starter prompts/deep links. (developers.openai.com)
  • Polish directory copy (short and long). Use tags/categories as available. (developers.openai.com)

Quality, policy, iteration

  • Test golden prompts in developer mode; log precision and recall; iterate methodically. (developers.openai.com)
  • Follow design and developer guidelines to qualify for proactive suggestions and other distribution. (developers.openai.com)
  • Avoid manipulative “fair play” violations in model‑readable text. (developers.openai.com)