Crawler Summary
OpenClaw agent: realtime-examples (Realtime API Agents Demo). This is a demonstration of more advanced patterns for voice agents, using the OpenAI Realtime API and the OpenAI Agents SDK. This project uses the OpenAI Agents SDK, a toolkit for building, managing, and deploying advanced AI agents. The SDK provides a unified interface for defining agent behaviors and tool integrations, and built-in support for agent orchestration, state management… Published capability contract available. No trust telemetry is available yet. Last updated 2/24/2026.
Freshness
Last checked 2/22/2026
Best For
Teams that want a published capability contract with explicit auth modes and schema references.
Not Ideal For
realtime-examples is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.
Evidence Sources Checked
editorial-content, capability-contract, runtime-metrics, public facts pack
Public facts
6
Change events
1
Artifacts
0
Freshness
Feb 22, 2026
Published capability contract available. No trust telemetry is available yet. Last updated 2/24/2026.
Trust score
Unknown
Compatibility
MCP
Freshness
Feb 22, 2026
Vendor
Marcaguilaar
Artifacts
0
Benchmarks
0
Last release
0.1.0
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Published capability contract available. No trust telemetry is available yet. Last updated 2/24/2026.
Setup snapshot
git clone https://github.com/marcaguilaar/telefonica_neod.git
Setup complexity is MEDIUM. Standard integration tests and API key provisioning are required before connecting this agent to production workloads.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
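The sandbox step above can be sketched in TypeScript. This is a minimal, hypothetical harness, not part of the published contract: the handler signature, payload shape, and `allowedHosts` list are all assumptions. It runs an agent handler against a mock payload while recording every outbound host, so unexpected egress surfaces before real customer data is involved.

```typescript
// Hypothetical sandbox harness: run a handler against a mock payload
// while recording every outbound host it tries to reach.
type AgentHandler = (
  payload: unknown,
  fetchFn: (url: string) => Promise<string>,
) => Promise<unknown>;

async function sandboxRun(
  handler: AgentHandler,
  mockPayload: unknown,
  allowedHosts: string[],
): Promise<{ result: unknown; violations: string[] }> {
  const violations: string[] = [];
  // Fake fetch: never touches the network, only records egress targets.
  const recordingFetch = async (url: string): Promise<string> => {
    const host = new URL(url).host;
    if (!allowedHosts.includes(host)) violations.push(host);
    return JSON.stringify({ ok: true, mocked: true });
  };
  const result = await handler(mockPayload, recordingFetch);
  return { result, violations };
}
```

A handler that reaches for an unexpected host shows up in `violations` rather than on the wire, which is the point of tracing egress before a production rollout.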
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Marcaguilaar
Protocol compatibility
MCP
Auth modes
mcp, api_key
Machine-readable schemas
OpenAPI or schema references published
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
4
Snippets
0
Languages
typescript
mermaid
sequenceDiagram
participant User
participant ChatAgent as Chat Agent<br/>(gpt-4o-realtime-mini)
participant Supervisor as Supervisor Agent<br/>(gpt-4.1)
participant Tool as Tool
alt Basic chat or info collection
User->>ChatAgent: User message
ChatAgent->>User: Responds directly
else Requires higher intelligence and/or tool call
User->>ChatAgent: User message
ChatAgent->>User: "Let me think"
ChatAgent->>Supervisor: Forwards message/context
alt Tool call needed
Supervisor->>Tool: Calls tool
Tool->>Supervisor: Returns result
end
Supervisor->>ChatAgent: Returns response
ChatAgent->>User: Delivers response
end
typescript
import { RealtimeAgent } from '@openai/agents/realtime';
// Define agents using the OpenAI Agents SDK
export const haikuWriterAgent = new RealtimeAgent({
name: 'haikuWriter',
handoffDescription: 'Agent that writes haikus.', // Context for the agent_transfer tool
instructions:
'Ask the user for a topic, then reply with a haiku about that topic.',
tools: [],
handoffs: [],
});
export const greeterAgent = new RealtimeAgent({
name: 'greeter',
handoffDescription: 'Agent that greets the user.',
instructions:
"Please greet the user and ask them if they'd like a haiku. If yes, hand off to the 'haikuWriter' agent.",
tools: [],
handoffs: [haikuWriterAgent], // Define which agents this agent can hand off to
});
// An Agent Set is just an array of the agents that participate in the scenario
export default [greeterAgent, haikuWriterAgent];
javascript
import authentication from "./authentication";
import returns from "./returns";
import sales from "./sales";
import simulatedHuman from "./simulatedHuman";
import { injectTransferTools } from "../utils";
authentication.downstreamAgents = [returns, sales, simulatedHuman];
returns.downstreamAgents = [authentication, sales, simulatedHuman];
sales.downstreamAgents = [authentication, returns, simulatedHuman];
simulatedHuman.downstreamAgents = [authentication, returns, sales];
const agents = injectTransferTools([
authentication,
returns,
sales,
simulatedHuman,
]);
export default agents;
mermaid
sequenceDiagram
participant User
participant WebClient as Next.js Client
participant NextAPI as /api/session
participant RealtimeAPI as OpenAI Realtime API
participant AgentManager as Agents (authentication, returns, sales, simulatedHuman)
participant o1mini as "o4-mini" (Escalation Model)
Note over WebClient: User navigates to ?agentConfig=customerServiceRetail
User->>WebClient: Open Page
WebClient->>NextAPI: GET /api/session
NextAPI->>RealtimeAPI: POST /v1/realtime/sessions
RealtimeAPI->>NextAPI: Returns ephemeral session
NextAPI->>WebClient: Returns ephemeral token (JSON)
Note right of WebClient: Start RTC handshake
WebClient->>RealtimeAPI: Offer SDP (WebRTC)
RealtimeAPI->>WebClient: SDP answer
WebClient->>WebClient: DataChannel "oai-events" established
Note over AgentManager: Default agent is "authentication"
User->>WebClient: "Hi, I'd like to return my snowboard."
WebClient->>AgentManager: conversation.item.create (role=user)
WebClient->>RealtimeAPI: {type: "conversation.item.create"}
WebClient->>RealtimeAPI: {type: "response.create"}
authentication->>AgentManager: Requests user info, calls authenticate_user_information()
AgentManager-->>WebClient: function_call => name="authenticate_user_information"
WebClient->>WebClient: handleFunctionCall => verifies details
Note over AgentManager: After user is authenticated
authentication->>AgentManager: transferAgents("returns")
AgentManager-->>WebClient: function_call => name="transferAgents" args={ destination: "returns" }
WebClient->>WebClient: setSelectedAgentName("returns")
Note over returns: The user wants to process a return
returns->>AgentManager: function_call => checkEligibilityAndPossiblyInitiateReturn
AgentManager-->>WebClient: function_call => name="checkEligibilityAndPossiblyInitiateReturn"
Note over WebClient: The WebClient calls /api/chat/completions with model="o4-mini"
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB MCP
Editorial quality
ready
This is a demonstration of more advanced patterns for voice agents, using the OpenAI Realtime API and the OpenAI Agents SDK.
This project uses the OpenAI Agents SDK, a toolkit for building, managing, and deploying advanced AI agents. The SDK provides:
- A unified interface for defining agent behaviors and tool integrations.
- Built-in support for agent orchestration, state management…
For full documentation, guides, and API references, see the official OpenAI Agents SDK Documentation.
NOTE: For a version that does not use the OpenAI Agents SDK, see the branch without-agents-sdk.
There are two main patterns demonstrated:
Setup: install dependencies with npm i, add OPENAI_API_KEY to your env (either add it to your .bash_profile or equivalent, or copy .env.sample to .env and add it there), then start the server with npm run dev.
Chat-Supervisor: this is demonstrated in the chatSupervisor Agent Config. The chat agent uses the realtime model to converse with the user and handle basic tasks, like greeting the user, casual conversation, and collecting information, while a more intelligent, text-based supervisor model (e.g. gpt-4.1) is used extensively for tool calls and more challenging responses. This approach provides an easy onramp and high-quality answers, with a small increase in latency. You can control the decision boundary by "opting in" specific tasks to the chat agent as desired.
Video walkthrough: https://x.com/noahmacca/status/1927014156152058075
In this exchange, note the immediate response to collect the phone number, and the deferral to the supervisor agent to handle the tool call and formulate the response. There is a roughly 2-second gap between the end of "give me a moment to check on that." being spoken aloud and the start of "Thanks for waiting. Your last bill...".
sequenceDiagram
participant User
participant ChatAgent as Chat Agent<br/>(gpt-4o-realtime-mini)
participant Supervisor as Supervisor Agent<br/>(gpt-4.1)
participant Tool as Tool
alt Basic chat or info collection
User->>ChatAgent: User message
ChatAgent->>User: Responds directly
else Requires higher intelligence and/or tool call
User->>ChatAgent: User message
ChatAgent->>User: "Let me think"
ChatAgent->>Supervisor: Forwards message/context
alt Tool call needed
Supervisor->>Tool: Calls tool
Tool->>Supervisor: Returns result
end
Supervisor->>ChatAgent: Returns response
ChatAgent->>User: Delivers response
end
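The decision boundary in the diagram above can be sketched as a small router. This is an illustrative sketch, not code from the repo: the `routeTurn` function, the `Turn` shape, and the `requiresToolCall` flag are assumptions standing in for whatever signal the real chat agent uses.

```typescript
// Illustrative router for the chat-supervisor pattern: the realtime chat
// agent answers simple turns itself and defers tool calls or harder
// questions to a smarter, text-based supervisor model.
type Route = "chat" | "supervisor";

interface Turn {
  text: string;
  requiresToolCall: boolean; // e.g. a lookup the chat agent cannot do alone
}

function routeTurn(turn: Turn, optedInTasks: RegExp[]): Route {
  // Tasks explicitly "opted in" stay with the fast chat agent.
  if (optedInTasks.some((re) => re.test(turn.text))) return "chat";
  // Anything needing a tool call goes to the supervisor.
  return turn.requiresToolCall ? "supervisor" : "chat";
}
```

Whenever this returns "supervisor", the chat agent would say something like "Let me think" and forward the message and context, as in the sequence diagram.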
Customization: these prompts can be adapted to benefit from gpt-4.1 in your voice agents. Domain-specific behavior lives under the ==== Domain-Specific Agent Instructions ==== section of chatAgentInstructions. We recommend a brief yaml description rather than json to ensure the model doesn't get confused and try calling the tool directly; permitted actions are enumerated in the # Allow List of Permitted Actions section. To reduce cost, try gpt-4o-mini-realtime for the chatAgent and/or gpt-4.1-mini for the supervisor model. To maximize intelligence on particularly difficult or high-stakes tasks, consider trading off latency and adding chain-of-thought to your supervisor prompt, or using an additional reasoning model-based supervisor that uses o4-mini.
Sequential Handoffs: this pattern is inspired by OpenAI Swarm and involves the sequential handoff of a user between specialized agents. Handoffs are decided by the model and coordinated via tool calls, and possible handoffs are defined explicitly in an agent graph. A handoff triggers a session.update event with new instructions and tools. This pattern is effective for handling a variety of user intents with specialist agents, each of which might have long instructions and numerous tools.
Here's a video walkthrough showing how it works. You should be able to use this repo to prototype your own multi-agent realtime voice app in less than 20 minutes!
In this simple example, the user is transferred from a greeter agent to a haiku agent. See below for the simple, full configuration of this flow.
Configuration in src/app/agentConfigs/simpleExample.ts
import { RealtimeAgent } from '@openai/agents/realtime';
// Define agents using the OpenAI Agents SDK
export const haikuWriterAgent = new RealtimeAgent({
name: 'haikuWriter',
handoffDescription: 'Agent that writes haikus.', // Context for the agent_transfer tool
instructions:
'Ask the user for a topic, then reply with a haiku about that topic.',
tools: [],
handoffs: [],
});
export const greeterAgent = new RealtimeAgent({
name: 'greeter',
handoffDescription: 'Agent that greets the user.',
instructions:
"Please greet the user and ask them if they'd like a haiku. If yes, hand off to the 'haikuWriter' agent.",
tools: [],
handoffs: [haikuWriterAgent], // Define which agents this agent can hand off to
});
// An Agent Set is just an array of the agents that participate in the scenario
export default [greeterAgent, haikuWriterAgent];
This is a more complex, representative implementation that illustrates a customer service flow, with the following features:
Escalation to o4-mini to validate and initiate a return, as an example high-stakes decision, using a similar pattern to the above.
Configuration in src/app/agentConfigs/customerServiceRetail/index.ts
import authentication from "./authentication";
import returns from "./returns";
import sales from "./sales";
import simulatedHuman from "./simulatedHuman";
import { injectTransferTools } from "../utils";
authentication.downstreamAgents = [returns, sales, simulatedHuman];
returns.downstreamAgents = [authentication, sales, simulatedHuman];
sales.downstreamAgents = [authentication, returns, simulatedHuman];
simulatedHuman.downstreamAgents = [authentication, returns, sales];
const agents = injectTransferTools([
authentication,
returns,
sales,
simulatedHuman,
]);
export default agents;
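The injectTransferTools helper above comes from this repo's utils and its implementation is not shown here, so the following is only a hedged sketch of what such a helper plausibly does: derive a transferAgents tool for each agent from its downstreamAgents list. The AgentConfig field names are assumptions.

```typescript
// Sketch (not the repo's implementation): give every agent that has
// downstream agents a transfer tool listing those destinations.
interface AgentConfig {
  name: string;
  downstreamAgents: AgentConfig[];
  tools: { name: string; description: string }[];
}

function injectTransferToolsSketch(agents: AgentConfig[]): AgentConfig[] {
  for (const agent of agents) {
    if (agent.downstreamAgents.length === 0) continue;
    const destinations = agent.downstreamAgents.map((a) => a.name).join(", ");
    agent.tools.push({
      name: "transferAgents",
      description: `Transfer the user to one of: ${destinations}`,
    });
  }
  return agents;
}
```

This matches the handoff model described above: possible handoffs are defined explicitly in the agent graph, and the model triggers them via a tool call.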
This diagram illustrates a more advanced interaction flow defined in src/app/agentConfigs/customerServiceRetail/, including detailed events.
sequenceDiagram
participant User
participant WebClient as Next.js Client
participant NextAPI as /api/session
participant RealtimeAPI as OpenAI Realtime API
participant AgentManager as Agents (authentication, returns, sales, simulatedHuman)
participant o1mini as "o4-mini" (Escalation Model)
Note over WebClient: User navigates to ?agentConfig=customerServiceRetail
User->>WebClient: Open Page
WebClient->>NextAPI: GET /api/session
NextAPI->>RealtimeAPI: POST /v1/realtime/sessions
RealtimeAPI->>NextAPI: Returns ephemeral session
NextAPI->>WebClient: Returns ephemeral token (JSON)
Note right of WebClient: Start RTC handshake
WebClient->>RealtimeAPI: Offer SDP (WebRTC)
RealtimeAPI->>WebClient: SDP answer
WebClient->>WebClient: DataChannel "oai-events" established
Note over AgentManager: Default agent is "authentication"
User->>WebClient: "Hi, I'd like to return my snowboard."
WebClient->>AgentManager: conversation.item.create (role=user)
WebClient->>RealtimeAPI: {type: "conversation.item.create"}
WebClient->>RealtimeAPI: {type: "response.create"}
authentication->>AgentManager: Requests user info, calls authenticate_user_information()
AgentManager-->>WebClient: function_call => name="authenticate_user_information"
WebClient->>WebClient: handleFunctionCall => verifies details
Note over AgentManager: After user is authenticated
authentication->>AgentManager: transferAgents("returns")
AgentManager-->>WebClient: function_call => name="transferAgents" args={ destination: "returns" }
WebClient->>WebClient: setSelectedAgentName("returns")
Note over returns: The user wants to process a return
returns->>AgentManager: function_call => checkEligibilityAndPossiblyInitiateReturn
AgentManager-->>WebClient: function_call => name="checkEligibilityAndPossiblyInitiateReturn"
Note over WebClient: The WebClient calls /api/chat/completions with model="o4-mini"
WebClient->>o1mini: "Is this item eligible for return?"
o1mini->>WebClient: "Yes/No (plus notes)"
Note right of returns: Returns uses the result from "o4-mini"
returns->>AgentManager: "Return is approved" or "Return is denied"
AgentManager->>WebClient: conversation.item.create (assistant role)
WebClient->>User: Displays final verdict
Add new scenarios to src/app/agentConfigs/index.ts and you should be able to select them in the UI in the "Scenario" dropdown menu.
Tool calls return True by default, unless you define the toolLogic, which will run your specific tool logic and return an object to the conversation (e.g. for retrieved RAG context).
Assistant messages are checked for safety and compliance before they are shown in the UI. The guardrail call now lives directly inside src/app/App.tsx: when a response.text.delta stream starts we mark the message as IN_PROGRESS, and once the server emits guardrail_tripped or response.done we mark the message as FAIL or PASS respectively. If you want to change how moderation is triggered or displayed, search for guardrail_tripped inside App.tsx and tweak the logic there.
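The moderation lifecycle just described can be sketched as a tiny state machine. The event names (response.text.delta, guardrail_tripped, response.done) and statuses (IN_PROGRESS, PASS, FAIL) come from the text above; the reducer itself is an illustrative sketch, not the code in App.tsx.

```typescript
// Guardrail status per assistant message, following the event flow
// described in the README: IN_PROGRESS when the text stream starts,
// FAIL on guardrail_tripped, PASS on response.done.
type GuardrailStatus = "PENDING" | "IN_PROGRESS" | "PASS" | "FAIL";

function nextGuardrailStatus(current: GuardrailStatus, event: string): GuardrailStatus {
  if (current === "PASS" || current === "FAIL") return current; // terminal states
  switch (event) {
    case "response.text.delta":
      return "IN_PROGRESS";
    case "guardrail_tripped":
      return "FAIL";
    case "response.done":
      return "PASS";
    default:
      return current;
  }
}
```

Making PASS and FAIL terminal matters: a response.done arriving after guardrail_tripped must not flip a failed message back to PASS.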
Feel free to open an issue or pull request and we'll do our best to review it. The spirit of this repo is to demonstrate the core logic for new agentic flows; PRs that go beyond this core scope will likely not be merged.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
ready
Auth
mcp, api_key
Streaming
Yes
Data region
global
Protocol support
Requires: mcp, lang:typescript, streaming
Forbidden: none
Guardrails
Operational confidence: medium
curl -s "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/snapshot"
curl -s "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/contract"
curl -s "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/trust"
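The invocation guide below publishes a retry policy (maxAttempts 3, backoff 500/1500/3500 ms, retrying only on HTTP_429, HTTP_503, and NETWORK_TIMEOUT). A minimal client-side sketch honoring that policy, assuming a caller-supplied request function whose errors carry a numeric `status` or a `code` field:

```typescript
// Retry wrapper matching the published retryPolicy: up to 3 attempts
// with 500/1500/3500 ms backoff, retrying only on 429, 503, or timeouts.
const RETRYABLE_STATUS = new Set([429, 503]);
const BACKOFF_MS = [500, 1500, 3500];

async function withRetry<T>(
  attempt: () => Promise<T>,
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < maxAttempts; i++) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err;
      const status = (err as { status?: number }).status;
      const isTimeout = (err as { code?: string }).code === "NETWORK_TIMEOUT";
      // Non-retryable errors (e.g. 400s) propagate immediately.
      if (!isTimeout && (status === undefined || !RETRYABLE_STATUS.has(status))) throw err;
      if (i < maxAttempts - 1) await sleep(BACKOFF_MS[i]);
    }
  }
  throw lastError;
}
```

The injectable `sleep` keeps the backoff testable; in production the default setTimeout-based delay applies.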
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "ready",
"authModes": [
"mcp",
"api_key"
],
"requires": [
"mcp",
"lang:typescript",
"streaming"
],
"forbidden": [],
"supportsMcp": true,
"supportsA2a": false,
"supportsStreaming": true,
"inputSchemaRef": "https://github.com/marcaguilaar/telefonica_neod#input",
"outputSchemaRef": "https://github.com/marcaguilaar/telefonica_neod#output",
"dataRegion": "global",
"contractUpdatedAt": "2026-02-24T19:46:17.251Z",
"sourceUpdatedAt": "2026-02-24T19:46:17.251Z",
"freshnessSeconds": 4436841
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_MCP",
"generatedAt": "2026-04-17T04:13:38.651Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "supported",
"confidenceSource": "contract",
"notes": "Confirmed by capability contract"
}
],
"flattenedTokens": "protocol:MCP|supported|contract"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP",
"href": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:46:17.251Z",
"isPublic": true
},
{
"factKey": "auth_modes",
"category": "compatibility",
"label": "Auth modes",
"value": "mcp, api_key",
"href": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:46:17.251Z",
"isPublic": true
},
{
"factKey": "schema_refs",
"category": "artifact",
"label": "Machine-readable schemas",
"value": "OpenAPI or schema references published",
"href": "https://github.com/marcaguilaar/telefonica_neod#input",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:46:17.251Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Marcaguilaar",
"href": "https://github.com/marcaguilaar/telefonica_neod",
"sourceUrl": "https://github.com/marcaguilaar/telefonica_neod",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:43:14.176Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-marcaguilaar-telefonica-neod/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]