Crawler Summary
Converse MCP Server - Converse with other LLMs with chat and consensus tools. An MCP (Model Context Protocol) server that lets Claude talk to other AI models. Use it to chat with models from OpenAI, Google, Anthropic, X.AI, Mistral, DeepSeek, or OpenRouter. You can either talk to one model at a time or get multiple models to weigh in on complex decisions. Requirements - **Node.js**: Version 20 or higher - **Package Manager**: npm (or pnpm/yarn) - **API Keys**: At leas Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
converse-mcp-server is best for MCP server and AI workflows where MCP compatibility matters.
Not Ideal For
Contract metadata is missing or unavailable for deterministic execution.
Evidence Sources Checked
editorial-content, GITHUB MCP, runtime-metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Trust score
Unknown
Compatibility
MCP
Freshness
Feb 25, 2026
Vendor
Falldownthesystem
Artifacts
0
Benchmarks
0
Last release
2.19.2
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/FallDownTheSystem/converse.git
Setup complexity is MEDIUM. Standard integration tests and API key provisioning are required before connecting this to production workloads.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Falldownthesystem
Protocol compatibility
MCP
Adoption signal
1 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
bash
# Add the server with your API keys
claude mcp add converse \
  -e OPENAI_API_KEY=your_key_here \
  -e GEMINI_API_KEY=your_key_here \
  -e XAI_API_KEY=your_key_here \
  -e ANTHROPIC_API_KEY=your_key_here \
  -e MISTRAL_API_KEY=your_key_here \
  -e DEEPSEEK_API_KEY=your_key_here \
  -e OPENROUTER_API_KEY=your_key_here \
  -e ENABLE_RESPONSE_SUMMARIZATION=true \
  -e SUMMARIZATION_MODEL=gpt-5 \
  -s user \
  npx converse-mcp-server
json
{
"mcpServers": {
"converse": {
"command": "npx",
"args": ["converse-mcp-server"],
"env": {
"OPENAI_API_KEY": "your_key_here",
"GEMINI_API_KEY": "your_key_here",
"XAI_API_KEY": "your_key_here",
"ANTHROPIC_API_KEY": "your_key_here",
"MISTRAL_API_KEY": "your_key_here",
"DEEPSEEK_API_KEY": "your_key_here",
"OPENROUTER_API_KEY": "your_key_here",
"ENABLE_RESPONSE_SUMMARIZATION": "true",
"SUMMARIZATION_MODEL": "gpt-5"
}
}
}
}
json
{
"command": "cmd",
"args": ["/c", "npx", "converse-mcp-server"],
"env": {
"ENABLE_RESPONSE_SUMMARIZATION": "true",
"SUMMARIZATION_MODEL": "gpt-5"
// ... add your API keys here
}
}
javascript
// Synchronous execution (default)
{
"prompt": "How should I structure the authentication module for this Express.js API?",
"model": "gemini-2.5-flash", // Routes to Google
"files": ["/path/to/src/auth.js", "/path/to/config.json"],
"images": ["/path/to/architecture.png"],
"temperature": 0.5,
"reasoning_effort": "medium",
"use_websearch": false
}
// Asynchronous execution (for long-running tasks)
{
"prompt": "Analyze this large codebase and provide optimization recommendations",
"model": "gpt-5",
"files": ["/path/to/large-project"],
"async": true, // Enables background processing
"continuation_id": "my-analysis-task" // Optional: custom ID for tracking
}
// Codex - Agentic coding assistant with local file access
{
"prompt": "Analyze this codebase and suggest improvements",
"model": "codex",
"files": ["/path/to/your/project"],
"async": true // Recommended for Codex (responses take 6-20+ seconds)
}
javascript
// Synchronous consensus (default)
{
"prompt": "Should we use microservices or monolith architecture for our e-commerce platform?",
"models": ["gpt-5", "gemini-2.5-flash", "grok-4"],
"files": ["/path/to/requirements.md"],
"enable_cross_feedback": true,
"temperature": 0.2
}
// Asynchronous consensus (for complex analysis)
{
"prompt": "Review our system architecture and provide comprehensive recommendations",
"models": ["gpt-5", "gemini-2.5-pro", "claude-sonnet-4"],
"files": ["/path/to/architecture-docs"],
"async": true, // Run in background
"enable_cross_feedback": true
}
javascript
// Check status of a specific job
{
"continuation_id": "my-analysis-task"
}
// List recent jobs (shows last 10)
// With summarization enabled, displays titles and final summaries
{}
// Get full conversation history for completed job
{
"continuation_id": "my-analysis-task",
"full_history": true
}
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB MCP
Editorial quality
ready
An MCP (Model Context Protocol) server that lets Claude talk to other AI models. Use it to chat with models from OpenAI, Google, Anthropic, X.AI, Mistral, DeepSeek, or OpenRouter. You can either talk to one model at a time or get multiple models to weigh in on complex decisions.
You need at least one API key from these providers:
| Provider | Where to Get | Example Format |
| ----------------- | ---------------------------------------------------------------------------- | ----------------------- |
| OpenAI | platform.openai.com/api-keys | sk-proj-... |
| Google/Gemini | makersuite.google.com/app/apikey | AIzaSy... |
| X.AI | console.x.ai | xai-... |
| Anthropic | console.anthropic.com | sk-ant-... |
| Mistral | console.mistral.ai | wfBMkWL0... |
| DeepSeek | platform.deepseek.com | sk-... |
| OpenRouter | openrouter.ai/keys | sk-or-... |
| Codex | ChatGPT login (system-wide) | Local agentic assistant |
Note: Codex uses your ChatGPT login (not an API key). If you have an active ChatGPT session, Codex will work automatically. For headless/server deployments, set CODEX_API_KEY in your environment.
# Add the server with your API keys
claude mcp add converse \
-e OPENAI_API_KEY=your_key_here \
-e GEMINI_API_KEY=your_key_here \
-e XAI_API_KEY=your_key_here \
-e ANTHROPIC_API_KEY=your_key_here \
-e MISTRAL_API_KEY=your_key_here \
-e DEEPSEEK_API_KEY=your_key_here \
-e OPENROUTER_API_KEY=your_key_here \
-e ENABLE_RESPONSE_SUMMARIZATION=true \
-e SUMMARIZATION_MODEL=gpt-5 \
-s user \
npx converse-mcp-server
Add this configuration to your Claude Desktop settings:
{
"mcpServers": {
"converse": {
"command": "npx",
"args": ["converse-mcp-server"],
"env": {
"OPENAI_API_KEY": "your_key_here",
"GEMINI_API_KEY": "your_key_here",
"XAI_API_KEY": "your_key_here",
"ANTHROPIC_API_KEY": "your_key_here",
"MISTRAL_API_KEY": "your_key_here",
"DEEPSEEK_API_KEY": "your_key_here",
"OPENROUTER_API_KEY": "your_key_here",
"ENABLE_RESPONSE_SUMMARIZATION": "true",
"SUMMARIZATION_MODEL": "gpt-5"
}
}
}
}
Windows Troubleshooting: If npx converse-mcp-server doesn't work on Windows, try:
{
"command": "cmd",
"args": ["/c", "npx", "converse-mcp-server"],
"env": {
"ENABLE_RESPONSE_SUMMARIZATION": "true",
"SUMMARIZATION_MODEL": "gpt-5"
// ... add your API keys here
}
}
Once installed, you can:
- Use async: true for long-running operations that you can check later
- Type /converse:help in Claude for full documentation

Talk to any AI model with support for files, images, and conversation history. The tool automatically routes your request to the right provider based on the model name. When AI summarization is enabled, generates smart titles and summaries for better context understanding.
// Synchronous execution (default)
{
"prompt": "How should I structure the authentication module for this Express.js API?",
"model": "gemini-2.5-flash", // Routes to Google
"files": ["/path/to/src/auth.js", "/path/to/config.json"],
"images": ["/path/to/architecture.png"],
"temperature": 0.5,
"reasoning_effort": "medium",
"use_websearch": false
}
// Asynchronous execution (for long-running tasks)
{
"prompt": "Analyze this large codebase and provide optimization recommendations",
"model": "gpt-5",
"files": ["/path/to/large-project"],
"async": true, // Enables background processing
"continuation_id": "my-analysis-task" // Optional: custom ID for tracking
}
// Codex - Agentic coding assistant with local file access
{
"prompt": "Analyze this codebase and suggest improvements",
"model": "codex",
"files": ["/path/to/your/project"],
"async": true // Recommended for Codex (responses take 6-20+ seconds)
}
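The model-name routing described above (each request dispatched to the right provider based on the model field) can be sketched as a lookup. The alias and prefix tables here are illustrative assumptions drawn from the alias list later in this README, not the server's actual router:

```javascript
// Hypothetical routing sketch: resolve aliases, then pick a provider
// by model-name prefix, falling back to OpenRouter.
const ALIASES = {
  flash: "gemini-2.5-flash",
  pro: "gemini-2.5-pro",
  grok: "grok-4-0709",
  "grok-4": "grok-4-0709",
};

const PROVIDER_PREFIXES = [
  ["gpt-", "openai"],
  ["gemini-", "google"],
  ["grok-", "xai"],
  ["claude-", "anthropic"],
  ["magistral-", "mistral"],
  ["deepseek-", "deepseek"],
];

function resolveProvider(model) {
  const resolved = ALIASES[model] ?? model;
  const match = PROVIDER_PREFIXES.find(([prefix]) => resolved.startsWith(prefix));
  return { model: resolved, provider: match ? match[1] : "openrouter" };
}
```

So `resolveProvider("flash")` resolves to `gemini-2.5-flash` on the Google provider, while an unknown model like `qwen/qwen3-coder` falls through to OpenRouter.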
Codex Notes:
- Supports continuation across calls (continuation_id)
- Sandbox behavior is configured via the CODEX_SANDBOX_MODE environment variable

Get multiple AI models to analyze the same question simultaneously. Each model can see and respond to the others' answers, creating a rich discussion.
// Synchronous consensus (default)
{
"prompt": "Should we use microservices or monolith architecture for our e-commerce platform?",
"models": ["gpt-5", "gemini-2.5-flash", "grok-4"],
"files": ["/path/to/requirements.md"],
"enable_cross_feedback": true,
"temperature": 0.2
}
// Asynchronous consensus (for complex analysis)
{
"prompt": "Review our system architecture and provide comprehensive recommendations",
"models": ["gpt-5", "gemini-2.5-pro", "claude-sonnet-4"],
"files": ["/path/to/architecture-docs"],
"async": true, // Run in background
"enable_cross_feedback": true
}
Monitor the progress and retrieve results from asynchronous operations. When AI summarization is enabled, provides intelligent summaries of ongoing and completed tasks.
// Check status of a specific job
{
"continuation_id": "my-analysis-task"
}
// List recent jobs (shows last 10)
// With summarization enabled, displays titles and final summaries
{}
// Get full conversation history for completed job
{
"continuation_id": "my-analysis-task",
"full_history": true
}
Cancel running asynchronous operations when needed.
// Cancel a running job
{
"continuation_id": "my-analysis-task"
}
When enabled, the server automatically generates intelligent titles and summaries for better context understanding:
Configuration:
# Enable in your environment
ENABLE_RESPONSE_SUMMARIZATION=true # Default: false
SUMMARIZATION_MODEL=gpt-5-nano # Default: gpt-5-nano
Benefits:
API Key Options:
- GOOGLE_GENAI_USE_VERTEXAI=true with project/location settings

Supported Models:
- Gemini 3.0 Pro (pro, gemini): Enhanced reasoning with thinking levels (1M context, 64K output)
- Gemini 2.5 Flash (flash): Ultra-fast (1M context, 65K output)
- Gemini 2.5 Pro (pro 2.5): Deep reasoning with thinking budget (1M context, 65K output)

Note: Default aliases (gemini, pro) now point to Gemini 3.0 Pro. Use gemini-2.5-pro explicitly if you need version 2.5.
- Grok 4 (grok, grok-4): Latest advanced model (256K context)

Type these commands directly in Claude:
- /converse:help - Full documentation
- /converse:help tools - Tool-specific help (includes async features)
- /converse:help models - Model information
- /converse:help parameters - Configuration details
- /converse:help examples - Usage examples (sync and async)
- /converse:help async - Async execution guide

Create a .env file in your project root:
# Required: At least one API key
OPENAI_API_KEY=sk-proj-your_openai_key_here
GEMINI_API_KEY=your_gemini_api_key_here # Or GOOGLE_API_KEY (GEMINI_API_KEY takes priority)
XAI_API_KEY=xai-your_xai_key_here
ANTHROPIC_API_KEY=sk-ant-your_anthropic_key_here
MISTRAL_API_KEY=your_mistral_key_here
DEEPSEEK_API_KEY=your_deepseek_key_here
OPENROUTER_API_KEY=sk-or-your_openrouter_key_here
# Optional: Server configuration
PORT=3157
LOG_LEVEL=info
# Optional: AI Summarization (Enhanced async status display)
ENABLE_RESPONSE_SUMMARIZATION=true # Enable AI-generated titles and summaries
SUMMARIZATION_MODEL=gpt-5-nano # Model to use for summarization (default: gpt-5-nano)
# Optional: OpenRouter configuration
OPENROUTER_REFERER=https://github.com/FallDownTheSystem/converse
OPENROUTER_TITLE=Converse
OPENROUTER_DYNAMIC_MODELS=true
# Optional: Codex configuration
CODEX_API_KEY=your_codex_api_key_here # Optional if ChatGPT login available
CODEX_SANDBOX_MODE=read-only # read-only (default), workspace-write, danger-full-access
CODEX_SKIP_GIT_CHECK=true # true (default), false
CODEX_APPROVAL_POLICY=never # never (default), untrusted, on-failure, on-request
| Variable | Description | Default | Example |
| ----------- | ------------- | ------- | ------------------------ |
| PORT | Server port | 3157 | 3157 |
| LOG_LEVEL | Logging level | info | debug, info, error |
These must be set in your system environment or when launching Claude Code, NOT in the project .env file:
| Variable | Description | Default | Example |
| ----------------------- | --------------------------- | -------- | ------------------------------------ |
| MAX_MCP_OUTPUT_TOKENS | Token response limit | 25000 | 200000 |
| MCP_TOOL_TIMEOUT | Tool execution timeout (ms) | 120000 | 5400000 (90 min for deep research) |
# Example: Set globally before starting Claude Code
export MAX_MCP_OUTPUT_TOKENS=200000
export MCP_TOOL_TIMEOUT=5400000 # 90 minutes for deep research models
claude # Then start Claude Code
Use "auto" for automatic model selection, or specify exact models:
// Auto-selection (recommended)
"auto";
// Specific models
"gemini-2.5-flash";
"gpt-5";
"grok-4-0709";
// Using aliases
"flash"; // -> gemini-2.5-flash
"pro"; // -> gemini-2.5-pro
"grok"; // -> grok-4-0709
"grok-4"; // -> grok-4-0709
Auto Model Behavior:
["auto"], automatically expands to the first 3 available providersProvider priority order (requires corresponding API key):
gpt-5)gemini-2.5-pro)grok-4)claude-sonnet-4-20250514)magistral-medium-2506)deepseek-reasoner)qwen/qwen3-coder)The system will use the first 3 providers that have valid API keys configured. This enables automatic multi-model consensus without manually specifying models.
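Under the stated priority order, the "auto" expansion can be sketched as a small helper. This is a hypothetical illustration, not the server's actual code; it assumes the environment-variable names from the configuration section:

```javascript
// Priority order of providers, paired with the model each one
// contributes to an auto-expanded consensus (per the list above).
const PROVIDER_PRIORITY = [
  ["OPENAI_API_KEY", "gpt-5"],
  ["GEMINI_API_KEY", "gemini-2.5-pro"],
  ["XAI_API_KEY", "grok-4"],
  ["ANTHROPIC_API_KEY", "claude-sonnet-4-20250514"],
  ["MISTRAL_API_KEY", "magistral-medium-2506"],
  ["DEEPSEEK_API_KEY", "deepseek-reasoner"],
  ["OPENROUTER_API_KEY", "qwen/qwen3-coder"],
];

// Expand "auto" to the first 3 providers whose API key is configured.
function expandAuto(env) {
  return PROVIDER_PRIORITY
    .filter(([key]) => Boolean(env[key]))
    .slice(0, 3)
    .map(([, model]) => model);
}
```

For example, with only OpenAI, X.AI, and DeepSeek keys set, `expandAuto(process.env)` would yield `["gpt-5", "grok-4", "deepseek-reasoner"]`.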
If you've cloned the repository locally:
{
"mcpServers": {
"converse": {
"command": "node",
"args": [
"C:\\Users\\YourUsername\\Documents\\Projects\\converse\\src\\index.js"
],
"env": {
"OPENAI_API_KEY": "your_key_here",
"GEMINI_API_KEY": "your_key_here",
"XAI_API_KEY": "your_key_here",
"ANTHROPIC_API_KEY": "your_key_here",
"MISTRAL_API_KEY": "your_key_here",
"DEEPSEEK_API_KEY": "your_key_here",
"OPENROUTER_API_KEY": "your_key_here"
}
}
}
}
For local development with HTTP transport (optional, for debugging):
First, start the server manually with HTTP transport:
# In a terminal, navigate to the project directory
cd converse
MCP_TRANSPORT=http npm run dev # Starts server on http://localhost:3157/mcp
Then configure Claude to connect to it:
{
"mcpServers": {
"converse-local": {
"url": "http://localhost:3157/mcp"
}
}
}
Important: HTTP transport requires the server to be running before Claude can connect to it. Keep the terminal with the server open while using Claude.
The Claude configuration file is typically located at:
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json

For more detailed instructions, see the official MCP configuration guide.
You can run the server directly without Claude for testing or development:
# Quick run (no installation needed)
npx converse-mcp-server
# Alternative package managers
pnpm dlx converse-mcp-server
yarn dlx converse-mcp-server
For development setup, see the Development section below.
Server won't start:
- node --version (needs v20+)
- PORT=3001 npm start

API key errors:
- npm run test:real-api

Module import errors:
- npm run clean

# Enable debug logging
LOG_LEVEL=debug npm run dev
# Start with debugger
npm run debug
# Trace all operations
LOG_LEVEL=trace npm run dev
# Clone the repository
git clone https://github.com/FallDownTheSystem/converse.git
cd converse
npm install
# Copy environment file and add your API keys
cp .env.example .env
# Start development server
npm run dev
# Server management
npm start # Start server (auto-kills existing server on port 3157)
npm run start:clean # Start server without killing existing processes
npm run start:port # Start server on port 3001 (avoids port conflicts)
npm run dev # Development with hot reload (auto-kills existing server)
npm run dev:clean # Development without killing existing processes
npm run dev:port # Development on port 3001 (avoids port conflicts)
npm run dev:quiet # Development with minimal logging
npm run kill-server # Kill any server running on port 3157
# Testing
npm test # Run all tests
npm run test:unit # Unit tests only
npm run test:integration # Integration tests
npm run test:e2e # End-to-end tests (requires API keys)
# Integration test subcategories
npm run test:integration:mcp # MCP protocol tests
npm run test:integration:tools # Tool integration tests
npm run test:integration:providers # Provider integration tests
npm run test:integration:performance # Performance tests
npm run test:integration:general # General integration tests
# Other test categories
npm run test:mcp-client # MCP client tests (HTTP-based)
npm run test:providers # Provider unit tests
npm run test:tools # Tool tests
npm run test:coverage # Coverage report
npm run test:watch # Run tests in watch mode
# Code quality
npm run lint # Check code style
npm run lint:fix # Fix code style issues
npm run format # Format code with Prettier
npm run validate # Full validation (lint + test)
# Utilities
npm run build # Build for production
npm run debug # Start with debugger
npm run check-deps # Check for outdated dependencies
npm run kill-server # Kill any server running on port 3157
Port conflicts: The server uses port 3157 by default. If you get an "EADDRINUSE" error:
- Run npm run kill-server to free the port
- Or start on a different port: PORT=3001 npm start

Transport Modes:
- HTTP: MCP_TRANSPORT=http npm run dev

After setting up your API keys in .env:
# Run end-to-end tests
npm run test:e2e
# Test specific providers
npm run test:integration:providers
# Full validation
npm run validate
After installation, run these tests to verify everything works:
npm start # Should show startup message
npm test # Should pass all unit tests
npm run validate # Full validation suite
converse/
├── src/
│   ├── index.js              # Main server entry point
│   ├── config.js             # Configuration management
│   ├── router.js             # Central request dispatcher
│   ├── continuationStore.js  # State management
│   ├── systemPrompts.js      # Tool system prompts
│   ├── providers/            # AI provider implementations
│   │   ├── index.js          # Provider registry
│   │   ├── interface.js      # Unified provider interface
│   │   ├── openai.js         # OpenAI provider
│   │   ├── xai.js            # XAI provider
│   │   ├── google.js         # Google provider
│   │   ├── anthropic.js      # Anthropic provider
│   │   ├── mistral.js        # Mistral AI provider
│   │   ├── deepseek.js       # DeepSeek provider
│   │   ├── openrouter.js     # OpenRouter provider
│   │   └── openai-compatible.js # Base for OpenAI-compatible APIs
│   ├── tools/                # MCP tool implementations
│   │   ├── index.js          # Tool registry
│   │   ├── chat.js           # Chat tool
│   │   └── consensus.js      # Consensus tool
│   └── utils/                # Utility modules
│       ├── contextProcessor.js # File/image processing
│       ├── errorHandler.js   # Error handling
│       └── logger.js         # Logging utilities
├── tests/                    # Comprehensive test suite
├── docs/                     # API and architecture docs
└── package.json              # Dependencies and scripts
Note: This section is for maintainers. The package is already published as
converse-mcp-server.
# 1. Ensure clean working directory
git status
# 2. Run full validation
npm run validate
# 3. Test package contents
npm pack --dry-run
# 4. Test bin script
node bin/converse.js --help
# 5. Bump version (choose one)
npm version patch # Bug fixes: 1.0.1 → 1.0.2
npm version minor # New features: 1.0.1 → 1.1.0
npm version major # Breaking changes: 1.0.1 → 2.0.0
# 6. Test publish (dry run)
npm publish --dry-run
# 7. Publish to npm
npm publish
# 8. Verify publication
npm view converse-mcp-server
npx converse-mcp-server --help
- Patch (npm version patch): Bug fixes, documentation updates, minor improvements
- Minor (npm version minor): New features, new model support, new tool capabilities
- Major (npm version major): Breaking API changes, major architecture changes

After publishing, update installation instructions if needed and verify:
# Test direct execution
npx converse-mcp-server
npx converse
# Test MCP client integration
# Update Claude Desktop config to use: "npx converse-mcp-server"
- npm view converse-mcp-server versions
- npm whoami
- git checkout -b feature/amazing-feature
- npm run validate
- git commit -m 'Add amazing feature'
- git push origin feature/amazing-feature

# Fork and clone your fork
git clone https://github.com/yourusername/converse.git
cd converse
# Install dependencies
npm install
# Create feature branch
git checkout -b feature/your-feature
# Make changes and test
npm run validate
# Commit and push
git add .
git commit -m "Description of changes"
git push origin feature/your-feature
This MCP Server was inspired by and builds upon the excellent work from BeehiveInnovations/zen-mcp-server.
MIT License - see LICENSE file for details.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/snapshot"
curl -s "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/contract"
curl -s "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_MCP",
"generatedAt": "2026-04-17T03:51:35.307Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "mcp",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "server",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "ai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "chat",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "consensus",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "openai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "google",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "gemini",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "grok",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "cli",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:MCP|unknown|profile capability:mcp|supported|profile capability:server|supported|profile capability:ai|supported|profile capability:chat|supported|profile capability:consensus|supported|profile capability:openai|supported|profile capability:google|supported|profile capability:gemini|supported|profile capability:grok|supported|profile capability:cli|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Falldownthesystem",
"href": "https://github.com/FallDownTheSystem/converse#readme",
"sourceUrl": "https://github.com/FallDownTheSystem/converse#readme",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T02:58:20.683Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP",
"href": "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T02:58:20.683Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "1 GitHub stars",
"href": "https://github.com/FallDownTheSystem/converse",
"sourceUrl": "https://github.com/FallDownTheSystem/converse",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T02:58:20.683Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-falldownthesystem-converse/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub ยท GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]