Crawler Summary

model-router answer-first brief

Self-aware multi-provider model routing for OpenClaw. Auto-detects your available models, recommends the best routing mode, and adapts per task. Claude, Gemini, GPT, DeepSeek: benchmarks are routing tables, not leaderboards. Version 2.2.0; homepage: https://github.com/chandika/openclaw-model-router. Capability contract not published. No trust telemetry is available yet. 14 GitHub stars reported by the source. Last updated 3/1/2026.

Freshness

Last checked 3/1/2026

Best For

model-router is best for multi-model routing workflows where MCP and OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Claim this agent
Agent Dossier · GitHub · Safety: 100/100

model-router

Self-aware multi-provider model routing for OpenClaw. Auto-detects your available models, recommends the best routing mode, and adapts per task. Claude, Gemini, GPT, DeepSeek: benchmarks are routing tables, not leaderboards. Version 2.2.0 · https://github.com/chandika/openclaw-model-router

MCP: self-declared
OpenClaw: self-declared

Public facts

5

Change events

1

Artifacts

0

Freshness

Mar 1, 2026

Verified · editorial-content · No verified compatibility signals · 14 GitHub stars

Capability contract not published. No trust telemetry is available yet. 14 GitHub stars reported by the source. Last updated 3/1/2026.

14 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

MCP, OpenClaw

Freshness

Mar 1, 2026

Vendor

Chandika

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 14 GitHub stars reported by the source. Last updated 3/1/2026.

Setup snapshot

git clone https://github.com/chandika/openclaw-model-router.git

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Chandika

profile · medium confidence
Observed Mar 1, 2026
Compatibility (1)

Protocol compatibility

MCP, OpenClaw

contract · medium confidence
Observed Mar 1, 2026
Adoption (1)

Adoption signal

14 GitHub stars

profile · medium confidence
Observed Mar 1, 2026
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed: unknown
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

json

{
  "lastScan": "2026-02-18T08:00:00Z",
  "models": {
    "anthropic/claude-opus-4-6": {
      "provider": "anthropic",
      "name": "Claude Opus 4.6",
      "addedAt": "2026-02-18",
      "pricing": { "input": 15.00, "output": 75.00, "unit": "1M tokens" },
      "context": 200000,
      "strengths": ["deep reasoning", "novel problems", "hard search", "complex coding"],
      "weaknesses": ["expensive", "slower"],
      "benchmarks": {
        "swe-bench": 80.8,
        "osworld": 72.7,
        "arc-agi-2": 75.2,
        "gpqa-diamond": 74.5,
        "gdpval-aa": 1559,
        "hle": 26.3
      },
      "routeTo": ["architecture", "deep-debugging", "novel-reasoning", "hard-search"],
      "tier": "premium"
    }
  },
  "routingRules": {
    "computer-use": "anthropic/claude-sonnet-4-6",
    "deep-reasoning": "anthropic/claude-opus-4-6",
    "office-finance": "anthropic/claude-sonnet-4-6",
    "standard-coding": "anthropic/claude-sonnet-4-6",
    "drafts-summaries": "cheapest-available",
    "hard-coding": "anthropic/claude-opus-4-6"
  }
}

text

🧭 New model detected: [model name]
      
      Provider: [provider]
      Pricing: $X input / $Y output per 1M tokens
      Context: [N] tokens
      Tier: [tier]
      
      Key benchmarks:
      - SWE-bench: XX% (current best: YY% from [model])
      - [other relevant benchmarks]
      
      Routing recommendation:
      - [task type]: This model beats [current model] by X points. Switch?
      - [task type]: Close to [current model] but 3× cheaper. Consider for subagents?
      
      Want me to update routing? Or keep current setup?

json

{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-opus-4-6" },
      "subagents": { "model": "anthropic/claude-sonnet-4-6" }
    }
  }
}

json

{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-sonnet-4-6" },
      "subagents": { "model": "google/gemini-2.5-pro" }
    }
  }
}

json

{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-sonnet-4-6" },
      "subagents": { "model": "openai/gpt-4o" }
    }
  }
}

json

{
  "agents": {
    "defaults": {
      "model": { "primary": "google/gemini-2.5-pro" },
      "subagents": { "model": "google/gemini-2.5-flash" }
    }
  }
}

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: model-router
description: Self-aware multi-provider model routing for OpenClaw. Auto-detects your available models, recommends the best routing mode, and adapts per task. Claude, Gemini, GPT, DeepSeek: benchmarks are routing tables, not leaderboards.
version: 2.2.0
homepage: https://github.com/chandika/openclaw-model-router
metadata: {"clawdbot":{"emoji":"🧭"}}

Model Router for OpenClaw

Route the right model to the right job. Auto-detects what you have, tells you what to use, adapts when you say "work harder" or "save money."

Security & Privacy

  • This skill does NOT read, store, or transmit API keys or credentials. It only reads provider names and model IDs from your gateway config to determine what's available.
  • No automatic scanning. All model detection and web searches are user-triggered only. The skill never runs on load or heartbeat unless you explicitly ask.
  • Web searches are used to fetch public benchmark data and pricing from model card pages when you add a new provider. This is outbound network activity you should expect.
  • Local state: The skill writes model-registry.json to your workspace (benchmark scores, pricing, routing rules). No secrets are stored in this file.

Step 0: Model Registry (Self-Learning)

This skill maintains a living model registry at model-registry.json in the workspace. This is how the router learns about new models automatically.

Registry File Format

{
  "lastScan": "2026-02-18T08:00:00Z",
  "models": {
    "anthropic/claude-opus-4-6": {
      "provider": "anthropic",
      "name": "Claude Opus 4.6",
      "addedAt": "2026-02-18",
      "pricing": { "input": 15.00, "output": 75.00, "unit": "1M tokens" },
      "context": 200000,
      "strengths": ["deep reasoning", "novel problems", "hard search", "complex coding"],
      "weaknesses": ["expensive", "slower"],
      "benchmarks": {
        "swe-bench": 80.8,
        "osworld": 72.7,
        "arc-agi-2": 75.2,
        "gpqa-diamond": 74.5,
        "gdpval-aa": 1559,
        "hle": 26.3
      },
      "routeTo": ["architecture", "deep-debugging", "novel-reasoning", "hard-search"],
      "tier": "premium"
    }
  },
  "routingRules": {
    "computer-use": "anthropic/claude-sonnet-4-6",
    "deep-reasoning": "anthropic/claude-opus-4-6",
    "office-finance": "anthropic/claude-sonnet-4-6",
    "standard-coding": "anthropic/claude-sonnet-4-6",
    "drafts-summaries": "cheapest-available",
    "hard-coding": "anthropic/claude-opus-4-6"
  }
}
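For TypeScript tooling (the language listed in this agent's artifacts), the registry shape above can be captured as an interface. This is an illustrative sketch derived from the sample JSON only; the interface and function names are not published by the skill.

```typescript
// Illustrative types mirroring the model-registry.json sample above.
// Field names follow the example; nothing here is an official schema.
interface ModelEntry {
  provider: string;
  name: string;
  addedAt: string; // ISO date the model entered the registry
  pricing: { input: number; output: number; unit: string };
  context: number;
  strengths: string[];
  weaknesses: string[];
  benchmarks: Record<string, number>;
  routeTo: string[];
  tier: "premium" | "mid" | "economy" | "free";
}

interface ModelRegistry {
  lastScan: string;
  models: Record<string, ModelEntry>;
  routingRules: Record<string, string>;
}

// Parse registry JSON with a minimal sanity check on the two top-level maps.
function loadRegistry(json: string): ModelRegistry {
  const reg = JSON.parse(json) as ModelRegistry;
  if (!reg.models || !reg.routingRules) {
    throw new Error("model-registry.json is missing models or routingRules");
  }
  return reg;
}
```

A typed loader like this makes the diff-and-update steps later in the flow easier to write safely.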

New Model Detection Flow

When to scan: Only when the user explicitly asks (e.g., "check for new models," "scan models," "what models do I have"). Never on skill load. Never on heartbeat.

How it works:

  1. Read current config - gateway config.get to get all configured providers and models

  2. Diff against registry - compare config models vs model-registry.json

  3. For each NEW model found:

    a. Fetch the model card - web search for "[model name] benchmarks pricing model card [year]"

    b. Extract key data:

    • Pricing (input/output per 1M tokens)
    • Context window size
    • Benchmark scores (prioritize: SWE-bench, OSWorld, GPQA, ARC-AGI-2, GDPval-AA, HLE, MATH-500)
    • Strengths and weaknesses from reviews

    c. Classify the model into a tier:

    • premium - $10+ per 1M input (Opus-class)
    • mid - $1-10 per 1M input (Sonnet, GPT-4o, Gemini Pro class)
    • economy - $0.10-1 per 1M input (Flash, DeepSeek class)
    • free - free tier or negligible cost

    d. Determine routing slots - based on benchmarks, where does this model beat existing options?

    • Compare each benchmark score against current best-in-slot
    • If new model beats current router choice on a benchmark by >3pts, flag it
    • If new model is cheaper AND within 2pts, flag it as cost-efficient alternative

    e. Update registry - write model entry to model-registry.json

    f. Notify user:

    🧭 New model detected: [model name]
    
    Provider: [provider]
    Pricing: $X input / $Y output per 1M tokens
    Context: [N] tokens
    Tier: [tier]
    
    Key benchmarks:
    - SWE-bench: XX% (current best: YY% from [model])
    - [other relevant benchmarks]
    
    Routing recommendation:
    - [task type]: This model beats [current model] by X points. Switch?
    - [task type]: Close to [current model] but 3× cheaper. Consider for subagents?
    
    Want me to update routing? Or keep current setup?
    
  4. Only apply changes with user permission. Always ask first.
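Steps 2 and 3c above (diffing the config against the registry, and bucketing a new model by input price) can be sketched in TypeScript. The function names are illustrative, and the threshold boundaries are taken directly from the tier table:

```typescript
// Tier buckets from step 3c, keyed on input price per 1M tokens.
type Tier = "premium" | "mid" | "economy" | "free";

function classifyTier(inputPricePer1M: number): Tier {
  if (inputPricePer1M >= 10) return "premium";   // Opus-class
  if (inputPricePer1M >= 1) return "mid";        // Sonnet / GPT-4o / Gemini Pro class
  if (inputPricePer1M >= 0.10) return "economy"; // Flash / DeepSeek class
  return "free";
}

// Step 2: models present in the gateway config but absent from the
// registry are the ones that need a model-card lookup.
function newModels(configIds: string[], registryIds: string[]): string[] {
  const known = new Set(registryIds);
  return configIds.filter((id) => !known.has(id));
}
```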

Routing Rule Updates

When the user approves a routing change for a new model:

  1. Update model-registry.json routing rules
  2. Apply config via gateway config.patch if it's a permanent change
  3. Log the change to daily memory file

When a model is removed from config:

  1. Don't delete from registry (keep benchmark data for reference)
  2. Re-route any tasks that pointed to the removed model → next best available
  3. Notify user: "Model X was removed. Rerouted [task types] to [model Y]."

Keeping Data Fresh

  • Benchmark data ages. When a model entry is >90 days old, flag it for refresh on next scan.
  • New model versions. If a model ID changes (e.g., gemini-2.5-pro → gemini-3-pro), treat the new one as a new model. Don't assume scores carry over.
  • Web search for updates. When refreshing, search for "[model name] latest benchmarks [current year]" and update scores.
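The 90-day staleness rule above is a one-liner; a minimal sketch, assuming addedAt is an ISO date string as in the registry example (the helper name is ours):

```typescript
// Flag registry entries older than 90 days for a benchmark refresh.
const STALE_AFTER_DAYS = 90;

function isStale(addedAt: string, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - new Date(addedAt).getTime();
  return ageMs > STALE_AFTER_DAYS * 24 * 60 * 60 * 1000;
}
```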

Step 1: Detect What's Available

When the user asks to check models or set up routing, check the OpenClaw config to determine which providers and models are available:

  1. Run gateway config.get or read openclaw.json

  2. Check agents.defaults.model.primary - what's the current main model?

  3. Check agents.defaults.subagents.model - what's the current subagent model?

  4. Check which providers are configured (by provider name and model ID only - do not read or inspect API keys, tokens, or auth credentials)

  5. Report to user: "You have [X, Y, Z] available. Currently running [model] main / [model] subagents. Recommended mode: [mode]. Want me to apply it?"

Don't assume. Check first, recommend second, apply only with permission.
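Steps 2 and 3 read two specific config paths. A sketch, assuming an openclaw.json-shaped object; only the paths themselves come from the steps above, the type and function names are illustrative:

```typescript
// Just enough of the config shape to reach the two paths the skill checks.
interface OpenClawConfig {
  agents?: {
    defaults?: {
      model?: { primary?: string };
      subagents?: { model?: string };
    };
  };
}

// Pull the current main and subagent model IDs, tolerating missing keys.
function currentModels(cfg: OpenClawConfig): { main?: string; subagents?: string } {
  return {
    main: cfg.agents?.defaults?.model?.primary,
    subagents: cfg.agents?.defaults?.subagents?.model,
  };
}
```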

Step 2: Pick a Mode

Three modes. User picks one, or you recommend based on what's available.

๐Ÿ† Performance โ€” "Work hard"

Best results. Claude-only. Rate limits will feel it.

| Role | Model | Cost/1M (in/out) | |------|-------|-------------------| | Main | Opus 4.6 | $15 / $75 | | Subagents | Sonnet 4.6 | $3 / $15 |

When to recommend: User has Claude Max/API. Says "best quality," "don't cut corners," "work hard." Critical work โ€” architecture, deep debugging, novel problems.

{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-opus-4-6" },
      "subagents": { "model": "anthropic/claude-sonnet-4-6" }
    }
  }
}

โš–๏ธ Balanced โ€” "Normal" (recommended default)

Smart routing. Good quality. Rate limits survive the week.

| Role | Model | Cost/1M (in/out) | |------|-------|-------------------| | Main | Sonnet 4.6 | $3 / $15 | | Subagents | Gemini 2.5 Pro | $1.25 / $10 |

When to recommend: User has Claude + Google keys. Most daily work. Coding, research, content, office tasks. Sonnet handles main session perfectly; Gemini does background work at 2.4ร— less.

{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-sonnet-4-6" },
      "subagents": { "model": "google/gemini-2.5-pro" }
    }
  }
}

Variant - Claude + OpenAI:

{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-sonnet-4-6" },
      "subagents": { "model": "openai/gpt-4o" }
    }
  }
}

💰 Economy - "Save money"

Minimum spend. High volume. Quality is good enough.

| Role | Model | Cost/1M (in/out) |
|------|-------|------------------|
| Main | Gemini 2.5 Pro | $1.25 / $10 |
| Subagents | Gemini 2.5 Flash | $0.18 / $0.75 |

When to recommend: User is API-only, high volume, cost constrained. Or says "save money," "be efficient," "economy mode."

{
  "agents": {
    "defaults": {
      "model": { "primary": "google/gemini-2.5-pro" },
      "subagents": { "model": "google/gemini-2.5-flash" }
    }
  }
}

Ultra-economy variant (DeepSeek subagents):

{
  "agents": {
    "defaults": {
      "model": { "primary": "google/gemini-2.5-pro" },
      "subagents": { "model": "openrouter/deepseek/deepseek-v3.2" }
    }
  }
}

Step 3: Adaptive Triggers

Listen for these signals and suggest mode changes (don't auto-apply):

| User Says | Action |
|-----------|--------|
| "work harder" / "try harder" / "best quality" | Suggest switching to Performance mode or /model anthropic/claude-opus-4-6 for this session |
| "save money" / "be cheaper" / "economy" | Suggest switching to Economy mode |
| "normal" / "balanced" / "default" | Suggest switching to Balanced mode |
| "use opus for this" | Apply /model anthropic/claude-opus-4-6 for current session only |
| "use gemini" / "use google" | Apply /model google/gemini-2.5-pro for current session only |
| "use deepseek" | Apply /model openrouter/deepseek/deepseek-v3.2 for current session only |
| "reset" / "back to normal" | Apply /model default to revert to config default |

Per-session vs permanent: /model X changes the current session only. Config changes via gateway config.patch are permanent across sessions.
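The trigger table can be approximated with a small matcher. The phrase patterns and return shapes here are illustrative; per the table, mode switches are suggestions only, while "use X" phrases become session-only /model overrides.

```typescript
// What the router should do for a given user utterance, per the trigger table.
type Suggestion =
  | { kind: "suggest-mode"; mode: "performance" | "balanced" | "economy" }
  | { kind: "session-model"; model: string }
  | { kind: "reset" }
  | null;

function matchTrigger(text: string): Suggestion {
  const t = text.toLowerCase();
  // Check reset first so "back to normal" doesn't match the "normal" trigger.
  if (/\breset\b|back to normal/.test(t)) return { kind: "reset" };
  if (/work harder|try harder|best quality/.test(t)) return { kind: "suggest-mode", mode: "performance" };
  if (/save money|be cheaper|\beconomy\b/.test(t)) return { kind: "suggest-mode", mode: "economy" };
  if (/\bnormal\b|\bbalanced\b|\bdefault\b/.test(t)) return { kind: "suggest-mode", mode: "balanced" };
  if (/use opus/.test(t)) return { kind: "session-model", model: "anthropic/claude-opus-4-6" };
  if (/use (gemini|google)/.test(t)) return { kind: "session-model", model: "google/gemini-2.5-pro" };
  if (/use deepseek/.test(t)) return { kind: "session-model", model: "openrouter/deepseek/deepseek-v3.2" };
  return null;
}
```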


Step 4: Task-Specific Hard Routes

Regardless of mode, some tasks have clear winners. Override automatically when the task type is obvious:

| Task Type | Always Use | Why | Override How |
|-----------|-----------|-----|--------------|
| Computer use / browser | Claude (Sonnet or Opus) | 72.5% OSWorld vs GPT's 38.2% (34pt gap) | If in economy mode using Gemini, warn user: "Computer use tasks perform significantly better on Claude. Switch for this task?" |
| Deep reasoning / novel problems | Opus 4.6 | 75.2% ARC-AGI-2 vs Sonnet's 58.3% (17pt gap) | Suggest Opus when the problem is genuinely novel or requires multi-step deduction |
| Office / financial / spreadsheets | Sonnet 4.6 | 1633 Elo GDPval-AA, beats Opus (1559) and GPT (1524) | Sonnet is actually the best here, even better than Opus |
| Simple drafts / summaries / formatting | Cheapest available | Don't burn premium tokens on grunt work | Route to subagent model or suggest DeepSeek |
| Coding (standard) | Sonnet 4.6 or Opus 4.6 | SWE-bench 79.6% / 80.8%, Claude dominates | Either Claude model; avoid GPT/Gemini for complex code |
| Coding (hard debugging, architecture) | Opus 4.6 | Terminal-Bench gap: 62.7% vs 59.1% | Suggest Opus for the hard 20% |

The key insight: Don't route everything through one model. Even within a session, suggest model switches when the task type changes significantly.


Benchmark Tables

Cross-provider data, Feb 2026. This is your routing reference.

How to Read These

Each row is a routing decision, not a ranking. A 2-point gap is noise: route by cost. A 17-point gap is signal: route by capability. A 34-point gap is a hard rule: never use the losing model for that task.
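That reading rule reduces to a tiny decision function. The cutoffs here (under 3 points is noise, 30 or more is a hard rule) are our reading of the 2/17/34-point examples, not published thresholds:

```typescript
// Interpret a benchmark gap (in points) per the routing-table philosophy.
// Cutoffs are illustrative readings of "2-pt noise" and "34-pt hard rule".
function interpretGap(gapPts: number): "route-by-cost" | "route-by-capability" | "hard-rule" {
  if (gapPts < 3) return "route-by-cost";       // noise: pick the cheaper model
  if (gapPts >= 30) return "hard-rule";         // never use the losing model
  return "route-by-capability";                 // signal: pick the stronger model
}
```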

Coding

| Benchmark | Sonnet 4.6 | Opus 4.6 | GPT-5.2 | Gemini 2.5 Pro |
|-----------|------------|----------|---------|----------------|
| SWE-bench Verified | 79.6% | 80.8% | 77.0% | ~75% |
| Terminal-Bench 2.0 | 59.1% | 62.7% | 46.7% | n/a |

→ Claude territory. Sonnet for standard coding, Opus for hard debugging. GPT/Gemini lag 3-5pts.

Computer Use

| Benchmark | Sonnet 4.6 | Opus 4.6 | GPT-5.2 |
|-----------|------------|----------|---------|
| OSWorld-Verified | 72.5% | 72.7% | 38.2% |
| Pace Insurance | 94% | n/a | n/a |

→ Hard rule. Claude for all computer use. The 34-point gap over GPT is not a preference; it's a different league.

Reasoning

| Benchmark | Sonnet 4.6 | Opus 4.6 | GPT-5.2 |
|-----------|------------|----------|---------|
| GPQA Diamond | 74.1% | 74.5% | 73.8% |
| ARC-AGI-2 | 58.3% | 75.2% | n/a |
| Humanity's Last Exam | 19.1% | 26.3% | 20.3% |
| MATH-500 | 97.8% | 97.6% | 97.4% |

→ GPQA and MATH: tied across all three, so route by cost. ARC-AGI-2 and HLE: Opus only.

Office & Domain

| Benchmark | Sonnet 4.6 | Opus 4.6 | GPT-5.2 |
|-----------|------------|----------|---------|
| GDPval-AA (Office Elo) | 1633 | 1559 | 1524 |
| Finance Agent | 63.3% | 62.0% | 60.7% |
| MCP-Atlas Tool Use | 61.3% | 60.3% | n/a |

→ Sonnet's domain. Beats everything on office work, finance, and tool coordination. Even beats Opus.

Pricing (per 1M tokens)

| Model | Input | Output | OpenClaw Provider | Relative |
|-------|-------|--------|-------------------|----------|
| DeepSeek V3.2 | $0.14 | $0.28 | openrouter/deepseek/deepseek-v3.2 | 107× cheaper than Opus (in) |
| Gemini 2.5 Flash | $0.18 | $0.75 | google/gemini-2.5-flash | 100× cheaper than Opus (out) |
| Grok 4.1 Fast | $0.20 | $0.50 | xai/grok-4.1-fast | 75× cheaper than Opus (in) |
| Gemini 2.5 Pro | $1.25 | $10.00 | google/gemini-2.5-pro | 12× cheaper than Opus (in) |
| Sonnet 4.6 | $3.00 | $15.00 | anthropic/claude-sonnet-4-6 | 5× cheaper than Opus |
| GPT-4o | $5.00 | $15.00 | openai/gpt-4o | 3× cheaper than Opus (in) |
| GPT-5.2 | n/a | n/a | openai/gpt-5.2 | n/a |
| Opus 4.6 | $15.00 | $75.00 | anthropic/claude-opus-4-6 | Baseline (most expensive) |


Provider Detection

When checking what's available, use gateway config.get and look at the configured provider names and model IDs. Do not read or inspect API keys, tokens, or auth credentials. You only need to know which providers are configured, not how they authenticate.

Check models.providers in config for custom setups.

Fallback logic: If only Anthropic is available → recommend Performance mode. If Anthropic + Google → Balanced. If Google only → Economy. If everything → Balanced (best default).
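The fallback logic maps directly to a function. "anthropic" and "google" follow the provider IDs used elsewhere in this README; any combination the text does not name falls back to Balanced, the stated best default.

```typescript
// Recommend a routing mode from the set of configured provider IDs.
type Mode = "performance" | "balanced" | "economy";

function recommendMode(providers: string[]): Mode {
  const set = new Set(providers);
  if (set.has("anthropic") && set.has("google")) return "balanced";
  if (set.has("anthropic")) return "performance"; // only Anthropic available
  if (set.has("google")) return "economy";        // Google without Anthropic
  return "balanced";                              // anything else: best default
}
```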


Prerequisites

  • OpenClaw v2026.2.17 or later - required for Sonnet 4.6 in model registry
    • Docker: docker pull openclaw/openclaw:latest
    • Git: openclaw update
  • At least one provider with auth configured
  • For multi-provider modes: configure additional provider API keys in OpenClaw

Switching Modes

Per session: /model google/gemini-2.5-pro, then /model default to revert

Permanently: Ask agent to apply via gateway config.patch, or edit openclaw.json and restart

Quick commands the agent should understand:

  • "Switch to performance mode" โ†’ apply Performance config
  • "Switch to economy mode" โ†’ apply Economy config
  • "Go balanced" โ†’ apply Balanced config
  • "Use opus for this" โ†’ /model session override only
  • "Back to normal" โ†’ /model default

Initial Registry Seed

On first run (no model-registry.json exists), the skill should:

  1. Create model-registry.json with the benchmark data from the tables above
  2. Scan current config to mark which models are actually available
  3. Give the user a full status report:
🧭 Model Router initialized.

Available providers: Anthropic ✅, Google ✅, OpenAI ❌, xAI ❌
Available models: Opus 4.6, Sonnet 4.6, Gemini 2.5 Pro, Gemini 2.5 Flash

Current config: Opus main / Sonnet subagents (Performance mode)
Recommended: Balanced mode - Sonnet main / Gemini Pro subagents
  → Saves 2.4× on subagent costs, same quality for background tasks

Apply balanced mode? [yes/no]

  4. Seed the registry with all models from the benchmark tables, even ones not currently configured - this gives the agent comparison data when new models appear later

The Philosophy

Benchmarks are routing tables, not leaderboards. A 2-point gap is noise. A 34-point gap is a hard rule.

The right model for the job depends on the job. The skill's job is to know what you have, know what each model is good at, and route accordingly.

Give an agent a selection of models and a framework for choosing. It picks well. That's what this enables.

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

MCP: self-declared
OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/snapshot"
curl -s "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/contract"
curl -s "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITLAB_AI_CATALOG · gitlab-mcp

Rank

83

A Model Context Protocol (MCP) server for GitLab

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_PUBLIC_PROJECTS · gitlab-mcp

Rank

80

A Model Context Protocol (MCP) server for GitLab

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_AI_CATALOG · rmcp-openapi

Rank

74

Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_AI_CATALOG · rmcp-actix-web

Rank

72

An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)

Traction

No public download signal

Freshness

Updated 2d ago

MCP
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "MCP",
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T05:54:21.568Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
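The retryPolicy block above can be honored with two pure helpers: a backoff-schedule lookup and a retryable-condition check. The actual HTTP call is omitted and the helper names are illustrative, but the constants mirror the JSON exactly.

```typescript
// Constants copied from the retryPolicy in the invocation guide.
const BACKOFF_MS = [500, 1500, 3500];
const RETRYABLE = new Set(["HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"]);

// Delay before the next try after attempt n (1-based) fails;
// null once maxAttempts is exhausted.
function backoffFor(attempt: number, maxAttempts = 3): number | null {
  if (attempt >= maxAttempts) return null;
  return BACKOFF_MS[Math.min(attempt - 1, BACKOFF_MS.length - 1)];
}

function isRetryable(condition: string): boolean {
  return RETRYABLE.has(condition);
}
```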

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "MCP",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "models",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "current",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:MCP|unknown|profile protocol:OPENCLEW|unknown|profile capability:models|supported|profile capability:current|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Chandika",
    "href": "https://github.com/chandika/openclaw-model-router",
    "sourceUrl": "https://github.com/chandika/openclaw-model-router",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-03-01T06:03:08.388Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "MCP, OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-03-01T06:03:08.388Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "14 GitHub stars",
    "href": "https://github.com/chandika/openclaw-model-router",
    "sourceUrl": "https://github.com/chandika/openclaw-model-router",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-03-01T06:03:08.388Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/chandika-openclaw-model-router/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub ยท GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
