Crawler Summary

zeroapi answer-first brief

Route tasks to the best AI model across paid subscriptions (Claude, ChatGPT, Codex, Gemini, Kimi) via the OpenClaw gateway. Use when the user mentions model routing, multi-model setup, "use Codex for this", "delegate to Gemini", "route to the best model", or agent delegation, or has OpenClaw agents configured with multiple providers. Do NOT use for single-model conversations or general chat. (Manifest: name: zeroapi, version: 2.3.0.) Published capability contract available. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 3/1/2026.

Freshness

Last checked 3/1/2026

Best For

Contract is available with explicit auth and schema references.

Not Ideal For

zeroapi is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.

Evidence Sources Checked

editorial-content, capability-contract, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 89/100

zeroapi

Route tasks to the best AI model across paid subscriptions (Claude, ChatGPT, Codex, Gemini, Kimi) via the OpenClaw gateway. Use when the user mentions model routing, multi-model setup, "use Codex for this", "delegate to Gemini", "route to the best model", or agent delegation, or has OpenClaw agents configured with multiple providers. Do NOT use for single-model conversations or general chat. (name: zeroapi, version: 2.3.0)

OpenClaw (self-declared)

Public facts

7

Change events

1

Artifacts

0

Freshness

Mar 1, 2026

Verified · editorial-content · No verified compatibility signals · 1 GitHub star

Published capability contract available. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 3/1/2026.

1 GitHub star · Schema refs published · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Mar 1, 2026

Vendor

Dorukardahan

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Published capability contract available. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 3/1/2026.

Setup snapshot

git clone https://github.com/dorukardahan/ZeroAPI.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Dorukardahan

profile · medium confidence · observed Mar 1, 2026
Compatibility (2)

Protocol compatibility

OpenClaw

contract · medium confidence · observed Feb 24, 2026

Auth modes

api_key, oauth

contract · high confidence · observed Feb 24, 2026
Artifact (1)

Machine-readable schemas

OpenAPI or schema references published

contract · high confidence · observed Feb 24, 2026
Adoption (1)

Adoption signal

1 GitHub star

profile · medium confidence · observed Mar 1, 2026
Security (1)

Handshake status

UNKNOWN

trust · medium confidence · observed: unknown
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence · observed Apr 15, 2026

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

text

openclaw models status

text

/agent <agent-id> <instruction>

text

/agent codex Write a Python function that parses RFC 3339 timestamps with timezone support. Return only the code.

/agent gemini-researcher Analyze the differences between SQLite WAL mode and journal mode. Include benchmarks and a recommendation.

/agent gemini-fast Convert the following list into a markdown table with columns: Name, Role, Status.

/agent kimi-orchestrator Coordinate: (1) gemini-researcher gathers data on X, (2) codex writes a parser, (3) report results.

text

/agent devops Set up a systemd service for the memory API with health checks and auto-restart

/agent researcher Analyze the latest papers on mixture-of-experts architectures. Focus on routing efficiency.

/agent content-writer Write a blog post about multi-model routing. Target audience: developers running self-hosted AI agents.

/agent community Review the last 24 hours of community posts. Flag any that need moderation.

text

~/.openclaw/workspace-devops/
├── AGENTS.md          # DevOps-specific instructions and runbooks
├── MEMORY.md          # Infrastructure decisions, deployment history
└── skills/            # DevOps-relevant skills only

json

"imageModel": {
  "primary": "google-gemini-cli/gemini-3-pro-preview",
  "fallbacks": [
    "google-gemini-cli/gemini-3-flash-preview",
    "anthropic/claude-opus-4-6"
  ]
}

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready

Route tasks to the best AI model across paid subscriptions (Claude, ChatGPT, Codex, Gemini, Kimi) via the OpenClaw gateway. Use when the user mentions model routing, multi-model setup, "use Codex for this", "delegate to Gemini", "route to the best model", or agent delegation, or has OpenClaw agents configured with multiple providers. Do NOT use for single-model conversations or general chat. (name: zeroapi, version: 2.3.0)

Full README

name: zeroapi
version: 2.3.0
description: >
  Route tasks to the best AI model across paid subscriptions (Claude, ChatGPT, Codex, Gemini, Kimi) via OpenClaw gateway. Use when user mentions model routing, multi-model setup, "use Codex for this", "delegate to Gemini", "route to the best model", agent delegation, or has OpenClaw agents configured with multiple providers. Do NOT use for single-model conversations or general chat.
homepage: https://github.com/dorukardahan/ZeroAPI
user-invocable: true
compatibility: Requires OpenClaw 2026.2.6+ with at least one AI subscription. Bootstrap budget config requires 2026.2.14+.
metadata: {"openclaw":{"emoji":"⚡","category":"routing","os":["darwin","linux"],"requires":{"anyBins":["openclaw","claude"],"config":["agents"]}}}

ZeroAPI — Subscription-Based Model Routing

You are an OpenClaw agent. This skill teaches you HOW to route tasks to the right model across your available providers. You do NOT call external APIs — OpenClaw handles connections. Your job is to CLASSIFY incoming tasks and DELEGATE to the appropriate agent/model.

First-Time Setup

When this skill is first loaded, determine the user's available providers:

  1. Ask: "Which AI subscriptions do you have?" (Claude Max 5x/20x, ChatGPT Plus/Pro, Gemini Advanced, Kimi)
  2. Map subscriptions to available tiers (see table below)
  3. Disable tiers for missing providers — those decision steps get skipped
  4. Confirm the active configuration with the user

If only Claude is available, all tasks stay on Opus. No routing needed — but conflict resolution and collaboration patterns still apply for judging task complexity.

To verify providers are actually working after setup, ask the user to run:

openclaw models status

Any model showing missing or auth_expired is not usable. Remove it from your active tiers until the user fixes it.

For full provider configuration details, consult references/provider-config.md (in the same directory as this SKILL.md).

Model Tiers

| Tier | Model | OpenClaw ID | Speed | TTFT | Intelligence | Context | Best At |
|------|-------|-------------|-------|------|--------------|---------|---------|
| SIMPLE | Gemini 2.5 Flash-Lite | google-gemini-cli/gemini-2.5-flash-lite | 495 tok/s | 0.23s | 21.6 | 1M | Low-latency pings, trivial format tasks |
| FAST | Gemini 3 Flash | google-gemini-cli/gemini-3-flash-preview | 206 tok/s | 12.75s | 46.4 | 1M | Instruction following, structured output, heartbeats |
| RESEARCH | Gemini 3 Pro | google-gemini-cli/gemini-3-pro-preview | 131 tok/s | 29.59s | 48.4 | 1M | Scientific research, long context analysis |
| CODE | GPT-5.3 Codex | openai-codex/gpt-5.3-codex | 113 tok/s | 20.00s | 51.5 | 266K | Code generation, math (99.0) |
| DEEP | Claude Opus 4.6 | anthropic/claude-opus-4-6 | 67 tok/s | 1.76s | 53.0 | 200K | Reasoning, planning, judgment |
| ORCHESTRATE | Kimi K2.5 | kimi-coding/k2p5 | 39 tok/s | 1.65s | 46.7 | 128K | Multi-agent orchestration (TAU-2: 0.959) |

Key benchmark scores (higher = better):

  • GPQA (science): Gemini Pro 0.908, Opus 0.769, Codex 0.738*
  • Coding (SWE-bench): Codex 49.3*, Opus 43.3, Gemini Pro 35.1
  • Math (AIME '25): Codex 99.0*, Gemini Flash 97.0, Opus 54.0
  • IFBench (instruction following): Gemini Flash 0.780, Opus 0.639, Codex 0.590*
  • TAU-2 (agentic tool use): Kimi K2.5 0.959, Codex 0.811*, Opus 0.780

Scores marked with * are estimated from vendor reports, not independently verified. Source: Artificial Analysis API v4, February 2026. Structured data in benchmarks.json.

Decision Algorithm

Walk through these 9 steps IN ORDER for every incoming task. The FIRST match wins. If a required model is unavailable, skip that step and continue to the next.

Estimating token count for Step 1: Count characters in the input and divide by 4. 100k tokens ≈ 400,000 characters. If the user pastes a large file, codebase, or says "analyze this entire repo," assume it exceeds 100k.
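The chars/4 heuristic above can be sketched in a few lines. This is an illustrative snippet, not part of ZeroAPI or OpenClaw; the names `estimateTokens`, `exceedsLongContext`, and the threshold constant are assumptions.

```typescript
// Rough token estimate per the heuristic above: characters / 4.
const LONG_CONTEXT_TOKENS = 100_000;

function estimateTokens(input: string): number {
  return Math.ceil(input.length / 4);
}

function exceedsLongContext(input: string): boolean {
  // 100k tokens ≈ 400,000 characters, the Step 1 boundary.
  return estimateTokens(input) > LONG_CONTEXT_TOKENS;
}
```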

Step 1: Context > 100k tokens?

Signals: large file, long document, paste, bulk, CSV, log dump, entire codebase, "analyze this PDF" → Route to RESEARCH (Gemini Pro, 1M context window) / fallback: Opus (200K limit)

Step 2: Math / proof / numerical reasoning?

Signals: calculate, solve, equation, proof, integral, derivative, probability, statistics, optimize, formula, theorem → Route to CODE (Codex, Math: 99.0) / fallback: Gemini Flash (Math: 97.0) / Opus

Step 3: Code writing / generation?

Signals: write code, implement, function, class, refactor, create script, migration, API endpoint, test, unit test, pull request, diff, patch → Route to CODE (Codex, Coding: 49.3) / fallback: Opus

Step 4: Code review / architecture / security?

Signals: review, audit, architecture, design, trade-off, should I use, which approach, security review, best practice, code smell → Stay on DEEP (Opus, Intelligence: 53.0) — always stays on main agent

Step 5: Speed critical / trivial task?

Signals: quick, fast, simple, format, convert, summarize briefly, list, extract, translate short text, rename, timestamp, one-liner → Route to FAST (Flash, 206 tok/s, IFBench 0.780) / fallback: Flash-Lite (for sub-second latency) / Opus

Note: For tasks where sub-second TTFT matters more than intelligence (pings, health checks), use SIMPLE (Flash-Lite, 0.23s TTFT). For heartbeats and cron jobs, use FAST (Flash) — it has much better instruction following (IFBench 0.780; Flash-Lite has no verified IFBench score).

Step 6: Research / scientific / factual?

Signals: research, find out, what is, explain, compare, analyze, paper, study, evidence, fact-check, deep dive, investigate → Route to RESEARCH (Gemini Pro, GPQA: 0.908) / fallback: Opus

Step 7: Multi-step tool pipeline?

Signals: orchestrate, coordinate, pipeline, multi-step, workflow, chain, sequence of tasks, parallel, fan-out, combine results → Route to ORCHESTRATE (Kimi K2.5, TAU-2: 0.959) / fallback: Codex / Opus

Step 8: Instruction following / structured output?

Signals: follow these rules exactly, format as, JSON schema, strict template, fill in, structured, comply, checklist, table generation → Route to FAST (Gemini Flash, IFBench: 0.780) / fallback: Opus

Step 9: Default

If no step above matched clearly: → Stay on DEEP (Opus, Intelligence: 53.0) — safest all-rounder
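The nine-step walk above can be sketched as a first-match rule table. This is a hedged illustration: the signal lists are abbreviated, and `route` and `RULES` are not ZeroAPI or OpenClaw APIs.

```typescript
// First-match walk over the nine steps above; order encodes priority.
type Tier = "SIMPLE" | "FAST" | "RESEARCH" | "CODE" | "DEEP" | "ORCHESTRATE";

const RULES: { tier: Tier; signals: RegExp }[] = [
  { tier: "CODE", signals: /\b(calculate|solve|equation|proof|integral)\b/i },      // Step 2: math
  { tier: "CODE", signals: /\b(write code|implement|refactor|unit test)\b/i },      // Step 3: code writing
  { tier: "DEEP", signals: /\b(review|audit|architecture|trade-off)\b/i },          // Step 4: review stays on Opus
  { tier: "FAST", signals: /\b(quick|format|convert|extract)\b/i },                 // Step 5: speed critical
  { tier: "RESEARCH", signals: /\b(research|compare|analyze|fact-check)\b/i },      // Step 6: research
  { tier: "ORCHESTRATE", signals: /\b(orchestrate|pipeline|workflow|fan-out)\b/i }, // Step 7: multi-step tools
  { tier: "FAST", signals: /\b(JSON schema|strict template|checklist)\b/i },        // Step 8: structured output
];

function route(task: string, estimatedTokens: number): Tier {
  if (estimatedTokens > 100_000) return "RESEARCH"; // Step 1: context overflow
  for (const rule of RULES) {
    if (rule.signals.test(task)) return rule.tier;  // first match wins
  }
  return "DEEP"; // Step 9: default to Opus
}
```

Note how the rule order encodes the priority from the disambiguation examples: "Quickly solve this integral" hits the math rule before the speed rule.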

Disambiguation Examples

When a task matches multiple steps:

  • "Analyze this 200-page PDF and write a Python parser for it" → Step 1 wins (context size), route to RESEARCH. Then delegate code writing to CODE as a follow-up.
  • "Quickly solve this integral" → Step 2 wins over Step 5 (math trumps speed).
  • "Generate a JSON schema for this API" → Step 8 wins (structured output, not code writing).
  • "Review this code and refactor the authentication module" → Step 4 wins for review, then Step 3 for the refactor (delegate to CODE).

When NOT to Route

Do NOT route away from the current model when:

  1. User explicitly requests a model. "Use Opus for this" or "don't delegate this" — always respect direct instructions.
  2. Security-sensitive tasks. If the task involves credentials, private keys, secrets, or personally identifiable data, keep it on the main agent. Do not send sensitive content to sub-agents.
  3. Debugging a specific model. If the user is testing or comparing model behavior, route to the model they specify.
  4. Mid-conversation continuity. If you are deep in a multi-turn conversation and the user asks a quick follow-up, do not switch models just because the follow-up is "simple." Stay on the current model for context continuity unless the user explicitly asks to delegate.

Conflict Resolution

When multiple steps seem to match, resolve with these priority rules:

  1. Judgment trumps speed. If the task has ambiguity, nuance, or risk — stay on Opus.
  2. Specialist trumps generalist. If a model has a standout benchmark for the exact task type, prefer it.
  3. Code writing → Codex. Code review → Opus. Different models for writing vs judging.
  4. Context overflow → Gemini. Only Gemini models handle 1M context.
  5. TTFT matters for interactive tasks. Flash-Lite (0.23s), Kimi (1.65s), and Opus (1.76s) respond fast. Codex (20s) and Pro (29.59s) are slow to start — don't use them for quick back-and-forth.
  6. When truly tied → Opus. Highest general intelligence, lowest risk of subtle errors.

Sub-Agent Delegation

Use OpenClaw's agent system to delegate:

/agent <agent-id> <instruction>
  1. You send /agent codex <instruction> — OpenClaw spawns the sub-agent with that instruction.
  2. The sub-agent runs in its own workspace and returns a text response.
  3. Sub-agents do NOT share your conversation context or workspace files. Pass ALL necessary context in the instruction.

What to pass: The specific task, relevant code snippets, output format expectations, and constraints.

Examples

/agent codex Write a Python function that parses RFC 3339 timestamps with timezone support. Return only the code.

/agent gemini-researcher Analyze the differences between SQLite WAL mode and journal mode. Include benchmarks and a recommendation.

/agent gemini-fast Convert the following list into a markdown table with columns: Name, Role, Status.

/agent kimi-orchestrator Coordinate: (1) gemini-researcher gathers data on X, (2) codex writes a parser, (3) report results.

Error Handling and Retries

  1. Timeout (no response within 60s): Retry once on same model. If it fails again, fall to next fallback.
  2. Auth error (401/403): Do NOT retry — fall to next fallback immediately and tell user to re-authenticate. See references/oauth-setup.md.
  3. Rate limit (429): Wait 30 seconds, retry once. If still limited, fall to next fallback.
  4. Partial/garbage response: Retry once. If still broken, fall to next fallback.
  5. Model unavailable: Skip that tier entirely and continue.

Maximum retries: 1 retry on same model, then next fallback. If ALL fallbacks fail, stay on Opus. Never retry more than 3 times total across all fallbacks.

When a fallback is triggered, briefly inform the user:

"Codex is unavailable, routing to Opus instead."

Multi-Turn Conversation Routing

  • Stay on the same model for follow-up messages in the same topic. Context continuity matters more than optimal model selection.
  • Re-route only when the task type clearly changes. Example: user discusses architecture (Opus) → then says "now write the implementation" → delegate code writing to Codex.

When switching models mid-conversation:

  1. Summarize the relevant context from the current conversation.
  2. Pass that summary as part of the delegation instruction.
  3. Continue on the original model (Opus) with awareness of what the sub-agent produced.
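The handoff steps above amount to packing a context summary into the delegation instruction, since sub-agents share no conversation state. A minimal sketch; `buildHandoff` is a hypothetical helper, not an OpenClaw API.

```typescript
// Compose a /agent delegation string that carries the conversation summary.
function buildHandoff(opts: {
  agentId: string;
  task: string;
  contextSummary: string;
  outputFormat?: string; // optional, per "What to pass" above
}): string {
  const parts = [
    `Context so far: ${opts.contextSummary}`,
    `Task: ${opts.task}`,
  ];
  if (opts.outputFormat) parts.push(`Output format: ${opts.outputFormat}`);
  return `/agent ${opts.agentId} ${parts.join(" ")}`;
}
```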

Workspace Isolation

  • Sub-agents cannot read your files — paste content into the instruction.
  • Sub-agents cannot write to your workspace — output comes back as text.
  • Sub-agents share nothing with each other — complete isolation by design.

Specialist Agents (Optional)

Beyond the 5 core agents (main, codex, gemini-researcher, gemini-fast, kimi-orchestrator), you can add domain-specific specialist agents. Specialists have their own workspace with tailored AGENTS.md, MEMORY.md, and skills for a specific domain.

When to use specialists

  • You have distinct project domains (infrastructure, content, community, etc.)
  • Each domain needs its own persistent memory and context
  • You want the main agent to orchestrate without carrying all domain knowledge

Example specialists

| Agent | Primary Model | Why That Model | Use Case |
|-------|---------------|----------------|----------|
| devops | Codex | Code generation, shell scripts, config files | Infrastructure, deployment, monitoring scripts |
| researcher | Gemini Pro | GPQA 0.908, 1M context | Deep research, fact-checking, literature review |
| content-writer | Opus | Intelligence 53.0, best judgment | Blog posts, documentation, copywriting |
| community | Flash | 206 tok/s, IFBench 0.780 | Moderation, quick responses, community engagement |

Delegating to specialists

/agent devops Set up a systemd service for the memory API with health checks and auto-restart

/agent researcher Analyze the latest papers on mixture-of-experts architectures. Focus on routing efficiency.

/agent content-writer Write a blog post about multi-model routing. Target audience: developers running self-hosted AI agents.

/agent community Review the last 24 hours of community posts. Flag any that need moderation.

Specialist workspace structure

Each specialist gets its own workspace directory with domain-specific files:

~/.openclaw/workspace-devops/
├── AGENTS.md          # DevOps-specific instructions and runbooks
├── MEMORY.md          # Infrastructure decisions, deployment history
└── skills/            # DevOps-relevant skills only

This keeps domain context separate. The main orchestrator does not load devops runbooks, and the devops agent does not carry content writing guidelines.

Note: Workspace directory names are arbitrary — workspace-devops, workspace-infra, workspace-ops all work. The agent id and workspace path don't need to match.

See examples/specialist-agents/ for a ready-to-use config with 4 specialist agents.

Fallback depth: Specialist agents in the example use 2 fallbacks instead of the core agents' 3. This is intentional — specialists are narrower in scope and trade some redundancy for simpler configs. Add more fallbacks if your specialists handle critical tasks.

Image Model Routing

Set imageModel in your agent config to route vision/image analysis tasks to the best multimodal model:

"imageModel": {
  "primary": "google-gemini-cli/gemini-3-pro-preview",
  "fallbacks": [
    "google-gemini-cli/gemini-3-flash-preview",
    "anthropic/claude-opus-4-6"
  ]
}

Gemini Pro is recommended as the primary image model — it has strong multimodal capabilities and 1M context for analyzing large images or multiple images in one request. Flash is a good fallback for speed, and Opus handles vision well as a last resort.

Place this in agents.defaults to apply to all agents, or set it per-agent. Agents without imageModel typically fall back to their primary text model for vision tasks (exact behavior may vary by OpenClaw version — check docs.openclaw.ai for current defaults).

Collaboration Patterns

Pipeline (sequential)

Research Agent → Main Agent → Code Agent
(gather facts)   (plan)       (implement)

Choose this when the task requires gathering facts before implementing.

Parallel + Merge

Main Agent ──┬── Code Agent (approach A)
             └── Research Agent (approach B)
Then: Main merges and picks the best parts.

Choose this when exploring multiple solutions or under time pressure.

Adversarial Review

Code Agent writes → Main Agent critiques → Code Agent revises

Choose this for security-sensitive code or production-critical changes.

Orchestrated (Kimi-led)

/agent kimi-orchestrator Plan and execute: <complex multi-agent task>

Choose this for tasks requiring 3+ agents in complex dependency graphs. Caution: Kimi is slowest (39 tok/s) but best at tool orchestration (TAU-2: 0.959).
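The sequential Pipeline pattern above can also be written down as data: each stage names an agent and an instruction template that embeds the previous stage's output. A sketch with illustrative names (`Stage`, `pipelinePlan`); the agent ids mirror the examples earlier in this README.

```typescript
// A pure plan for: Research Agent → Main Agent → Code Agent.
interface Stage {
  agentId: string;
  instruction: (prev: string) => string; // prev = previous stage's output
}

function pipelinePlan(topic: string): Stage[] {
  return [
    { agentId: "gemini-researcher", instruction: () => `Gather facts on: ${topic}` },
    { agentId: "main", instruction: (prev) => `Plan an implementation using these facts: ${prev}` },
    { agentId: "codex", instruction: (prev) => `Implement this plan. Return only the code. Plan: ${prev}` },
  ];
}
```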

Fallback Chains

When a model is unavailable or rate-limited, fall through in reliability order.

Full Stack (4 providers)

| Task Type | Primary | Fallback 1 | Fallback 2 | Fallback 3 |
|-----------|---------|------------|------------|------------|
| Reasoning | Opus | Codex | Gemini Pro | Kimi K2.5 |
| Code | Codex | Opus | Gemini Pro | Kimi K2.5 |
| Research | Gemini Pro | Opus | Codex | Kimi K2.5 |
| Fast tasks | Flash-Lite | Flash | Opus | Codex |
| Agentic | Kimi K2.5 | Codex | Gemini Pro | Opus |

Important: Always use cross-provider fallbacks. Same-provider fallbacks (e.g., Gemini Pro → Flash) help with model-specific issues but not provider outages. Every fallback chain should span at least 2 different providers.
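The full-stack chains above can be kept as data and checked against the cross-provider rule. A sketch; the model keys and provider mapping here are inferred from the tier table, not an official ZeroAPI schema.

```typescript
// Provider for each model key (inferred from the tier table's OpenClaw IDs).
const PROVIDER: Record<string, string> = {
  "opus": "anthropic", "codex": "openai",
  "gemini-pro": "google", "flash": "google", "flash-lite": "google",
  "kimi-k2.5": "kimi",
};

const FULL_STACK_CHAINS: Record<string, string[]> = {
  reasoning: ["opus", "codex", "gemini-pro", "kimi-k2.5"],
  code: ["codex", "opus", "gemini-pro", "kimi-k2.5"],
  research: ["gemini-pro", "opus", "codex", "kimi-k2.5"],
  fast: ["flash-lite", "flash", "opus", "codex"],
  agentic: ["kimi-k2.5", "codex", "gemini-pro", "opus"],
};

// Count distinct providers; every chain should span at least 2.
function spansProviders(chain: string[]): number {
  return new Set(chain.map((m) => PROVIDER[m])).size;
}
```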

Claude + Gemini (2 providers)

| Task Type | Primary | Fallback 1 | Fallback 2 |
|-----------|---------|------------|------------|
| Reasoning | Opus | Gemini Pro | — |
| Code | Opus | Gemini Pro | — |
| Research | Gemini Pro | Opus | — |
| Fast tasks | Flash-Lite | Flash | Opus |

Claude + Codex (2 providers)

| Task Type | Primary | Fallback 1 |
|-----------|---------|------------|
| Reasoning | Opus | Codex |
| Code | Codex | Opus |
| Everything else | Opus | Codex |

Claude Only (1 provider)

All tasks route to Opus. No fallback needed.

Provider Setup

For auth setup, OAuth flows (including headless VPS), and multi-device safety details, consult references/oauth-setup.md (in the same directory as this SKILL.md).

For provider configuration (openclaw.json, per-agent models.json, Google Gemini workarounds), consult references/provider-config.md.

Quick reference:

| Provider | Auth Method | Maintenance |
|----------|-------------|-------------|
| Anthropic | Setup-token (OAuth) | Low — auto-refresh |
| Google Gemini | OAuth (CLI plugin) | Very low — long-lived tokens |
| OpenAI Codex | OAuth (ChatGPT PKCE) | Low — auto-refresh |
| Kimi | Static API key | None — never expires |

Troubleshooting

For detailed troubleshooting, consult references/troubleshooting.md (in the same directory as this SKILL.md). Common issues:

  • "No API provider registered for api: undefined" → Missing api field in provider config
  • "API key not valid" with Gemini subscription → Wrong API type; use google-gemini-cli not google-generative-ai
  • Model shows missing → Model ID mismatch; gemini-2.5-flash-lite (no -preview suffix)
  • Codex 401 Unauthorized → Token expired; re-run OAuth flow via references/oauth-setup.md
  • Sub-agent "Unknown model" → Provider missing from sub-agent's auth-profile

Cost Summary

| Setup | Monthly | Notes |
|-------|---------|-------|
| Claude only (Max 5x) | $100 | No routing, Opus handles everything |
| Claude only (Max 20x) | $200 | No routing, 20x rate limits |
| Balanced (Max 20x + Gemini) | $220 | Adds Flash speed + Pro research |
| Code-focused (+ ChatGPT Plus) | $240 | Adds Codex for code + math |
| Full stack (all 4, ChatGPT Plus) | $250 | Full specialization |
| Full stack Pro (all 4, ChatGPT Pro) | $430 | Maximum rate limits |

Source: Artificial Analysis API v4, February 2026. Codex scores estimated (*) from OpenAI blog data. Structured benchmark data available in benchmarks.json.

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Verified · capability-contract

Contract coverage

Status

ready

Auth

api_key, oauth

Streaming

No

Data region

global

Protocol support

OpenClaw: self-declared

Requires: openclew, lang:typescript

Forbidden: none

Guardrails

Operational confidence: medium

Contract is available with explicit auth and schema references.
Trust confidence is not low and verification freshness is acceptable.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/snapshot"
curl -s "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract"
curl -s "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "ready",
  "authModes": [
    "api_key",
    "oauth"
  ],
  "requires": [
    "openclew",
    "lang:typescript"
  ],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": "https://github.com/dorukardahan/ZeroAPI#input",
  "outputSchemaRef": "https://github.com/dorukardahan/ZeroAPI#output",
  "dataRegion": "global",
  "contractUpdatedAt": "2026-02-24T19:42:01.234Z",
  "sourceUpdatedAt": "2026-02-24T19:42:01.234Z",
  "freshnessSeconds": 4423362
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T00:24:43.402Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "add",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:add|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Dorukardahan",
    "href": "https://github.com/dorukardahan/ZeroAPI",
    "sourceUrl": "https://github.com/dorukardahan/ZeroAPI",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-03-01T06:03:28.810Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "1 GitHub stars",
    "href": "https://github.com/dorukardahan/ZeroAPI",
    "sourceUrl": "https://github.com/dorukardahan/ZeroAPI",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-03-01T06:03:28.810Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-24T19:42:01.234Z",
    "isPublic": true
  },
  {
    "factKey": "auth_modes",
    "category": "compatibility",
    "label": "Auth modes",
    "value": "api_key, oauth",
    "href": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
    "sourceType": "contract",
    "confidence": "high",
    "observedAt": "2026-02-24T19:42:01.234Z",
    "isPublic": true
  },
  {
    "factKey": "schema_refs",
    "category": "artifact",
    "label": "Machine-readable schemas",
    "value": "OpenAPI or schema references published",
    "href": "https://github.com/dorukardahan/ZeroAPI#input",
    "sourceUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
    "sourceType": "contract",
    "confidence": "high",
    "observedAt": "2026-02-24T19:42:01.234Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
