Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Route tasks to the best AI model across paid subscriptions (Claude, ChatGPT, Codex, Gemini, Kimi) via the OpenClaw gateway. Use when the user mentions model routing, multi-model setup, "use Codex for this", "delegate to Gemini", "route to the best model", agent delegation, or has OpenClaw agents configured with multiple providers. Do NOT use for single-model conversations or general chat. (Skill metadata: name: zeroapi, version: 2.3.0.) Published capability contract available. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 3/1/2026.
Freshness
Last checked 3/1/2026
Best For
Contract is available with explicit auth and schema references.
Not Ideal For
zeroapi is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.
Evidence Sources Checked
editorial-content, capability-contract, runtime-metrics, public facts pack
Public facts
7
Change events
1
Artifacts
0
Freshness
Mar 1, 2026
Published capability contract available. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 3/1/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Mar 1, 2026
Vendor
Dorukardahan
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Published capability contract available. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 3/1/2026.
Setup snapshot
git clone https://github.com/dorukardahan/ZeroAPI.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Dorukardahan
Protocol compatibility
OpenClaw
Auth modes
api_key, oauth
Machine-readable schemas
OpenAPI or schema references published
Adoption signal
1 GitHub star
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
text
openclaw models status
text
/agent <agent-id> <instruction>
text
/agent codex Write a Python function that parses RFC 3339 timestamps with timezone support. Return only the code.
/agent gemini-researcher Analyze the differences between SQLite WAL mode and journal mode. Include benchmarks and a recommendation.
/agent gemini-fast Convert the following list into a markdown table with columns: Name, Role, Status.
/agent kimi-orchestrator Coordinate: (1) gemini-researcher gathers data on X, (2) codex writes a parser, (3) report results.
text
/agent devops Set up a systemd service for the memory API with health checks and auto-restart
/agent researcher Analyze the latest papers on mixture-of-experts architectures. Focus on routing efficiency.
/agent content-writer Write a blog post about multi-model routing. Target audience: developers running self-hosted AI agents.
/agent community Review the last 24 hours of community posts. Flag any that need moderation.
text
~/.openclaw/workspace-devops/
├── AGENTS.md   # DevOps-specific instructions and runbooks
├── MEMORY.md   # Infrastructure decisions, deployment history
└── skills/     # DevOps-relevant skills only
json
"imageModel": {
"primary": "google-gemini-cli/gemini-3-pro-preview",
"fallbacks": [
"google-gemini-cli/gemini-3-flash-preview",
"anthropic/claude-opus-4-6"
]
}
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
name: zeroapi
version: 2.3.0
description: >
  Route tasks to the best AI model across paid subscriptions (Claude, ChatGPT, Codex, Gemini, Kimi) via OpenClaw gateway. Use when user mentions model routing, multi-model setup, "use Codex for this", "delegate to Gemini", "route to the best model", agent delegation, or has OpenClaw agents configured with multiple providers. Do NOT use for single-model conversations or general chat.
You are an OpenClaw agent. This skill teaches you HOW to route tasks to the right model across your available providers. You do NOT call external APIs — OpenClaw handles connections. Your job is to CLASSIFY incoming tasks and DELEGATE to the appropriate agent/model.
When this skill is first loaded, determine the user's available providers:
If only Claude is available, all tasks stay on Opus. No routing needed — but conflict resolution and collaboration patterns still apply for judging task complexity.
To verify providers are actually working after setup, ask the user to run:
openclaw models status
Any model showing missing or auth_expired is not usable. Remove it from your active tiers until the user fixes it.
For full provider configuration details, consult references/provider-config.md (in the same directory as this SKILL.md).
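The pruning rule above can be sketched mechanically. This is an illustrative sketch, not part of the skill: the status values `missing` and `auth_expired` come from the text, but the dict shapes and the function name `prune_tiers` are assumptions about how a parsed `openclaw models status` result might look.

```python
# Hypothetical sketch: drop unusable models from the routing tiers based on a
# parsed status report. Dict shapes and field values are illustrative.
UNUSABLE_STATUSES = {"missing", "auth_expired"}

def prune_tiers(tiers, statuses):
    """Keep only tiers whose model is known and reports a usable status."""
    return {
        tier: model
        for tier, model in tiers.items()
        if model in statuses and statuses[model] not in UNUSABLE_STATUSES
    }

tiers = {
    "CODE": "openai-codex/gpt-5.3-codex",
    "FAST": "google-gemini-cli/gemini-3-flash-preview",
}
statuses = {
    "openai-codex/gpt-5.3-codex": "auth_expired",
    "google-gemini-cli/gemini-3-flash-preview": "ok",
}
print(prune_tiers(tiers, statuses))  # only the FAST tier survives
```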
| Tier | Model | OpenClaw ID | Speed | TTFT | Intelligence | Context | Best At |
|------|-------|-------------|-------|------|-------------|---------|---------|
| SIMPLE | Gemini 2.5 Flash-Lite | google-gemini-cli/gemini-2.5-flash-lite | 495 tok/s | 0.23s | 21.6 | 1M | Low-latency pings, trivial format tasks |
| FAST | Gemini 3 Flash | google-gemini-cli/gemini-3-flash-preview | 206 tok/s | 12.75s | 46.4 | 1M | Instruction following, structured output, heartbeats |
| RESEARCH | Gemini 3 Pro | google-gemini-cli/gemini-3-pro-preview | 131 tok/s | 29.59s | 48.4 | 1M | Scientific research, long context analysis |
| CODE | GPT-5.3 Codex | openai-codex/gpt-5.3-codex | 113 tok/s | 20.00s | 51.5 | 266K | Code generation, math (99.0) |
| DEEP | Claude Opus 4.6 | anthropic/claude-opus-4-6 | 67 tok/s | 1.76s | 53.0 | 200K | Reasoning, planning, judgment |
| ORCHESTRATE | Kimi K2.5 | kimi-coding/k2p5 | 39 tok/s | 1.65s | 46.7 | 128K | Multi-agent orchestration (TAU-2: 0.959) |
Key benchmark scores (higher = better):
Scores marked with * are estimated from vendor reports, not independently verified. Source: Artificial Analysis API v4, February 2026. Structured data in benchmarks.json.
Walk through these 9 steps IN ORDER for every incoming task. The FIRST match wins. If a required model is unavailable, skip that step and continue to the next.
Estimating token count for Step 1: Count characters in the input and divide by 4. 100k tokens ≈ 400,000 characters. If the user pastes a large file, codebase, or says "analyze this entire repo," assume it exceeds 100k.
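The chars/4 heuristic is simple enough to sketch directly; the function names here are illustrative, not from the skill:

```python
def estimate_tokens(text):
    """Rough token estimate using the characters-divided-by-4 heuristic."""
    return len(text) // 4

def exceeds_long_context_threshold(text, limit=100_000):
    """True when the estimate crosses the Step 1 threshold (~400k chars)."""
    return estimate_tokens(text) > limit

print(estimate_tokens("x" * 400_000))  # → 100000, right at the boundary
```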
Signals: large file, long document, paste, bulk, CSV, log dump, entire codebase, "analyze this PDF" → Route to RESEARCH (Gemini Pro, 1M context window) / fallback: Opus (200K limit)
Signals: calculate, solve, equation, proof, integral, derivative, probability, statistics, optimize, formula, theorem → Route to CODE (Codex, Math: 99.0) / fallback: Gemini Flash (Math: 97.0) / Opus
Signals: write code, implement, function, class, refactor, create script, migration, API endpoint, test, unit test, pull request, diff, patch → Route to CODE (Codex, Coding: 49.3) / fallback: Opus
Signals: review, audit, architecture, design, trade-off, should I use, which approach, security review, best practice, code smell → Stay on DEEP (Opus, Intelligence: 53.0) — always stays on main agent
Signals: quick, fast, simple, format, convert, summarize briefly, list, extract, translate short text, rename, timestamp, one-liner → Route to FAST (Flash, 206 tok/s, IFBench 0.780) / fallback: Flash-Lite (for sub-second latency) / Opus
Note: For tasks where sub-second TTFT matters more than intelligence (pings, health checks), use SIMPLE (Flash-Lite, 0.23s TTFT). For heartbeats and cron jobs, use FAST (Flash) — it has much better instruction following (IFBench 0.780; Flash-Lite has no verified IFBench score).
Signals: research, find out, what is, explain, compare, analyze, paper, study, evidence, fact-check, deep dive, investigate → Route to RESEARCH (Gemini Pro, GPQA: 0.908) / fallback: Opus
Signals: orchestrate, coordinate, pipeline, multi-step, workflow, chain, sequence of tasks, parallel, fan-out, combine results → Route to ORCHESTRATE (Kimi K2.5, TAU-2: 0.959) / fallback: Codex / Opus
Signals: follow these rules exactly, format as, JSON schema, strict template, fill in, structured, comply, checklist, table generation → Route to FAST (Gemini Flash, IFBench: 0.780) / fallback: Opus
If no step above matched clearly: → Stay on DEEP (Opus, Intelligence: 53.0) — safest all-rounder
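The ordered, first-match-wins walk through the steps can be sketched as a keyword router. The signal lists below are abbreviated from the steps above, and the exact matching logic (substring checks) is an assumption; the real skill relies on the agent's own judgment, not literal string matching.

```python
# Illustrative first-match-wins router over abbreviated signal lists.
# Tier names follow the model table earlier in this document.
ROUTING_STEPS = [
    ("RESEARCH", ["large file", "entire codebase", "log dump"]),
    ("CODE", ["calculate", "solve", "equation", "proof"]),
    ("CODE", ["write code", "implement", "refactor", "unit test"]),
    ("DEEP", ["review", "architecture", "trade-off", "security review"]),
    ("FAST", ["quick", "format", "convert", "one-liner"]),
    ("RESEARCH", ["research", "compare", "fact-check", "deep dive"]),
    ("ORCHESTRATE", ["orchestrate", "pipeline", "fan-out", "workflow"]),
    ("FAST", ["json schema", "strict template", "checklist"]),
]

def route(task, available):
    """Return the first matching tier whose model is available, else DEEP."""
    text = task.lower()
    for tier, signals in ROUTING_STEPS:
        if tier in available and any(s in text for s in signals):
            return tier  # first match wins; unavailable tiers are skipped
    return "DEEP"  # default: safest all-rounder

print(route("Orchestrate a fan-out pipeline", {"ORCHESTRATE", "DEEP"}))  # → ORCHESTRATE
```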
When a task matches multiple steps:
Do NOT route away from the current model when:
When multiple steps seem to match, resolve with these priority rules:
Use OpenClaw's agent system to delegate:
/agent <agent-id> <instruction>
/agent codex <instruction> — OpenClaw spawns the sub-agent with that instruction.
What to pass: the specific task, relevant code snippets, output format expectations, and constraints.
/agent codex Write a Python function that parses RFC 3339 timestamps with timezone support. Return only the code.
/agent gemini-researcher Analyze the differences between SQLite WAL mode and journal mode. Include benchmarks and a recommendation.
/agent gemini-fast Convert the following list into a markdown table with columns: Name, Role, Status.
/agent kimi-orchestrator Coordinate: (1) gemini-researcher gathers data on X, (2) codex writes a parser, (3) report results.
For auth failures, consult references/oauth-setup.md.
Maximum retries: 1 retry on the same model, then the next fallback. If ALL fallbacks fail, stay on Opus. Never retry more than 3 times total across all fallbacks.
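The retry-then-fall-back rule can be sketched as a loop. This is a hedged sketch, not the skill's implementation: the function names, the `RuntimeError` failure signal, and the cap of 4 total attempts (one initial try plus 3 retries) are assumptions made to match the stated limits.

```python
def call_with_fallbacks(task, chain, invoke, final_model="anthropic/claude-opus-4-6"):
    """Try each model in the fallback chain: one retry on the same model,
    then move to the next; cap total attempts; if all fail, stay on Opus."""
    attempts = 0
    for model in chain:
        for _ in range(2):  # initial attempt + one retry on the same model
            if attempts >= 4:  # initial try + 3 retries maximum
                return final_model, None
            attempts += 1
            try:
                return model, invoke(model, task)
            except RuntimeError:
                continue  # failed: retry once, then fall through to next model
    return final_model, None  # every fallback failed: stay on Opus
```

Usage: pass `invoke` as whatever actually dispatches to a model; the sketch only demonstrates the retry accounting.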
When a fallback is triggered, briefly inform the user:
"Codex is unavailable, routing to Opus instead."
When switching models mid-conversation:
Beyond the 5 core agents (main, codex, gemini-researcher, gemini-fast, kimi-orchestrator), you can add domain-specific specialist agents. Specialists have their own workspace with tailored AGENTS.md, MEMORY.md, and skills for a specific domain.
| Agent | Primary Model | Why That Model | Use Case |
|-------|--------------|----------------|----------|
| devops | Codex | Code generation, shell scripts, config files | Infrastructure, deployment, monitoring scripts |
| researcher | Gemini Pro | GPQA 0.908, 1M context | Deep research, fact-checking, literature review |
| content-writer | Opus | Intelligence 53.0, best judgment | Blog posts, documentation, copywriting |
| community | Flash | 206 tok/s, IFBench 0.780 | Moderation, quick responses, community engagement |
/agent devops Set up a systemd service for the memory API with health checks and auto-restart
/agent researcher Analyze the latest papers on mixture-of-experts architectures. Focus on routing efficiency.
/agent content-writer Write a blog post about multi-model routing. Target audience: developers running self-hosted AI agents.
/agent community Review the last 24 hours of community posts. Flag any that need moderation.
Each specialist gets its own workspace directory with domain-specific files:
~/.openclaw/workspace-devops/
├── AGENTS.md # DevOps-specific instructions and runbooks
├── MEMORY.md # Infrastructure decisions, deployment history
└── skills/ # DevOps-relevant skills only
This keeps domain context separate. The main orchestrator does not load devops runbooks, and the devops agent does not carry content writing guidelines.
Note: Workspace directory names are arbitrary — workspace-devops, workspace-infra, workspace-ops all work. The agent id and workspace path don't need to match.
See examples/specialist-agents/ for a ready-to-use config with 4 specialist agents.
Fallback depth: Specialist agents in the example use 2 fallbacks instead of the core agents' 3. This is intentional — specialists are narrower in scope and trade some redundancy for simpler configs. Add more fallbacks if your specialists handle critical tasks.
Set imageModel in your agent config to route vision/image analysis tasks to the best multimodal model:
"imageModel": {
"primary": "google-gemini-cli/gemini-3-pro-preview",
"fallbacks": [
"google-gemini-cli/gemini-3-flash-preview",
"anthropic/claude-opus-4-6"
]
}
Gemini Pro is recommended as the primary image model — it has strong multimodal capabilities and 1M context for analyzing large images or multiple images in one request. Flash is a good fallback for speed, and Opus handles vision well as a last resort.
Place this in agents.defaults to apply to all agents, or set it per-agent. Agents without imageModel typically fall back to their primary text model for vision tasks (exact behavior may vary by OpenClaw version — check docs.openclaw.ai for current defaults).
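The resolution order implied above (primary first, then fallbacks in listed order) can be sketched as follows. The function name and the `available` set are illustrative assumptions; actual OpenClaw resolution behavior may differ by version.

```python
def resolve_image_model(config, available):
    """Pick the primary imageModel if usable, else the first usable fallback.
    Returns None when nothing matches, so the caller can fall back to its
    primary text model (assumed default behavior, per the note above)."""
    image = config.get("imageModel", {})
    candidates = [image.get("primary"), *image.get("fallbacks", [])]
    for model in candidates:
        if model in available:
            return model
    return None

cfg = {
    "imageModel": {
        "primary": "google-gemini-cli/gemini-3-pro-preview",
        "fallbacks": [
            "google-gemini-cli/gemini-3-flash-preview",
            "anthropic/claude-opus-4-6",
        ],
    }
}
# Only Opus is available, so it is chosen as the last-resort fallback:
print(resolve_image_model(cfg, {"anthropic/claude-opus-4-6"}))
```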
Research Agent → Main Agent → Code Agent
(gather facts) (plan) (implement)
Choose this when the task requires gathering facts before implementing.
Main Agent ──┬── Code Agent (approach A)
└── Research Agent (approach B)
Then: Main merges and picks the best parts.
Choose this when exploring multiple solutions or under time pressure.
Code Agent writes → Main Agent critiques → Code Agent revises
Choose this for security-sensitive code or production-critical changes.
/agent kimi-orchestrator Plan and execute: <complex multi-agent task>
Choose this for tasks requiring 3+ agents in complex dependency graphs. Caution: Kimi is slowest (39 tok/s) but best at tool orchestration (TAU-2: 0.959).
When a model is unavailable or rate-limited, fall through in reliability order.
| Task Type | Primary | Fallback 1 | Fallback 2 | Fallback 3 |
|-----------|---------|------------|------------|------------|
| Reasoning | Opus | Codex | Gemini Pro | Kimi K2.5 |
| Code | Codex | Opus | Gemini Pro | Kimi K2.5 |
| Research | Gemini Pro | Opus | Codex | Kimi K2.5 |
| Fast tasks | Flash-Lite | Flash | Opus | Codex |
| Agentic | Kimi K2.5 | Codex | Gemini Pro | Opus |
Important: Always use cross-provider fallbacks. Same-provider fallbacks (e.g., Gemini Pro → Flash) help with model-specific issues but not provider outages. Every fallback chain should span at least 2 different providers.
| Task Type | Primary | Fallback 1 | Fallback 2 |
|-----------|---------|------------|------------|
| Reasoning | Opus | Gemini Pro | — |
| Code | Opus | Gemini Pro | — |
| Research | Gemini Pro | Opus | — |
| Fast tasks | Flash-Lite | Flash | Opus |
| Task Type | Primary | Fallback 1 |
|-----------|---------|------------|
| Reasoning | Opus | Codex |
| Code | Codex | Opus |
| Everything else | Opus | Codex |
All tasks route to Opus. No fallback needed.
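The cross-provider rule from the fallback matrices can be checked mechanically. A small sketch, assuming the convention visible throughout this document that the provider is the prefix before the first `/` in an OpenClaw model ID:

```python
def provider(model_id):
    """Provider is the prefix before the first '/' in an OpenClaw model ID,
    e.g. 'anthropic/claude-opus-4-6' -> 'anthropic'."""
    return model_id.split("/", 1)[0]

def spans_two_providers(chain):
    """Every fallback chain should cover at least two distinct providers,
    so a single provider outage cannot take out the whole chain."""
    return len({provider(m) for m in chain}) >= 2

# Same-provider chain: helps with model issues, not provider outages.
print(spans_two_providers([
    "google-gemini-cli/gemini-3-pro-preview",
    "google-gemini-cli/gemini-3-flash-preview",
]))  # → False
```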
For auth setup, OAuth flows (including headless VPS), and multi-device safety details, consult references/oauth-setup.md (in the same directory as this SKILL.md).
For provider configuration (openclaw.json, per-agent models.json, Google Gemini workarounds), consult references/provider-config.md.
Quick reference:
| Provider | Auth Method | Maintenance |
|----------|-------------|-------------|
| Anthropic | Setup-token (OAuth) | Low — auto-refresh |
| Google Gemini | OAuth (CLI plugin) | Very low — long-lived tokens |
| OpenAI Codex | OAuth (ChatGPT PKCE) | Low — auto-refresh |
| Kimi | Static API key | None — never expires |
For detailed troubleshooting, consult references/troubleshooting.md (in the same directory as this SKILL.md). Common issues:
- Wrong `api` field in provider config
- Use `google-gemini-cli`, not `google-generative-ai`
- Model shows `missing` → model ID mismatch; use `gemini-2.5-flash-lite` (no `-preview` suffix)
- Auth failures: see references/oauth-setup.md

| Setup | Monthly | Notes |
|-------|---------|-------|
| Claude only (Max 5x) | $100 | No routing, Opus handles everything |
| Claude only (Max 20x) | $200 | No routing, 20x rate limits |
| Balanced (Max 20x + Gemini) | $220 | Adds Flash speed + Pro research |
| Code-focused (+ ChatGPT Plus) | $240 | Adds Codex for code + math |
| Full stack (all 4, ChatGPT Plus) | $250 | Full specialization |
| Full stack Pro (all 4, ChatGPT Pro) | $430 | Maximum rate limits |
Source: Artificial Analysis API v4, February 2026. Codex scores estimated (*) from OpenAI blog data. Structured benchmark data available in benchmarks.json.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
ready
Auth
api_key, oauth
Streaming
No
Data region
global
Protocol support
Requires: openclew, lang:typescript
Forbidden: none
Guardrails
Operational confidence: medium
curl -s "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/snapshot"
curl -s "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract"
curl -s "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "ready",
"authModes": [
"api_key",
"oauth"
],
"requires": [
"openclew",
"lang:typescript"
],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": "https://github.com/dorukardahan/ZeroAPI#input",
"outputSchemaRef": "https://github.com/dorukardahan/ZeroAPI#output",
"dataRegion": "global",
"contractUpdatedAt": "2026-02-24T19:42:01.234Z",
"sourceUpdatedAt": "2026-02-24T19:42:01.234Z",
"freshnessSeconds": 4423362
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T00:24:43.402Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "add",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:add|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Dorukardahan",
"href": "https://github.com/dorukardahan/ZeroAPI",
"sourceUrl": "https://github.com/dorukardahan/ZeroAPI",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-03-01T06:03:28.810Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "1 GitHub stars",
"href": "https://github.com/dorukardahan/ZeroAPI",
"sourceUrl": "https://github.com/dorukardahan/ZeroAPI",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-03-01T06:03:28.810Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-24T19:42:01.234Z",
"isPublic": true
},
{
"factKey": "auth_modes",
"category": "compatibility",
"label": "Auth modes",
"value": "api_key, oauth",
"href": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:42:01.234Z",
"isPublic": true
},
{
"factKey": "schema_refs",
"category": "artifact",
"label": "Machine-readable schemas",
"value": "OpenAPI or schema references published",
"href": "https://github.com/dorukardahan/ZeroAPI#input",
"sourceUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:42:01.234Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/dorukardahan-zeroapi/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]
Sponsored
Ads related to zeroapi and adjacent AI workflows.