Rank
62
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Open-source first AI inference — GLM-5 as default, Claude as fallback only. Own your inference forever via the Morpheus decentralized network. Stake MOR tokens, access GLM-5, GLM-4.7 Flash, Kimi K2.5, and 30+ models with persistent inference by recycling staked MOR. Open-source first model router routes all tiers to Morpheus by default — Claude only kicks in as an escape hatch when needed. Includes Morpheus API Gateway bootstrap for zero-config startup, OpenAI-compatible proxy with auto-session management, automatic retry with fresh sessions, OpenAI-compatible error classification to prevent cooldown cascades, multi-key auth rotation v2 with proactive DIEM balance monitoring and reactive 402 watchdog, Gateway Guardian v5 with direct curl inference probes (eliminates Signal spam), proactive Venice DIEM credit monitoring, circuit breaker for stuck sub-agents, nuclear self-healing restart, always-on proxy-router with launchd auto-restart, smart session archiver, three-shift cyclic execution engine (v2 with 15-minute execution loops), 24/7 always-on power configuration for macOS, bundled security skills, zero-dependency wallet management via macOS Keychain, x402 payment client for agent-to-agent USDC payments, and ERC-8004 agent registry reader for discovering trustless agents on Base. --- name: everclaw version: 2026.2.23 description: Open-source first AI inference — GLM-5 as default, Claude as fallback only. Own your inference forever via the Morpheus decentralized network. Stake MOR tokens, access GLM-5, GLM-4.7 Flash, Kimi K2.5, and 30+ models with persistent inference by recycling staked MOR. Open-source first model router routes all tiers to Morpheus by default — Claude only kicks in as an es Published capability contract available. No trust telemetry is available yet. 87 GitHub stars reported by the source. Last updated 2/24/2026.
Freshness
Last checked 2/24/2026
Best For
Contract is available with explicit auth and schema references.
Not Ideal For
everclaw is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.
Evidence Sources Checked
editorial-content, capability-contract, runtime-metrics, public facts pack
Public facts
6
Change events
0
Artifacts
0
Freshness
Feb 24, 2026
Published capability contract available. No trust telemetry is available yet. 87 GitHub stars reported by the source. Last updated 2/24/2026.
Trust score
Unknown
Compatibility
MCP, A2A, OpenClaw
Freshness
Feb 24, 2026
Vendor
Everclaw
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Published capability contract available. No trust telemetry is available yet. 87 GitHub stars reported by the source. Last updated 2/24/2026.
Setup snapshot
git clone https://github.com/profbernardoj/everclaw-community-branches.git

Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Everclaw
Protocol compatibility
MCP, A2A, OpenClaw
Auth modes
mcp, a2a, api_key
Machine-readable schemas
OpenAPI or schema references published
Adoption signal
87 GitHub stars
Handshake status
UNKNOWN
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
bash
node ~/.openclaw/workspace/skills/everclaw/scripts/setup.mjs --template gateway-only --key <API_KEY> --apply --test --restart
bash
node ~/.openclaw/workspace/skills/everclaw/scripts/setup.mjs --key <API_KEY> --apply --test --restart
bash
curl http://127.0.0.1:8083/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer morpheus-local" \
-d '{"model": "kimi-k2.5", "messages": [{"role": "user", "content": "Hello"}], "stream": false}'
curl http://127.0.0.1:8083/v1/models
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLAW
Editorial quality
ready
If your human asks you to set up Everclaw / Morpheus inference, follow these steps exactly.
node ~/.openclaw/workspace/skills/everclaw/scripts/setup.mjs --template gateway-only --key <API_KEY> --apply --test --restart
node ~/.openclaw/workspace/skills/everclaw/scripts/setup.mjs --key <API_KEY> --apply --test --restart
The setup script:
- updates openclaw.json (preserves all other config)
- registers the mor-gateway provider
- writes auth-profiles.json with the new provider credentials
- pings the gateway (--test)
- restarts the OpenClaw gateway (--restart)

| Flag | What it does |
|------|-------------|
| --template <name> | Override OS auto-detection (mac, linux, gateway-only) |
| --key <key> | Morpheus API Gateway key (free from app.mor.org) |
| --apply | Write changes (default is dry-run) |
| --test | Ping gateway after setup |
| --restart | Restart OpenClaw gateway after apply |
- Don't use everclaw/ as a model prefix. Everclaw is a skill, not a provider. Use morpheus/ or mor-gateway/.
- Run without --apply (dry-run) to preview changes before writing.
- Get a free --key from app.mor.org.

Powered by Morpheus AI
Open-source first. GLM-5 handles everything — Claude is the escape hatch, not the default. Access GLM-5, GLM-4.7 Flash, Kimi K2.5, and 30+ models with inference you own. Everclaw connects your OpenClaw agent to the Morpheus P2P network — stake MOR tokens, open sessions, and recycle your stake for persistent, self-sovereign access to AI.
📦 ClawHub:
clawhub install everclaw-inference — clawhub.ai/EverClaw/everclaw-inference

⚠️ Name Collision Warning: A different product ("Everclaw Vault") uses the bare everclaw slug on ClawHub. Always use everclaw-inference — never clawhub install everclaw or clawhub update everclaw. See CLAWHUB_WARNING.md for details.
http://localhost:8082/v1/chat/completions

If you are an AI agent (OpenClaw, Claude, etc.), use the OpenAI-compatible proxy for all Morpheus inference. Do NOT use the bash scripts (session.sh, chat.sh) -- the proxy handles sessions, auth, and model routing automatically.
curl http://127.0.0.1:8083/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer morpheus-local" \
-d '{"model": "kimi-k2.5", "messages": [{"role": "user", "content": "Hello"}], "stream": false}'
curl http://127.0.0.1:8083/v1/models
curl http://127.0.0.1:8083/health
The proxy (port 8083) auto-opens blockchain sessions, auto-renews before expiry, and injects all required auth headers. The bash scripts (session.sh, chat.sh) are available for manual debugging but should not be used for agent integration.
See Section 12 for full proxy documentation.
You need MOR on Base to stake for inference. If you already have ETH, USDC, or USDT on Base:
# Swap ETH for MOR
bash skills/everclaw/scripts/swap.sh eth 0.01
# Swap USDC for MOR
bash skills/everclaw/scripts/swap.sh usdc 50
Or swap manually on a DEX:
Don't have anything on Base yet? Buy ETH on Coinbase, withdraw to Base, then swap to MOR. See references/acquiring-mor.md for the full guide.
How much do you need? MOR is staked, not spent — you get it back. 50-100 MOR is enough for daily use. 0.005 ETH covers months of Base gas fees.
Agent → proxy-router (localhost:8082) → Morpheus P2P Network → Provider → Model
↓
Base Mainnet (MOR staking, session management)
clawhub install everclaw-inference
To update: clawhub update everclaw-inference
⚠️ Use everclaw-inference — not everclaw. The bare everclaw slug belongs to a different, unrelated product on ClawHub.
The safe installer handles fresh installs, updates, and ClawHub collision detection:
# Fresh install
curl -fsSL https://raw.githubusercontent.com/profbernardoj/everclaw/main/scripts/install-everclaw.sh | bash
# Or if you already have the skill:
bash skills/everclaw/scripts/install-everclaw.sh
# Check for updates
bash skills/everclaw/scripts/install-everclaw.sh --check
git clone https://github.com/profbernardoj/everclaw.git ~/.openclaw/workspace/skills/everclaw
To update: cd ~/.openclaw/workspace/skills/everclaw && git pull
After cloning, install the proxy-router:
bash skills/everclaw/scripts/install.sh
This downloads the latest proxy-router release for your OS/arch, extracts it to ~/morpheus/, and creates initial config files.
On macOS this fetches the Darwin build (e.g. mor-launch-darwin-arm64.zip), extracts it to ~/morpheus/, and clears the quarantine attribute with xattr -cr ~/morpheus/. After installation, ~/morpheus/ should contain:
| File | Purpose |
|------|---------|
| proxy-router | The main binary |
| .env | Configuration (RPC, contracts, ports) |
| models-config.json | Maps blockchain model IDs to API types |
| .cookie | Auto-generated auth credentials |
The .env file configures the proxy-router for consumer mode on Base mainnet. Critical variables:
# RPC endpoint — MUST be set or router silently fails
ETH_NODE_ADDRESS=https://base-mainnet.public.blastapi.io
ETH_NODE_CHAIN_ID=8453
# Contract addresses (Base mainnet)
DIAMOND_CONTRACT_ADDRESS=0x6aBE1d282f72B474E54527D93b979A4f64d3030a
MOR_TOKEN_ADDRESS=0x7431aDa8a591C955a994a21710752EF9b882b8e3
# Wallet key — leave blank, inject at runtime via 1Password
WALLET_PRIVATE_KEY=
# Proxy settings
PROXY_ADDRESS=0.0.0.0:3333
PROXY_STORAGE_PATH=./data/badger/
PROXY_STORE_CHAT_CONTEXT=true
PROXY_FORWARD_CHAT_CONTEXT=true
MODELS_CONFIG_PATH=./models-config.json
# Web API
WEB_ADDRESS=0.0.0.0:8082
WEB_PUBLIC_URL=http://localhost:8082
# Auth
AUTH_CONFIG_FILE_PATH=./proxy.conf
COOKIE_FILE_PATH=./.cookie
# Logging
LOG_COLOR=true
LOG_LEVEL_APP=info
LOG_FOLDER_PATH=./data/logs
ENVIRONMENT=production
⚠️ ETH_NODE_ADDRESS MUST be set. The router silently connects to an empty string without it and all blockchain operations fail. Also MODELS_CONFIG_PATH must point to your models-config.json.
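Because a blank ETH_NODE_ADDRESS fails silently, a small preflight check before starting the router can catch it early. This is a sketch, not part of the skill; check_env_var is a hypothetical helper:

```shell
# Hypothetical preflight helper: succeed only if a .env key exists and is non-empty.
check_env_var() {
  local file=$1 key=$2 val
  val=$(grep -E "^${key}=" "$file" | head -n1 | cut -d= -f2-)
  [ -n "$val" ]
}

# Demo against a throwaway .env with one good and one blank variable.
tmp=$(mktemp)
printf 'ETH_NODE_ADDRESS=https://base-mainnet.public.blastapi.io\nMODELS_CONFIG_PATH=\n' > "$tmp"
check_env_var "$tmp" ETH_NODE_ADDRESS && echo "ETH_NODE_ADDRESS set"
check_env_var "$tmp" MODELS_CONFIG_PATH || echo "MODELS_CONFIG_PATH missing or empty"
rm -f "$tmp"
```

Run it against ~/morpheus/.env before launching the router; a missing key is far easier to spot here than in silent blockchain failures later.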
⚠️ This file is required. Without it, chat completions fail with "api adapter not found".
{
"$schema": "./internal/config/models-config-schema.json",
"models": [
{
"modelId": "0xb487ee62516981f533d9164a0a3dcca836b06144506ad47a5c024a7a2a33fc58",
"modelName": "kimi-k2.5:web",
"apiType": "openai",
"apiUrl": ""
},
{
"modelId": "0xbb9e920d94ad3fa2861e1e209d0a969dbe9e1af1cf1ad95c49f76d7b63d32d93",
"modelName": "kimi-k2.5",
"apiType": "openai",
"apiUrl": ""
}
]
}
⚠️ Note the format: The JSON uses a "models" array with "modelId" / "modelName" / "apiType" / "apiUrl" fields. The apiUrl is left empty — the router resolves provider endpoints from the blockchain. Add entries for every model you want to use. See references/models.md for the full list.
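To sanity-check the file, the modelName entries can be listed with plain sed (jq works equally well); list_models is a throwaway sketch assuming the format shown above:

```shell
# Print every "modelName" value from a models-config.json, one per line.
list_models() {
  sed -n 's/.*"modelName": *"\([^"]*\)".*/\1/p' "$1"
}

# Demo against a minimal config in the documented shape.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{ "models": [
  { "modelId": "0xb487...", "modelName": "kimi-k2.5:web", "apiType": "openai", "apiUrl": "" },
  { "modelId": "0xbb9e...", "modelName": "kimi-k2.5", "apiType": "openai", "apiUrl": "" }
] }
EOF
list_models "$tmp"   # prints kimi-k2.5:web and kimi-k2.5
rm -f "$tmp"
```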
The proxy-router needs your wallet private key. Never store it on disk. Inject it at runtime from 1Password:
bash skills/everclaw/scripts/start.sh
Or manually:
cd ~/morpheus
source .env
# Retrieve private key from 1Password (never touches disk)
export WALLET_PRIVATE_KEY=$(
OP_SERVICE_ACCOUNT_TOKEN=$(security find-generic-password -a "YOUR_KEYCHAIN_ACCOUNT" -s "op-service-account-token" -w) \
op item get "YOUR_ITEM_NAME" --vault "YOUR_VAULT_NAME" --fields "Private Key" --reveal
)
export ETH_NODE_ADDRESS
nohup ./proxy-router > ./data/logs/router-stdout.log 2>&1 &
Wait a few seconds, then verify:
COOKIE_PASS=$(cat ~/morpheus/.cookie | cut -d: -f2)
curl -s -u "admin:$COOKIE_PASS" http://localhost:8082/healthcheck
Expected: HTTP 200.
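"Wait a few seconds" can be made deterministic with a retry loop. This is a generic sketch; wait_for is not one of the skill's scripts:

```shell
# Poll-until-ready helper: retry a command up to N times, one second apart.
wait_for() {
  local tries=$1; shift
  local i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" >/dev/null 2>&1 && return 0
    i=$((i+1))
    sleep 1
  done
  return 1
}

# Example: wait up to 10s for the router's healthcheck (assumes COOKIE_PASS is set):
# wait_for 10 curl -sf -u "admin:$COOKIE_PASS" http://localhost:8082/healthcheck
```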
bash skills/everclaw/scripts/stop.sh
Or: pkill -f proxy-router
Before opening sessions, approve the Diamond contract to transfer MOR on your behalf:
COOKIE_PASS=$(cat ~/morpheus/.cookie | cut -d: -f2)
curl -s -u "admin:$COOKIE_PASS" -X POST \
"http://localhost:8082/blockchain/approve?spender=0x6aBE1d282f72B474E54527D93b979A4f64d3030a&amount=1000000000000000000000"
⚠️ The /blockchain/approve endpoint uses query parameters, not a JSON body. The amount is in wei (1000000000000000000 = 1 MOR). Approve a large amount so you don't need to re-approve frequently.
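Since wei amounts overflow 64-bit shell arithmetic, whole-MOR amounts are safer to build by appending 18 zeros. A sketch; mor_to_wei is a hypothetical helper:

```shell
# Convert a whole-number MOR amount to wei (18 decimals) by string
# concatenation — $((amount * 10**18)) would overflow past ~9 MOR-equivalents.
mor_to_wei() {
  printf '%s%018d\n' "$1" 0
}

mor_to_wei 1      # → 1000000000000000000 (1 MOR, as noted above)
mor_to_wei 1000   # → 1000000000000000000000 (the approve amount used above)
```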
Open a session by model ID (not bid ID):
MODEL_ID="0xb487ee62516981f533d9164a0a3dcca836b06144506ad47a5c024a7a2a33fc58"
curl -s -u "admin:$COOKIE_PASS" -X POST \
"http://localhost:8082/blockchain/models/${MODEL_ID}/session" \
-H "Content-Type: application/json" \
-d '{"sessionDuration": 3600}'
⚠️ Always use the model ID endpoint, not the bid ID. Using a bid ID results in "dial tcp: missing address".
The response includes a sessionId (hex string). Save this — you need it for inference.
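Capturing the sessionId for later use can be scripted; this sketch parses a sample payload with sed (jq is fine too), assuming only the sessionId field described above:

```shell
# Sample response shape — only the sessionId field is assumed from the docs.
resp='{"sessionId":"0xdeadbeef"}'

# Extract the hex session ID and keep it for the inference calls below.
SESSION_ID=$(printf '%s' "$resp" | sed -n 's/.*"sessionId":"\([^"]*\)".*/\1/p')
echo "$SESSION_ID"   # → 0xdeadbeef
```

In practice, pipe the curl output from the session-open call straight into the extraction instead of a literal string.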
# Open a 1-hour session for kimi-k2.5:web
bash skills/everclaw/scripts/session.sh open kimi-k2.5:web 3600
# List active sessions
bash skills/everclaw/scripts/session.sh list
# Close a session
bash skills/everclaw/scripts/session.sh close 0xSESSION_ID_HERE
session_id and model_id are HTTP headers, not JSON body fields. This is the single most common mistake.
CORRECT:
curl -s -u "admin:$COOKIE_PASS" "http://localhost:8082/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "session_id: 0xYOUR_SESSION_ID" \
-H "model_id: 0xYOUR_MODEL_ID" \
-d '{
"model": "kimi-k2.5:web",
"messages": [{"role": "user", "content": "Hello, world!"}],
"stream": false
}'
WRONG (will fail with "session not found"):
# DON'T DO THIS
curl -s ... -d '{
"model": "kimi-k2.5:web",
"session_id": "0x...", # WRONG — not a body field
"model_id": "0x...", # WRONG — not a body field
"messages": [...]
}'
bash skills/everclaw/scripts/chat.sh kimi-k2.5:web "What is the meaning of life?"
Set "stream": true in the request body. The response will be Server-Sent Events (SSE).
Close a session to reclaim your staked MOR:
curl -s -u "admin:$COOKIE_PASS" -X POST \
"http://localhost:8082/blockchain/sessions/0xSESSION_ID/close"
Or use the script:
bash skills/everclaw/scripts/session.sh close 0xSESSION_ID
⚠️ MOR staked in a session is returned when the session closes. Close sessions you're not using to free up MOR for new sessions.
⚠️ Sessions are NOT persisted across router restarts. If you restart the proxy-router, you must re-open sessions. The blockchain still has the session, but the router's in-memory state is lost.
# Check balance (MOR + ETH)
bash skills/everclaw/scripts/balance.sh
# List sessions
bash skills/everclaw/scripts/session.sh list
After restarting the router:
# Wait for health check
sleep 5
# Re-open sessions for models you need
bash skills/everclaw/scripts/session.sh open kimi-k2.5:web 3600
COOKIE_PASS=$(cat ~/morpheus/.cookie | cut -d: -f2)
# MOR and ETH balance
curl -s -u "admin:$COOKIE_PASS" http://localhost:8082/blockchain/balance | jq .
# Active sessions
curl -s -u "admin:$COOKIE_PASS" http://localhost:8082/blockchain/sessions | jq .
# Available models
curl -s -u "admin:$COOKIE_PASS" http://localhost:8082/blockchain/models | jq .
See references/troubleshooting.md for a complete guide. Quick hits:
| Error | Fix |
|-------|-----|
| session not found | Use session_id/model_id as HTTP headers, not body fields |
| dial tcp: missing address | Open session by model ID, not bid ID |
| api adapter not found | Add the model to models-config.json |
| ERC20: transfer amount exceeds balance | Close old sessions to free staked MOR |
| Sessions gone after restart | Normal — re-open sessions after restart |
| MorpheusUI conflicts | Don't run MorpheusUI and headless router simultaneously |
| Contract | Address |
|----------|---------|
| Diamond | 0x6aBE1d282f72B474E54527D93b979A4f64d3030a |
| MOR Token | 0x7431aDa8a591C955a994a21710752EF9b882b8e3 |
| Action | Command |
|--------|---------|
| Install | bash skills/everclaw/scripts/install.sh |
| Start | bash skills/everclaw/scripts/start.sh |
| Stop | bash skills/everclaw/scripts/stop.sh |
| Swap ETH→MOR | bash skills/everclaw/scripts/swap.sh eth 0.01 |
| Swap USDC→MOR | bash skills/everclaw/scripts/swap.sh usdc 50 |
| Open session | bash skills/everclaw/scripts/session.sh open <model> [duration] |
| Close session | bash skills/everclaw/scripts/session.sh close <session_id> |
| List sessions | bash skills/everclaw/scripts/session.sh list |
| Send prompt | bash skills/everclaw/scripts/chat.sh <model> "prompt" |
| Check balance | bash skills/everclaw/scripts/balance.sh |
| Diagnose | bash skills/everclaw/scripts/diagnose.sh |
| Diagnose (config only) | bash skills/everclaw/scripts/diagnose.sh --config |
| Diagnose (quick) | bash skills/everclaw/scripts/diagnose.sh --quick |
Everclaw v0.4 includes a self-contained wallet manager that eliminates all external account dependencies. No 1Password, no Foundry, no Safe Wallet — just macOS Keychain and Node.js (already bundled with OpenClaw).
node skills/everclaw/scripts/everclaw-wallet.mjs setup
This generates a new Ethereum wallet and stores the private key in your macOS Keychain (encrypted at rest, protected by your login password / Touch ID).
node skills/everclaw/scripts/everclaw-wallet.mjs import-key 0xYOUR_PRIVATE_KEY
node skills/everclaw/scripts/everclaw-wallet.mjs balance
Shows ETH, MOR, USDC balances and MOR allowance for the Diamond contract.
# Swap 0.05 ETH for MOR
node skills/everclaw/scripts/everclaw-wallet.mjs swap eth 0.05
# Swap 50 USDC for MOR
node skills/everclaw/scripts/everclaw-wallet.mjs swap usdc 50
Executes onchain swaps via Uniswap V3 on Base. No external tools required — uses viem (bundled with OpenClaw).
node skills/everclaw/scripts/everclaw-wallet.mjs approve
Approves the Morpheus Diamond contract to use your MOR for session staking.
| Command | Description |
|---------|-------------|
| setup | Generate wallet, store in Keychain |
| address | Show wallet address |
| balance | Show ETH, MOR, USDC balances |
| swap eth <amount> | Swap ETH → MOR via Uniswap V3 |
| swap usdc <amount> | Swap USDC → MOR via Uniswap V3 |
| approve [amount] | Approve MOR for Morpheus staking |
| export-key | Print private key (use with caution) |
| import-key <0xkey> | Import existing private key |
The Morpheus proxy-router requires custom auth (Basic auth via .cookie) and custom HTTP headers (session_id, model_id) that standard OpenAI clients don't support. Everclaw includes a lightweight proxy that bridges this gap.
OpenClaw/any client → morpheus-proxy (port 8083) → proxy-router (port 8082) → Morpheus P2P → Provider
The proxy accepts standard /v1/chat/completions requests, injects the session_id/model_id headers automatically, and exposes /health, /v1/models, and /v1/chat/completions. Install it with:

bash skills/everclaw/scripts/install-proxy.sh
This installs:
- morpheus-proxy.mjs → ~/morpheus/proxy/
- gateway-guardian.sh → ~/.openclaw/workspace/scripts/

Environment variables (all optional, sane defaults):
| Variable | Default | Description |
|----------|---------|-------------|
| MORPHEUS_PROXY_PORT | 8083 | Port the proxy listens on |
| MORPHEUS_ROUTER_URL | http://localhost:8082 | Proxy-router URL |
| MORPHEUS_COOKIE_PATH | ~/morpheus/.cookie | Path to auth cookie |
| MORPHEUS_SESSION_DURATION | 604800 (7 days) | Session duration in seconds |
| MORPHEUS_RENEW_BEFORE | 3600 (1 hour) | Renew session this many seconds before expiry |
| MORPHEUS_PROXY_API_KEY | morpheus-local | Bearer token for proxy auth |
Sessions stake MOR tokens for their duration. Longer sessions = more MOR locked but fewer blockchain transactions:
| Duration | MOR Staked (approx) | Transactions |
|----------|--------------------:|:-------------|
| 1 hour | ~0.011 MOR | Every hour |
| 1 day | ~0.274 MOR | Daily |
| 7 days | ~1.9 MOR | Weekly |
MOR is returned when the session closes or expires. The proxy auto-renews before expiry, so you get continuous inference with minimal staking overhead.
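The per-duration figures above are roughly linear in the hourly rate; a quick awk check (rates approximate, taken from the table):

```shell
# Multiply the ~hourly stake rate out to a day and a week; compare with the table.
awk 'BEGIN {
  r = 0.011                                  # ~MOR staked per 1-hour session
  printf "1 day:  %.3f MOR\n", r * 24        # close to the ~0.274 table entry
  printf "7 days: %.3f MOR\n", r * 24 * 7    # close to the ~1.9 table entry
}'
```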
curl http://127.0.0.1:8083/health
curl http://127.0.0.1:8083/v1/models
curl http://127.0.0.1:8083/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer morpheus-local" \
-d '{
"model": "kimi-k2.5",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": false
}'
- kimi-k2.5 (non-web) is the most reliable model — recommended as primary fallback
- kimi-k2.5:web (web search variant) tends to timeout on P2P routing — avoid for fallback use

v0.5 adds three critical improvements to the proxy that prevent prolonged outages caused by cooldown cascades — where both primary and fallback providers become unavailable simultaneously.
When a primary provider (e.g., Venice) returns a billing error, OpenClaw's failover engine marks that provider as "in cooldown." If the Morpheus proxy also returns errors that OpenClaw misclassifies as billing errors, both providers enter cooldown and the agent goes completely offline — sometimes for 6+ hours.
The proxy now returns errors in the exact format OpenAI uses, with proper type and code fields:
{
"error": {
"message": "Morpheus session unavailable: ...",
"type": "server_error",
"code": "morpheus_session_error",
"param": null
}
}
Key distinction: All Morpheus infrastructure errors are typed as "server_error" — never "billing" or "rate_limit_error". This ensures OpenClaw treats them as transient failures and retries appropriately, instead of putting the provider into extended cooldown.
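As a sketch of that contract (morpheus_error_json is illustrative, not the proxy's actual code), every infrastructure failure maps to the same server_error envelope:

```shell
# Emit an OpenAI-shaped error body with a fixed "server_error" type, so the
# caller's failover logic never mistakes an infra failure for a billing error.
morpheus_error_json() {
  local msg=$1 code=$2
  printf '{"error":{"message":"%s","type":"server_error","code":"%s","param":null}}\n' "$msg" "$code"
}

morpheus_error_json "Morpheus session unavailable: router restarting" morpheus_session_error
```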
Error codes returned by the proxy:
| Code | Meaning |
|------|---------|
| morpheus_session_error | Failed to open or refresh a blockchain session |
| morpheus_inference_error | Provider returned an error during inference |
| morpheus_upstream_error | Connection error to the proxy-router |
| timeout | Inference request exceeded the time limit |
| model_not_found | Requested model not in MODEL_MAP |
When the proxy-router returns a session-related error (expired, invalid, not found, closed), the proxy now:
This handles the common case where the proxy-router restarts and loses its in-memory session state, or when a long-running session expires mid-request.
Configure OpenClaw with multiple fallback models across providers:
{
"agents": {
"defaults": {
"model": {
"primary": "venice/claude-opus-4-6",
"fallbacks": [
"venice/claude-opus-45", // Try different Venice model first
"venice/kimi-k2-5", // Try yet another Venice model
"morpheus/kimi-k2.5" // Last resort: decentralized inference
]
}
}
}
}
This way, if the primary model has billing issues, OpenClaw tries other models on the same provider (which may have separate rate limits) before falling back to Morpheus. The cascade runs through each fallback in order, with Morpheus as the last resort.
Configure OpenClaw to use Morpheus as a fallback provider so your agent keeps running when primary API credits run out.
Add to your openclaw.json via config patch or manual edit:
{
"models": {
"providers": {
"morpheus": {
"baseUrl": "http://127.0.0.1:8083/v1",
"apiKey": "morpheus-local",
"api": "openai-completions",
"models": [
{
"id": "kimi-k2.5",
"name": "Kimi K2.5 (via Morpheus)",
"reasoning": true,
"input": ["text"],
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
"contextWindow": 131072,
"maxTokens": 8192
},
{
"id": "kimi-k2-thinking",
"name": "Kimi K2 Thinking (via Morpheus)",
"reasoning": true,
"input": ["text"],
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
"contextWindow": 131072,
"maxTokens": 8192
},
{
"id": "glm-4.7-flash",
"name": "GLM 4.7 Flash (via Morpheus)",
"reasoning": false,
"input": ["text"],
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
"contextWindow": 131072,
"maxTokens": 8192
}
]
}
}
}
}
Configure a multi-tier fallback chain (recommended since v0.5):
{
"agents": {
"defaults": {
"model": {
"primary": "venice/claude-opus-4-6",
"fallbacks": [
"venice/claude-opus-45", // Different model, same provider
"venice/kimi-k2-5", // Open-source model, same provider
"morpheus/kimi-k2.5" // Decentralized fallback
]
},
"models": {
"venice/claude-opus-45": { "alias": "Claude Opus 4.5" },
"venice/kimi-k2-5": { "alias": "Kimi K2.5" },
"morpheus/kimi-k2.5": { "alias": "Kimi K2.5 (Morpheus)" },
"morpheus/kimi-k2-thinking": { "alias": "Kimi K2 Thinking (Morpheus)" },
"morpheus/glm-4.7-flash": { "alias": "GLM 4.7 Flash (Morpheus)" }
}
}
}
}
⚠️ Why multi-tier? A single fallback creates a single point of failure. If both the primary provider and the single fallback enter cooldown simultaneously (e.g., billing error triggers cooldown on both), your agent goes offline. Multiple fallback tiers across different models and providers ensure at least one path remains available.
OpenClaw supports multiple API keys per provider with automatic rotation. When one key's credits run out (billing error), OpenClaw disables that key only and rotates to the next one — same model, fresh credits. This is the single most effective way to prevent downtime.
Add to ~/.openclaw/agents/main/agent/auth-profiles.json:
{
"venice:default": {
"type": "api_key",
"provider": "venice",
"key": "VENICE-INFERENCE-KEY-YOUR_KEY_HERE"
},
"morpheus:default": {
"type": "api_key",
"provider": "morpheus",
"key": "morpheus-local"
}
}
If you have multiple Venice API keys (e.g., from different accounts or plans), add them all as separate profiles. Order them from most credits to least:
auth-profiles.json:
{
"version": 1,
"profiles": {
"venice:key1": {
"type": "api_key",
"provider": "venice",
"key": "VENICE-INFERENCE-KEY-YOUR_PRIMARY_KEY"
},
"venice:key2": {
"type": "api_key",
"provider": "venice",
"key": "VENICE-INFERENCE-KEY-YOUR_SECOND_KEY"
},
"venice:key3": {
"type": "api_key",
"provider": "venice",
"key": "VENICE-INFERENCE-KEY-YOUR_THIRD_KEY"
},
"morpheus:default": {
"type": "api_key",
"provider": "morpheus",
"key": "morpheus-local"
}
}
}
openclaw.json — register the profiles and set explicit rotation order:
{
"auth": {
"profiles": {
"venice:key1": { "provider": "venice", "mode": "api_key" },
"venice:key2": { "provider": "venice", "mode": "api_key" },
"venice:key3": { "provider": "venice", "mode": "api_key" },
"morpheus:default": { "provider": "morpheus", "mode": "api_key" }
},
"order": {
"venice": ["venice:key1", "venice:key2", "venice:key3"]
}
}
}
⚠️ auth.order is critical. Without it, OpenClaw uses round-robin (oldest-used first), which may not match your credit balances. With an explicit order, keys are tried in the exact sequence you specify — highest credits first.
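The ordered-rotation behavior can be pictured with a toy sketch (not OpenClaw's implementation): keys are tried in auth.order, skipping any that are disabled:

```shell
# Toy rotation: the first key in the configured order that isn't disabled wins.
keys="venice:key1 venice:key2 venice:key3"
disabled="venice:key1"          # e.g. billing-disabled after a 402

next_key() {
  for k in $keys; do
    case " $disabled " in
      *" $k "*) continue ;;     # skip disabled keys
    esac
    echo "$k"
    return 0
  done
  return 1                      # all keys exhausted → fall through to the next provider
}

next_key   # → venice:key2
```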
OpenClaw's auth engine handles rotation automatically:
On a billing error it disables the exhausted key and moves to the next one in auth.order. Same model, same provider — just fresh credits.

Venice uses "DIEM" as its internal credit unit (1 DIEM ≈ $1 USD). Each API key has its own DIEM balance. Credits appear to reset daily. Expensive models drain credits faster:
| Model | Input Cost | Output Cost | ~Messages per 10 DIEM |
|-------|-----------|-------------|----------------------|
| Claude Opus 4.6 | 6 DIEM/M tokens | 30 DIEM/M tokens | ~5-10 |
| Claude Opus 4.5 | 6 DIEM/M tokens | 30 DIEM/M tokens | ~5-10 |
| Kimi K2.5 | 0.75 DIEM/M tokens | 3.75 DIEM/M tokens | ~50-100 |
| GLM 4.7 Flash | 0.125 DIEM/M tokens | 0.5 DIEM/M tokens | ~500+ |
Tip: With multiple keys, the agent can stay on Claude Opus across key rotations. Without multi-key, it would fall to cheaper models or Morpheus after one key's credits run out.
The complete failover chain with multi-key rotation:
venice/claude-opus-45 (all keys again) → venice/kimi-k2-5 (all keys) → morpheus/kimi-k2.5

Example with 6 keys (246 DIEM total):
venice:key1 (98 DIEM) → venice:key2 (50 DIEM) → venice:key3 (40 DIEM) →
venice:key4 (26 DIEM) → venice:key5 (20 DIEM) → venice:key6 (12 DIEM) →
morpheus/kimi-k2.5 (owned, staked MOR) → mor-gateway/kimi-k2.5 (community gateway)
v0.5 improvement: The Morpheus proxy returns "server_error" type errors (not billing errors), so OpenClaw won't put the Morpheus provider into extended cooldown due to transient infrastructure issues. If a Morpheus session expires mid-request, the proxy automatically opens a fresh session and retries once.
OpenClaw's billing error detection has pattern gaps with Venice-specific error messages. Two known gaps:
1. Venice returns "Insufficient USD or Diem balance to complete request", but OpenClaw checks for "insufficient balance" (adjacent words). Since "USD or Diem" separates "insufficient" from "balance", the pattern fails.
2. Venice returns "API key DIEM spend limit exceeded. Your account may still have DIEM balance, but this API key has reached its configured DIEM spending limit." — OpenClaw has no pattern for "spend limit" at all.

Both get classified as "unknown" instead of "billing", the key gets a 60-second cooldown instead of a billing disable, and the same exhausted key gets retried in a loop.
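The first gap is easy to reproduce with grep (patterns illustrative of the adjacent-word matching described above):

```shell
venice_msg='Insufficient USD or Diem balance to complete request'

# Adjacent-word pattern (OpenClaw-style) — does not match the real message:
printf '%s' "$venice_msg" | grep -qi 'insufficient balance' || echo 'pattern gap: no match'

# A pattern tolerating words in between would have matched:
printf '%s' "$venice_msg" | grep -qiE 'insufficient .*balance' && echo 'loose pattern: match'
```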
Two scripts fix this at the skill level:
Proactive monitor (venice-key-monitor.sh): periodically probes every Venice API key's DIEM/USD balance via a cheap GLM-4.7-Flash inference call (costs ~0.0001 DIEM). Reads the x-venice-balance-diem or x-venice-balance-usd response header and disables depleted keys by writing disabledUntil + disabledReason: "billing" directly to auth-profiles.json.
# Check all keys and disable depleted ones
bash skills/everclaw/scripts/venice-key-monitor.sh
# Report balances without making changes
bash skills/everclaw/scripts/venice-key-monitor.sh --status
# Custom depletion threshold (default: 1 DIEM)
bash skills/everclaw/scripts/venice-key-monitor.sh --threshold 5
Cron: Runs every 2 hours. Pre-empts the problem before the agent ever tries an empty key.
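The monitor's core decision can be sketched as follows, assuming a plain header map from the probe response; checkKey and its return shape are illustrative, not the script's actual implementation:

```javascript
// Sketch: read Venice's balance header and flag depleted keys so they can
// be billing-disabled in auth-profiles.json before OpenClaw retries them.
function checkKey(headers, thresholdDiem = 1) {
  const raw = headers["x-venice-balance-diem"] ?? headers["x-venice-balance-usd"];
  const balance = Number.parseFloat(raw);
  if (!Number.isFinite(balance)) return { status: "unknown" };
  return balance < thresholdDiem
    ? { status: "depleted", balance, disabledReason: "billing" }
    : { status: "healthy", balance };
}

checkKey({ "x-venice-balance-diem": "0.4" });
// → { status: "depleted", balance: 0.4, disabledReason: "billing" }
```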
Reactive watchdog (venice-402-watchdog.sh): monitors auth-profiles.json for Venice keys with rapid failures that aren't properly billing-disabled (the telltale sign of OpenClaw's pattern gap). When detected, it immediately disables the offending key and identifies the next healthy key.
# One-shot scan (check recent failures)
bash skills/everclaw/scripts/venice-402-watchdog.sh
# Run as daemon (continuous monitoring every 30s)
bash skills/everclaw/scripts/venice-402-watchdog.sh --daemon
Cron: Runs every 5 minutes. Catches billing errors in near-real-time that the proactive monitor might miss between its 2-hour checks.
| Venice Error | OpenClaw Pattern | Match? |
|-------------|-----------------|--------|
| Insufficient USD or Diem balance to complete request | "insufficient balance" | ❌ No — words not adjacent |
| API key DIEM spend limit exceeded | (none) | ❌ No pattern exists |
| 402 Payment Required | /status.*402/ | ✅ Only if status code preserved |
| Insufficient credits | "insufficient credits" | ✅ |
The watchdog catches the first two patterns (the most common Venice billing errors) that OpenClaw's text matching misses.
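A minimal demonstration of the gap, plus a tolerant regex of the kind the watchdog effectively applies (the exact patterns in the script may differ):

```javascript
// The substring check fails because "USD or Diem" sits between the words.
const msg = "Insufficient USD or Diem balance to complete request";

const naive = msg.toLowerCase().includes("insufficient balance"); // false
// Allowing intervening words makes the same intent match:
const fixed = /insufficient\b.*\bbalance/i.test(msg);             // true
// And the second gap needs its own pattern entirely:
const spendLimit = /spend limit exceeded/i.test(
  "API key DIEM spend limit exceeded"
);                                                                // true
```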
| File | Purpose |
|------|---------|
| ~/.openclaw/logs/venice-key-balances.json | Last balance check results per key |
| ~/.openclaw/logs/venice-402-state.json | Last watchdog action and rotation state |
| ~/.openclaw/logs/venice-key-monitor.log | Monitor activity log |
| ~/.openclaw/logs/venice-402-watchdog.log | Watchdog activity log |
A self-healing, billing-aware watchdog that monitors the OpenClaw gateway and its ability to run inference. Runs every 2 minutes via launchd.
| Version | What it checked | Fatal flaw |
|---------|----------------|------------|
| v1 | HTTP dashboard alive | Providers in cooldown = brain-dead but HTTP 200 |
| v2 | Raw provider URLs | Provider APIs always return 200 regardless of internal state |
| v3 | Through-OpenClaw inference probe | Billing exhaustion → restart → instant re-disable = dead loop. Also: set -e + pkill self-kill = silent no-op restarts |
| v4 | Through-OpenClaw + billing classification + credit monitoring | openclaw agent injected 71K workspace prompt into every probe |
| v5 | Direct curl inference probes + billing classification + credit monitoring | Current version |
Root cause: openclaw agent injected the full 71K workspace system prompt into every health probe. This caused mor-gateway/glm-5 to timeout at 60s (takes ~37s just for the prompt). Worse, failures were delivered to Signal as normal agent replies — spamming the user with error messages.
Fix: Direct curl to gateway's LiteLLM proxy with a tiny prompt (~50 chars). Uses glm-4.7-flash (fast, lightweight) instead of glm-5. No agent session = no Signal delivery on failure. Errors stay in logs only.
v5 improvements:
- Error classification: billing vs transient vs timeout. Billing errors trigger backoff + notification instead of useless restarts.
- Replaced set -euo pipefail with set -uo pipefail + an explicit ERR trap. Restart failures are now logged instead of silently exiting.
- Credit monitoring: reads the x-venice-balance-diem response header every 10 minutes and warns when the balance drops below 15 DIEM.

Error classification map:
- billing → 402 / "Insufficient DIEM/USD/balance" → don't restart; enter billing backoff and notify owner
- transient → auth cooldown without billing keywords → restart (clears cooldown)
- timeout → probe timed out → restart
- unknown → restart (safe default)

Restart escalation: openclaw gateway restart (graceful, resets cooldown state) → launchctl kickstart -k → curl -fsSL https://clawd.bot/install.sh | bash

Pair with reduced billing backoff in openclaw.json to minimize downtime:
{
"auth": {
"cooldowns": {
"billingBackoffHoursByProvider": { "venice": 1 },
"billingMaxHours": 6,
"failureWindowHours": 12
}
}
}
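The classification ladder can be sketched as follows; the keyword sets are illustrative, and the real gateway-guardian.sh may match differently:

```javascript
// Sketch: map a probe error to the guardian's billing/transient/timeout
// buckets, each with the action the ladder above prescribes.
function classifyProbeError(err) {
  const text = err.toLowerCase();
  if (/\b402\b|insufficient (diem|usd|balance)|spend limit/.test(text))
    return { kind: "billing", action: "backoff+notify" };
  if (/timed? ?out/.test(text))
    return { kind: "timeout", action: "restart" };
  if (/cooldown/.test(text))
    return { kind: "transient", action: "restart" };
  return { kind: "unknown", action: "restart" }; // safe default
}
```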
Included in install-proxy.sh, or manually:
cp skills/everclaw/scripts/gateway-guardian.sh ~/.openclaw/workspace/scripts/
chmod +x ~/.openclaw/workspace/scripts/gateway-guardian.sh
# Install launchd plist (macOS)
# See templates/ai.openclaw.guardian.plist
⚠️ Important: The launchd plist should include OPENCLAW_GATEWAY_TOKEN in its environment variables.
bash ~/.openclaw/workspace/scripts/gateway-guardian.sh --verbose
tail -f ~/.openclaw/logs/guardian.log
| Variable | Default | Description |
|----------|---------|-------------|
| GATEWAY_PORT | 18789 | Gateway port to probe |
| PROBE_TIMEOUT | 8 | HTTP timeout in seconds |
| INFERENCE_TIMEOUT | 45 | Agent probe timeout |
| FAIL_THRESHOLD | 2 | HTTP failures before restart |
| INFERENCE_FAIL_THRESHOLD | 3 | Inference failures before escalation (~6 min) |
| BILLING_BACKOFF_INTERVAL | 1800 | Seconds between probes when billing-dead (30 min) |
| CREDIT_CHECK_INTERVAL | 600 | Seconds between Venice DIEM balance checks (10 min) |
| CREDIT_WARN_THRESHOLD | 15 | DIEM balance warning threshold |
| MAX_STUCK_DURATION_SEC | 1800 | Circuit breaker: kill sub-agents stuck >30 min |
| STUCK_CHECK_INTERVAL | 300 | Circuit breaker check interval (5 min) |
| OWNER_SIGNAL | +1XXXXXXXXXX | Signal number for notifications |
| SIGNAL_ACCOUNT | +15129488566 | Signal sender account |
| File | Purpose |
|------|---------|
| ~/.openclaw/logs/guardian.state | HTTP failure counter |
| ~/.openclaw/logs/guardian-inference.state | Inference failure counter |
| ~/.openclaw/logs/guardian-circuit-breaker.state | Circuit breaker timestamp |
| ~/.openclaw/logs/guardian-billing.state | Billing exhaustion start timestamp (0 = healthy) |
| ~/.openclaw/logs/guardian-billing-notified.state | Whether owner was notified (0/1) |
| ~/.openclaw/logs/guardian-credit-check.state | Last credit check timestamp |
| ~/.openclaw/logs/guardian.log | Guardian activity log |
OpenClaw stores every conversation as a .jsonl file in ~/.openclaw/agents/main/sessions/. Over time, these accumulate — and when the dashboard loads, it parses all session history into the DOM. At ~17MB (134+ sessions), browsers hit "Page Unresponsive" because the renderer chokes on thousands of chat message elements.
The bottleneck isn't raw memory — Chrome gives each tab 1.4-4GB of V8 heap. The real limit is DOM rendering performance. Chrome Lighthouse warns at 800 DOM nodes and errors at 1,400. A hundred sessions with tool calls, code blocks, and long conversations easily generate 5,000+ DOM elements. The browser's layout engine can't keep up.
| Sessions Dir Size | Dashboard Behavior |
|------------------|--------------------|
| < 5 MB | ✅ Loads instantly |
| 5-10 MB | ⚡ Slight delay, usable |
| 10-15 MB | ⚠️ Sluggish, noticeable lag |
| 15-20 MB | 🔴 "Page Unresponsive" likely |
| 20+ MB | 💀 Dashboard won't load |
Instead of archiving on a fixed schedule (which may fire too early or too late depending on usage), the session archiver monitors the actual size of the sessions directory and only moves files when they exceed a threshold.
Default threshold: 10MB — provides good headroom before hitting the ~15MB danger zone, without firing unnecessarily on light usage days.
# Archive if over threshold (default 10MB)
bash skills/everclaw/scripts/session-archive.sh
# Check size without archiving
bash skills/everclaw/scripts/session-archive.sh --check
# Force archive regardless of size
bash skills/everclaw/scripts/session-archive.sh --force
# Detailed output
bash skills/everclaw/scripts/session-archive.sh --verbose
The archiver never moves:
- sessions.json (the index file)
- guardian-health-probe.jsonl
- the most recent sessions (KEEP_RECENT)

Everything else gets moved to sessions/archive/ — not deleted. You can always move files back if needed.
| Variable | Default | Description |
|----------|---------|-------------|
| ARCHIVE_THRESHOLD_MB | 10 | Trigger threshold in MB |
| SESSIONS_DIR | ~/.openclaw/agents/main/sessions | Sessions directory path |
| KEEP_RECENT | 5 | Number of recent sessions to always keep |
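The selection logic described above can be sketched as follows; the function name and file shape are hypothetical, not the script's actual code:

```javascript
// Sketch: no-op under threshold; otherwise archive everything except the
// index, the guardian probe session, and the newest KEEP_RECENT files.
function selectForArchive(files, totalMB, { thresholdMB = 10, keepRecent = 5 } = {}) {
  if (totalMB <= thresholdMB) return [];           // under threshold: no-op
  const protectedNames = new Set(["sessions.json", "guardian-health-probe.jsonl"]);
  return files
    .filter(f => !protectedNames.has(f.name))      // never touch protected files
    .sort((a, b) => b.mtime - a.mtime)             // newest first
    .slice(keepRecent)                             // keep the newest N
    .map(f => f.name);                             // archive the rest
}
```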
Set up a cron job that runs the archiver periodically. The script is a no-op when under threshold, so it's safe to run frequently:
{
"name": "Smart session archiver",
"schedule": { "kind": "cron", "expr": "0 */6 * * *", "tz": "America/Chicago" },
"sessionTarget": "isolated",
"payload": {
"kind": "agentTurn",
"model": "morpheus/kimi-k2.5",
"message": "Run the smart session archiver: bash skills/everclaw/scripts/session-archive.sh --verbose. Report the results. If sessions were archived, mention the before/after size.",
"timeoutSeconds": 60
}
}
Recommended: every 6 hours. Frequent enough to catch growth spurts, cheap enough to run on the LIGHT tier since it's a no-op most of the time.
The script outputs a JSON summary for programmatic consumption:
{"archived":42,"freedMB":8.2,"beforeMB":12.4,"afterMB":4.2,"threshold":10}
Based on real-world testing: 134 sessions totaling 17MB caused "Page Unresponsive" in Chrome, Safari, and Brave on macOS. The dashboard uses a standard web renderer that parses all session JSONL into DOM elements — there's no virtualization or lazy loading. 10MB gives ~50% headroom before the ~15-20MB danger zone where most browsers start struggling.
Everclaw v0.7 includes an x402 payment client that lets your agent make USDC payments to any x402-enabled endpoint. The x402 protocol is an HTTP-native payment standard: when a server returns HTTP 402, your agent automatically signs a USDC payment and retries.
Agent → request → Server returns 402 + PAYMENT-REQUIRED header
Agent → parse requirements → sign EIP-712 payment → retry with PAYMENT-SIGNATURE header
Server → verify signature via facilitator → settle USDC → return resource
# Make a request to an x402-protected endpoint
node scripts/x402-client.mjs GET https://api.example.com/data
# Dry-run: see what would be paid without signing
node scripts/x402-client.mjs --dry-run GET https://api.example.com/data
# Set max payment per request
node scripts/x402-client.mjs --max-amount 0.50 GET https://api.example.com/data
# POST with body
node scripts/x402-client.mjs POST https://api.example.com/task '{"prompt":"hello"}'
# Check daily spending
node scripts/x402-client.mjs --budget
import { makePayableRequest, createX402Client } from './scripts/x402-client.mjs';
// One-shot request
const result = await makePayableRequest("https://api.example.com/data");
// result.paid → true if 402 was handled
// result.amount → "$0.010000" (USDC)
// result.body → response content
// Reusable client with budget limits
const client = createX402Client({
maxPerRequest: 0.50, // $0.50 USDC max per request
dailyLimit: 5.00, // $5.00 USDC per day
dryRun: false,
});
const res = await client.get("https://agent-api.example.com/query?q=weather");
const data = await client.post("https://agent-api.example.com/task", { prompt: "hello" });
// Check spending
console.log(client.budget());
// { date: "2026-02-11", spent: "$0.520000", remaining: "$4.480000", limit: "$5.000000", transactions: 3 }
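The daily-limit bookkeeping can be sketched like this; it is a simplified model of the client's budget tracking, not its actual code:

```javascript
// Sketch: track USDC spend against a daily limit and refuse payments
// that would exceed it (amounts in dollars, as in client.budget()).
function makeBudget(dailyLimit) {
  let spent = 0, transactions = 0;
  return {
    trySpend(amount) {
      if (spent + amount > dailyLimit) return false; // refuse over-limit
      spent += amount;
      transactions += 1;
      return true;
    },
    budget: () => ({
      spent: `$${spent.toFixed(6)}`,
      remaining: `$${(dailyLimit - spent).toFixed(6)}`,
      transactions,
    }),
  };
}

const b = makeBudget(5.0);
b.trySpend(0.52);
// b.budget() → { spent: "$0.520000", remaining: "$4.480000", transactions: 1 }
```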
How a paid request works:
- The server responds HTTP 402 with a PAYMENT-REQUIRED header containing JSON payment requirements
- The client signs a TransferWithAuthorization (EIP-3009) for USDC on Base using the agent's wallet
- The retry carries a PAYMENT-SIGNATURE header containing the signed payment payload
- Spending is tracked in x402-budget.json (amounts only, no keys)

| Item | Address |
|------|---------|
| USDC (Base) | 0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913 |
| Coinbase Facilitator | https://api.cdp.coinbase.com/platform/v2/x402 |
| Base Chain ID | 8453 (CAIP-2: eip155:8453) |
The ERC-8004 protocol provides on-chain registries for agent discovery and trust. Everclaw v0.7 includes a reader that queries the Identity and Reputation registries on Base mainnet.
ERC-8004 defines three registries:
ERC-8004 defines three registries: Identity, Reputation, and Validation. In the Identity Registry, each agent is an ERC-721 token with a tokenURI pointing to a registration file containing name, description, services/endpoints, x402 support, and trust signals. Agents are discoverable, portable (transferable NFTs), and verifiable across organizational boundaries.
# Look up an agent by ID
node scripts/agent-registry.mjs lookup 1
# Get reputation data
node scripts/agent-registry.mjs reputation 1
# Full discovery (identity + registration file + reputation)
node scripts/agent-registry.mjs discover 1
# List agents in a range
node scripts/agent-registry.mjs list 1 10
# Get total registered agents
node scripts/agent-registry.mjs total
import { lookupAgent, getReputation, discoverAgent, totalAgents, listAgents } from './scripts/agent-registry.mjs';
// Look up identity
const agent = await lookupAgent(1);
// {
// agentId: 1,
// owner: "0x89E9...",
// uri: "data:application/json;base64,...",
// wallet: "0x89E9...",
// registration: {
// name: "ClawNews",
// description: "Hacker News for AI agents...",
// services: [{ name: "web", endpoint: "https://clawnews.io" }, ...],
// x402Support: false,
// active: true,
// supportedTrust: ["reputation"]
// }
// }
// Get reputation
const rep = await getReputation(1);
// {
// agentId: 1,
// clients: ["0x3975...", "0x718B..."],
// feedbackCount: 2,
// summary: { count: 2, value: "100", decimals: 0 },
// feedback: [{ client: "0x3975...", value: "100", tag1: "tip", tag2: "agent" }, ...]
// }
// Full discovery
const full = await discoverAgent(1);
// Combines identity, registration file, services, and reputation into one object
Agent registration files (resolved from tokenURI) follow the ERC-8004 standard:
{
"type": "https://eips.ethereum.org/EIPS/eip-8004#registration-v1",
"name": "MyAgent",
"description": "What the agent does",
"image": "https://example.com/logo.png",
"services": [
{ "name": "web", "endpoint": "https://myagent.com" },
{ "name": "A2A", "endpoint": "https://agent.example/.well-known/agent-card.json", "version": "0.3.0" },
{ "name": "MCP", "endpoint": "https://mcp.agent.eth/", "version": "2025-06-18" }
],
"x402Support": true,
"active": true,
"supportedTrust": ["reputation", "crypto-economic"]
}
The reader handles all URI types: data: URIs (base64-encoded JSON stored on-chain), ipfs:// URIs (via public IPFS gateway), and https:// URIs.
| Registry | Address |
|----------|---------|
| Identity | 0x8004A169FB4a3325136EB29fA0ceB6D2e539a432 |
| Reputation | 0x8004BAa17C55a88189AE136b182e5fdA19dE9b63 |
⚠️ Same addresses on all EVM chains — Ethereum, Base, Arbitrum, Polygon, Optimism, Linea, Avalanche, etc. The Identity Registry does NOT implement totalSupply(), so totalAgents() uses a binary search via ownerOf().
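The binary-search trick can be sketched as below. This sketch is synchronous and uses a plain predicate where the real reader awaits ownerOf() eth_calls, and it assumes agent IDs are minted contiguously starting at 1:

```javascript
// Sketch: find the highest agentId for which ownerOf() succeeds.
// `exists` stands in for an ownerOf() call that reverts on unminted IDs.
function highestMintedId(exists, maxId = 1_000_000) {
  let lo = 0, hi = maxId;               // invariant: lo is minted (or 0)
  while (lo < hi) {
    const mid = Math.ceil((lo + hi) / 2); // ceil avoids an infinite loop
    if (exists(mid)) lo = mid;            // mid minted → answer ≥ mid
    else hi = mid - 1;                    // mid unminted → answer < mid
  }
  return lo; // highest minted ID = total, since IDs are contiguous from 1
}
```

Each probe halves the range, so a registry of up to a million agents resolves in about 20 ownerOf() calls.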
The x402 client and agent registry work together for agent-to-agent payments:
import { discoverAgent } from './scripts/agent-registry.mjs';
import { makePayableRequest } from './scripts/x402-client.mjs';
// 1. Discover an agent and find its x402-enabled endpoint
const agent = await discoverAgent(42);
const apiEndpoint = agent.services.find(s => s.name === "A2A")?.endpoint;
// 2. Make a paid request — x402 handling is automatic
if (agent.x402Support && apiEndpoint) {
const result = await makePayableRequest(apiEndpoint, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ task: "Analyze this data..." }),
maxAmount: 500000n, // $0.50 USDC
});
console.log(result.body); // Agent's response
}
| Action | Command |
|--------|---------|
| Install Everclaw | bash skills/everclaw/scripts/install-everclaw.sh |
| Check for updates | bash skills/everclaw/scripts/install-everclaw.sh --check |
| Update (git pull) | cd skills/everclaw && git pull |
| Install router | bash skills/everclaw/scripts/install.sh |
| Install proxy + guardian | bash skills/everclaw/scripts/install-proxy.sh |
| Start router | bash skills/everclaw/scripts/start.sh |
| Stop router | bash skills/everclaw/scripts/stop.sh |
| Swap ETH→MOR | bash skills/everclaw/scripts/swap.sh eth 0.01 |
| Swap USDC→MOR | bash skills/everclaw/scripts/swap.sh usdc 50 |
| Open session | bash skills/everclaw/scripts/session.sh open <model> [duration] |
| Close session | bash skills/everclaw/scripts/session.sh close <session_id> |
| List sessions | bash skills/everclaw/scripts/session.sh list |
| Send prompt | bash skills/everclaw/scripts/chat.sh <model> "prompt" |
| Check balance | bash skills/everclaw/scripts/balance.sh |
| Proxy health | curl http://127.0.0.1:8083/health |
| Guardian test | bash scripts/gateway-guardian.sh --verbose |
| Guardian logs | tail -f ~/.openclaw/logs/guardian.log |
| Venice key health | bash skills/everclaw/scripts/venice-key-monitor.sh --status |
| Venice key balances | bash skills/everclaw/scripts/venice-key-monitor.sh --verbose |
| Venice 402 watchdog | bash skills/everclaw/scripts/venice-402-watchdog.sh --verbose |
| Archive sessions | bash skills/everclaw/scripts/session-archive.sh |
| Check session size | bash skills/everclaw/scripts/session-archive.sh --check |
| Force archive | bash skills/everclaw/scripts/session-archive.sh --force |
| x402 request | node scripts/x402-client.mjs GET <url> |
| x402 dry-run | node scripts/x402-client.mjs --dry-run GET <url> |
| x402 budget | node scripts/x402-client.mjs --budget |
| Lookup agent | node scripts/agent-registry.mjs lookup <id> |
| Agent reputation | node scripts/agent-registry.mjs reputation <id> |
| Discover agent | node scripts/agent-registry.mjs discover <id> |
| List agents | node scripts/agent-registry.mjs list <start> [count] |
| Total agents | node scripts/agent-registry.mjs total |
| Scan a skill | node security/skillguard/src/cli.js scan <path> |
| Batch scan | node security/skillguard/src/cli.js batch <dir> |
| Security audit | bash security/clawdstrike/scripts/collect_verified.sh |
| Detect injection | python3 security/prompt-guard/scripts/detect.py "text" |
Everclaw agents handle MOR tokens and private keys — making them high-value targets. v0.3 bundles four security skills to defend against supply chain attacks, prompt injection, credential theft, and configuration exposure.
Scans AgentSkill packages for malicious patterns before you install them. Detects credential theft, code injection, prompt manipulation, data exfiltration, and evasion techniques.
# Scan a skill directory
node security/skillguard/src/cli.js scan <path>
# Batch scan all installed skills
node security/skillguard/src/cli.js batch <directory>
# Scan a ClawHub skill by slug
node security/skillguard/src/cli.js scan-hub <slug>
Score interpretation:
When to use: Before installing any skill from ClawHub or untrusted sources. Run batch scans periodically to audit all installed skills.
Full docs: security/skillguard/SKILL.md
Security audit and threat model for OpenClaw gateway hosts. Verifies configuration, network exposure, installed skills/plugins, and filesystem hygiene. Produces an OK/VULNERABLE report with evidence and remediation steps.
# Run a full audit
cd security/clawdstrike && \
OPENCLAW_WORKSPACE_DIR=$HOME/.openclaw/workspace \
bash scripts/collect_verified.sh
What it checks:
When to use: After initial setup, after installing new skills, and periodically (weekly recommended).
Full docs: security/clawdstrike/SKILL.md
Advanced prompt injection defense system with multi-language detection (EN/KO/JA/ZH), severity scoring, automatic logging, and configurable security policies. Connects to the HiveFence distributed threat intelligence network.
# Analyze a message for injection attempts
python3 security/prompt-guard/scripts/detect.py "suspicious message here"
# Run audit on prompt injection logs
python3 security/prompt-guard/scripts/audit.py
# Analyze historical logs
python3 security/prompt-guard/scripts/analyze_log.py
Detection categories:
When to use: In group chats, when processing untrusted input, when agents interact with external data sources.
Full docs: security/prompt-guard/SKILL.md
Secure key management for AI agents handling private keys, API secrets, and wallet credentials. Covers secure storage patterns, session keys, leak prevention, prompt injection defense specific to financial operations, and MetaMask Delegation Framework (EIP-7710) integration.
Key principles:
- op run for runtime injection

Reference docs:
- security/bagman/references/secure-storage.md — Storage patterns
- security/bagman/references/session-keys.md — Session key architecture
- security/bagman/references/delegation-framework.md — EIP-7710 integration
- security/bagman/references/leak-prevention.md — Leak detection rules
- security/bagman/references/prompt-injection-defense.md — Financial-specific injection defense

When to use: Whenever an agent handles private keys, wallet credentials, or API secrets — which Everclaw agents always do.
Full docs: security/bagman/SKILL.md
For Everclaw agents handling MOR tokens:
A lightweight, local prompt classifier that routes requests to the cheapest capable model. Runs in <1ms with zero external API calls.
| Tier | Primary Model | Fallback | Use Case |
|------|--------------|----------|----------|
| LIGHT | morpheus/glm-4.7-flash | morpheus/kimi-k2.5 | Cron jobs, heartbeats, simple Q&A, status checks |
| STANDARD | morpheus/kimi-k2.5 | venice/kimi-k2-5 | Research, drafting, summaries, most sub-agent tasks |
| HEAVY | venice/claude-opus-4-6 | venice/claude-opus-45 | Complex reasoning, architecture, formal proofs, strategy |
All LIGHT and STANDARD tier models run through Morpheus (inference you own via staked MOR). Only HEAVY tier uses Venice (premium).
The router scores prompts across 13 weighted dimensions:
| Dimension | Weight | What It Detects |
|-----------|--------|----------------|
| reasoningMarkers | 0.20 | "prove", "theorem", "step by step", "chain of thought" |
| codePresence | 0.14 | function, class, import, backticks, "refactor" |
| synthesis | 0.11 | "summarize", "compare", "draft", "analyze", "review" |
| technicalTerms | 0.10 | "algorithm", "architecture", "smart contract", "consensus" |
| multiStepPatterns | 0.10 | "first...then", "step 1", numbered lists |
| simpleIndicators | 0.08 | "what is", "hello", "weather" (negative score → pushes toward LIGHT) |
| agenticTask | 0.06 | "edit", "deploy", "install", "debug", "fix" |
| creativeMarkers | 0.04 | "story", "poem", "brainstorm" |
| questionComplexity | 0.04 | Multiple question marks |
| tokenCount | 0.04 | Short prompts skew LIGHT, long prompts skew HEAVY |
| constraintCount | 0.04 | "at most", "at least", "maximum", "budget" |
| domainSpecificity | 0.04 | "quantum", "zero-knowledge", "genomics" |
| outputFormat | 0.03 | "json", "yaml", "table", "csv" |
Special override: 2+ reasoning keywords in the user prompt → force HEAVY at 88%+ confidence. This prevents accidental cheap routing of genuinely hard problems.
Ambiguous prompts (low confidence) default to STANDARD — the safe middle ground.
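A toy version of the weighted scoring, using three of the thirteen dimensions; the weights mirror the table, but the thresholds and keyword sets here are illustrative, not the router's actual constants:

```javascript
// Sketch: sum weighted signals, then bucket the score into a tier.
// simpleIndicators score negatively and pull prompts toward LIGHT.
function routeTier(prompt) {
  const p = prompt.toLowerCase();
  let score = 0;
  if (/\b(prove|theorem|step by step)\b/.test(p)) score += 0.20; // reasoningMarkers
  if (/\b(summarize|compare|draft|analyze)\b/.test(p)) score += 0.11; // synthesis
  if (/\b(what is|hello|weather)\b/.test(p)) score -= 0.08; // simpleIndicators
  if (score >= 0.15) return "HEAVY";
  if (score <= -0.05) return "LIGHT";
  return "STANDARD"; // ambiguous middle ground
}

routeTier("What is the weather?");           // "LIGHT"
routeTier("Summarize the meeting notes");    // "STANDARD"
routeTier("Prove the theorem step by step"); // "HEAVY"
```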
# Test routing for a prompt
node scripts/router.mjs "What is 2+2?"
# → LIGHT (morpheus/glm-4.7-flash)
node scripts/router.mjs "Summarize the meeting notes and draft a follow-up"
# → STANDARD (morpheus/kimi-k2.5)
node scripts/router.mjs "Design a distributed consensus algorithm and prove its correctness"
# → HEAVY (venice/claude-opus-4-6)
# JSON output for programmatic use
node scripts/router.mjs --json "Build a React component"
# Pipe from stdin
echo '{"prompt":"hello","system":"You are helpful"}' | node scripts/router.mjs --stdin
import { route, classify } from './scripts/router.mjs';
const decision = route("Check the weather in Austin");
// {
// tier: "LIGHT",
// model: "morpheus/glm-4.7-flash",
// fallback: "morpheus/kimi-k2.5",
// confidence: 0.87,
// score: -0.10,
// signals: ["short (7 tok)", "simple (weather)"],
// reasoning: "score=-0.100 → LIGHT"
// }
Set the model field on cron job payloads to route to cheaper models:
{
"payload": {
"kind": "agentTurn",
"model": "morpheus/kimi-k2.5", // STANDARD tier — owned via Morpheus
"message": "Compile a morning briefing...",
"timeoutSeconds": 300
}
}
For truly simple cron jobs (health checks, pings, status queries):
{
"payload": {
"kind": "agentTurn",
"model": "morpheus/glm-4.7-flash", // LIGHT tier — fastest, owned
"message": "Check proxy health and report any issues",
"timeoutSeconds": 60
}
}
// Simple research task → STANDARD
sessions_spawn({ task: "Search for X news", model: "morpheus/kimi-k2.5" });
// Quick lookup → LIGHT
sessions_spawn({ task: "What's the weather?", model: "morpheus/glm-4.7-flash" });
// Complex analysis → let it use the default (HEAVY / Claude 4.6)
sessions_spawn({ task: "Design the x402 payment integration..." });
With the router in place, only complex reasoning tasks in the main session use premium models. All background work (cron jobs, sub-agents, heartbeats) runs on Morpheus inference you own:
| Before | After |
|--------|-------|
| All cron jobs → Claude 4.6 (premium) | Cron jobs → Kimi K2.5 / GLM Flash (owned) |
| All sub-agents → Claude 4.6 (premium) | Sub-agents → Kimi K2.5 (owned) unless complex |
| Main session → Claude 4.6 | Main session → Claude 4.6 (unchanged) |
The Morpheus API Gateway (api.mor.org) provides community-powered, OpenAI-compatible inference — no node, no staking, no wallet required. Everclaw v0.8 includes a bootstrap script that configures this as an OpenClaw provider, giving new users instant access to AI from the first launch.
New OpenClaw users face a cold-start problem: they need an API key (Claude, OpenAI, etc.) before their agent can do anything. Everclaw v0.8 solves this by bundling a community API key for the Morpheus inference marketplace, which is currently in open beta.
The bootstrap flow:
- Run node scripts/bootstrap-gateway.mjs — the agent gets inference immediately
- Later, get your own API key at app.mor.org

# One command — tests the gateway and patches OpenClaw config
node skills/everclaw/scripts/bootstrap-gateway.mjs
# Or with your own API key from app.mor.org
node skills/everclaw/scripts/bootstrap-gateway.mjs --key sk-YOUR_KEY_HERE
# Test the gateway connection
node skills/everclaw/scripts/bootstrap-gateway.mjs --test
# Check current gateway status
node skills/everclaw/scripts/bootstrap-gateway.mjs --status
The bootstrap script:
- Patches openclaw.json to add mor-gateway as a new provider
- Adds mor-gateway/kimi-k2.5 to the fallback chain

| Setting | Value |
|---------|-------|
| Base URL | https://api.mor.org/api/v1 |
| API format | OpenAI-compatible |
| Auth | Bearer token (sk-...) |
| Open beta | Until March 1, 2026 |
| Models | 34 (LLMs, TTS, STT, embeddings) |
| Provider name | mor-gateway |
The gateway exposes all models on the Morpheus inference marketplace:
| Model | Type | Notes |
|-------|------|-------|
| kimi-k2.5 | LLM | Primary bootstrap model — strong coding + reasoning |
| glm-4.7-flash | LLM | Fast, good for simple tasks |
| llama-3.3-70b | LLM | General purpose |
| qwen3-235b | LLM | Large, strong reasoning |
| gpt-oss-120b | LLM | OpenAI-compatible OSS model |
| hermes-4-14b | LLM | Lightweight |
| tts-kokoro | TTS | Text-to-speech |
| whisper-v3-large-turbo | STT | Speech-to-text |
| text-embedding-bge-m3 | Embedding | Text embeddings |
All models also have :web variants with web search capability.
{
"models": {
"providers": {
"mor-gateway": {
"baseUrl": "https://api.mor.org/api/v1",
"apiKey": "sk-...",
"api": "openai-completions",
"models": [
{ "id": "kimi-k2.5", "name": "Kimi K2.5 (via Morpheus Gateway)", "reasoning": false },
{ "id": "glm-4.7-flash", "name": "GLM 4.7 Flash (via Morpheus Gateway)", "reasoning": false },
{ "id": "llama-3.3-70b", "name": "Llama 3.3 70B (via Morpheus Gateway)", "reasoning": false }
]
}
}
}
}
Important: All gateway models must have "reasoning": false — the upstream litellm rejects the reasoning_effort parameter.
The bootstrap script includes a community API key (base64-obfuscated) for the SmartAgentProtocol account. This provides open access during the beta period.
Getting your own key (recommended):
node scripts/bootstrap-gateway.mjs --key YOUR_KEY

| Feature | API Gateway (v0.8) | Local Proxy (v0.2) | P2P Node (v0.1) |
|---------|-------------------|-------------------|-----------------|
| Setup | One command | Install proxy + config | Full node install |
| Cost | Open (beta) | Own (MOR staking) | Own (MOR staking) |
| Requires MOR | No | Yes | Yes |
| Requires wallet | No | Yes | Yes |
| Decentralized | Gateway → providers | Direct P2P | Direct P2P |
| Best for | New users, quick start | Daily use, reliability | Full sovereignty |
The recommended progression: Gateway → Local Proxy → P2P Node as users gain confidence with the Morpheus ecosystem.
With the gateway added, the recommended fallback chain becomes:
venice/claude-opus-4-6 # Primary (premium)
→ venice/claude-opus-45 # Venice fallback
→ venice/kimi-k2-5 # Venice open tier
→ morpheus/kimi-k2.5 # Local proxy (MOR staking)
→ mor-gateway/kimi-k2.5 # API Gateway (open beta)
For new users without Venice or a local proxy, the gateway is the first and only provider — making it the critical bootstrap path.
Your agent needs your Mac to stay awake. macOS defaults to sleep after inactivity, which interrupts cron jobs, heartbeats, and long-running tasks. Everclaw includes an always-on setup script that configures power management for continuous operation.
# Configure macOS to never sleep (requires sudo)
sudo bash skills/everclaw/scripts/always-on.sh
# Restore default power settings
sudo bash skills/everclaw/scripts/always-on.sh --restore
The script configures macOS power management for 24/7 operation:
| Setting | Value | Purpose |
|---------|-------|---------|
| disablesleep | 1 | System never sleeps |
| standby | 0 | No hibernation |
| autopoweroff | 0 | No deep sleep |
| powernap | 1 | Network activity while display off |
| womp | 1 | Wake on LAN enabled (remote access) |
| autorestart | 1 | Auto-restart after power failure |
| tcpkeepalive | 1 | Keep network connections alive |
| disksleep | 0 | Never spin down disks |
The script also installs a LaunchAgent (com.everclaw.alwayson) that runs caffeinate -i -d -s in the background, providing an additional layer of protection against system sleep:
- -i — Prevent system from idling to sleep
- -d — Prevent display from sleeping
- -s — Prevent system from sleeping when on AC power

# Check current power settings
pmset -g
# Should show:
# SleepDisabled 1
# standby 0
# autorestart 1
Without always-on configuration:
With always-on:
A Mac Mini M4 at idle with sleep disabled draws ~6-10W. That's roughly:
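A rough back-of-envelope for that range; the $0.15/kWh electricity rate is an assumption for illustration:

```javascript
// Idle power → monthly energy → monthly cost at an assumed rate.
const watts = 8;                               // midpoint of the 6-10W range
const kwhPerMonth = (watts * 24 * 30) / 1000;  // 5.76 kWh
const dollars = kwhPerMonth * 0.15;            // ≈ $0.86/month
```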
Linux:
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
Headless Raspberry Pi:
No sleep by default. Ensure systemd services are enabled for OpenClaw and Morpheus.
Mac still sleeps:
- Check pmset -g assertions for processes preventing sleep
- Verify the LaunchAgent is loaded: launchctl list | grep everclaw

Display still sleeps: This is fine — the system stays awake even with the display off thanks to Power Nap. To disable display sleep entirely:
sudo pmset -a displaysleep 0
A structured task planning system that proposes prioritized work plans at the start of each 8-hour shift. Nothing executes without user approval.
| Shift | Default Time | Window | Character | |-------|-------------|--------|-----------| | ☀️ Morning | 6:00 AM | 6 AM – 2 PM | Ramp-up: meetings, comms, decisions | | 🌤️ Afternoon | 2:00 PM | 2 PM – 10 PM | Deep work: coding, writing, building | | 🌙 Night | 10:00 PM | 10 PM – 6 AM | Autonomous: research, maintenance |
# Create three cron jobs (adjust times to your timezone)
openclaw cron add --name three-shifts-morning --schedule "0 6 * * *" \
--message "Generate morning shift plan. Read the three-shifts skill, gather context, and propose tasks for the 6 AM – 2 PM window."
openclaw cron add --name three-shifts-afternoon --schedule "0 14 * * *" \
--message "Generate afternoon shift plan. Read the three-shifts skill, gather context, and propose tasks for the 2 PM – 10 PM window."
openclaw cron add --name three-shifts-night --schedule "0 22 * * *" \
--message "Generate night shift plan. Read the three-shifts skill, gather context, and propose tasks for the 10 PM – 6 AM window."
See three-shifts/SKILL.md for full documentation including approval workflows, configuration options, weekend behavior, and quiet hours.
openclaw agent probes. Eliminates 71K workspace prompt injection into health checks, prevents Signal spam from failed probes, and uses glm-4.7-flash for fast lightweight probing.
references/acquiring-mor.md — How to get MOR tokens (exchanges, bridges, swaps)
references/models.md — Available models and their blockchain IDs
references/api.md — Complete proxy-router API reference
references/economics.md — How MOR staking economics work
references/troubleshooting.md — Common errors and solutions
security/skillguard/SKILL.md — SkillGuard full documentation
security/clawdstrike/SKILL.md — ClawdStrike full documentation
security/prompt-guard/SKILL.md — PromptGuard full documentation
security/bagman/SKILL.md — Bagman full documentation
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
ready
Auth
mcp, a2a, api_key
Streaming
Yes
Data region
global
Protocol support
Requires: a2a, mcp, openclew, lang:typescript, streaming
Forbidden: none
Guardrails
Operational confidence: medium
curl -s "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/snapshot"
curl -s "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/contract"
curl -s "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/trust"
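The contract further down publishes a retryPolicy (up to 3 attempts, 500/1500/3500 ms backoff, retrying only on HTTP 429, HTTP 503, or a network timeout). A minimal client-side sketch honoring that policy around the curl calls above might look like this; the fetch_with_retry name and the /tmp/resp.json scratch path are illustrative choices, not part of the published API:

```shell
# Hypothetical wrapper mirroring the published retryPolicy:
# 3 attempts, 500/1500/3500 ms backoff, retry only on 429/503/timeout.
fetch_with_retry() {
  url=$1
  for delay in 0.5 1.5 3.5; do
    # --max-time bounds each attempt; a curl failure (e.g. timeout) maps to code 000
    code=$(curl -s -o /tmp/resp.json -w '%{http_code}' --max-time 5 "$url") || code=000
    case "$code" in
      2*) cat /tmp/resp.json; return 0 ;;   # success: print the body
      429|503|000) sleep "$delay" ;;        # retryable: back off, try again
      *) return 1 ;;                        # non-retryable: fail fast
    esac
  done
  return 1
}

# Usage:
# fetch_with_retry "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/snapshot"
```

Classifying 4xx errors other than 429 as non-retryable matches the contract's retryableConditions list, which names only HTTP_429, HTTP_503, and NETWORK_TIMEOUT.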
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
62
Secure agent-to-agent messaging — handshake, send, poll, and stream messages between AI agents via the a2achat.top API.
Traction
No public download signal
Freshness
Updated 2d ago
Rank
62
Yellow Pages for AI agents — discover, register, and search for agents by skill, language, location, and cost model via the yellowagents.top API.
Traction
No public download signal
Freshness
Updated 2d ago
Rank
62
Launch your own Solana token for free. Keep 100% of trading fees forever. Non-custodial — your keys, your tokens. No SOL needed. Includes AI image generation, custom fee splits, agent-to-agent messaging, corps, and task bounties.
Traction
No public download signal
Freshness
Updated 2d ago
Rank
62
Documentation-only WhatsApp API reference — zero executables, zero install scripts, zero local file writes. All actions require explicit user invocation. Provides 90+ API endpoints for sending messages, capturing leads, running campaigns, scheduling reports, tracking campaign analytics, and managing clients. MOLTFLOW_API_KEY is the only credential required — generate a scoped key from the MoltFlow dashboard (Settings > API Keys). AI features (voice transcription, RAG, style profiles) use the user's own LLM API key configured via the MoltFlow web dashboard, never passed through this skill.
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "ready",
"authModes": [
"mcp",
"a2a",
"api_key"
],
"requires": [
"a2a",
"mcp",
"openclew",
"lang:typescript",
"streaming"
],
"forbidden": [],
"supportsMcp": true,
"supportsA2a": true,
"supportsStreaming": true,
"inputSchemaRef": "https://github.com/profbernardoj/everclaw-community-branches#input",
"outputSchemaRef": "https://github.com/profbernardoj/everclaw-community-branches#output",
"dataRegion": "global",
"contractUpdatedAt": "2026-02-24T19:44:14.404Z",
"sourceUpdatedAt": "2026-02-24T19:44:14.404Z",
"freshnessSeconds": 4420633
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"A2A",
"MCP",
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-16T23:41:27.559Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "A2A",
"type": "protocol",
"support": "supported",
"confidenceSource": "contract",
"notes": "Confirmed by capability contract"
},
{
"key": "MCP",
"type": "protocol",
"support": "supported",
"confidenceSource": "contract",
"notes": "Confirmed by capability contract"
},
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "do",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "stay",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "the",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "always",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "a",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "all",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "2",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "then",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:A2A|supported|contract protocol:MCP|supported|contract protocol:OPENCLEW|unknown|profile capability:do|supported|profile capability:stay|supported|profile capability:the|supported|profile capability:always|supported|profile capability:a|supported|profile capability:all|supported|profile capability:2|supported|profile capability:then|supported|profile"
}
Facts JSON
[
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP, A2A, OpenClaw",
"href": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:14.404Z",
"isPublic": true
},
{
"factKey": "auth_modes",
"category": "compatibility",
"label": "Auth modes",
"value": "mcp, a2a, api_key",
"href": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:14.404Z",
"isPublic": true
},
{
"factKey": "schema_refs",
"category": "artifact",
"label": "Machine-readable schemas",
"value": "OpenAPI or schema references published",
"href": "https://github.com/profbernardoj/everclaw-community-branches#input",
"sourceUrl": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:14.404Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Everclaw",
"href": "https://everclaw.com",
"sourceUrl": "https://everclaw.com",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:43:14.176Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "87 GitHub stars",
"href": "https://github.com/profbernardoj/everclaw-community-branches",
"sourceUrl": "https://github.com/profbernardoj/everclaw-community-branches",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:43:14.176Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/profbernardoj-everclaw-community-branches/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[]