Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Complete guide for deploying OpenClaw (Telegram AI Bot) on a China mainland server with Docker, GFW bypass (Xray proxy + Chromium wrapper), browser automation (CDP), memory system (local embedding model), skills sync, and GitHub-based memory synchronization between Telegram Bot and Claude Code. Covers 21 documented pitfalls with root cause analysis. Use when deploying OpenClaw, troubleshooting bot issues, or setting up AI bot infrastructure behind GFW. Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
openclaw-deployment is best for GitHub, embedding, and vision workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows requiring deterministic execution, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB_OPENCLEW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Kkkano
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/kkkano/openclaw-deployment-skill.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Kkkano
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
text
User (Telegram) → Telegram API → Cloudflare Pages (reverse proxy) → China Server (OpenClaw Docker)
                                                                           │
                                                               ┌───────────┼───────────┐
                                                               │           │           │
                                                            LLM API     Browser      Memory
                                                            (proxy)    (Chrome+     (SQLite+
                                                                         Xray)      embedding)
bash
curl -fsSL https://get.docker.com | sh
bash
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER  # Logout and login again
bash
cd /home/ubuntu
git clone https://github.com/openclaw/openclaw.git
cd openclaw
yaml
services:
openclaw-gateway:
# CRITICAL: increase shared memory for Chrome
shm_size: 512m
ports:
- "18789:18789"
volumes:
- /home/ubuntu/.openclaw:/home/node/.openclaw
    # DO NOT set network_mode: host (breaks container DNS)
bash
# Find gateway IP (typically 172.19.0.1 or 172.17.0.1)
docker network inspect openclaw_default | grep Gateway
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB_OPENCLEW
Editorial quality
ready
Complete guide for deploying OpenClaw (Telegram AI Bot) on a China mainland server with Docker, GFW bypass (Xray proxy + Chromium wrapper), browser automation (CDP), memory system (local embedding model), skills sync, and GitHub-based memory synchronization between Telegram Bot and Claude Code. Covers 21 documented pitfalls with root cause analysis. Use when deploying OpenClaw, troubleshooting bot issues, or setting up AI bot infrastructure behind GFW.
Deploy OpenClaw as a Telegram AI Bot on a China mainland cloud server (tested on Tencent Cloud Ubuntu 22.04). This skill covers the complete deployment pipeline including GFW bypass, browser automation, memory system, and cross-device memory synchronization.
User (Telegram) → Telegram API → Cloudflare Pages (reverse proxy) → China Server (OpenClaw Docker)
                                                                           │
                                                               ┌───────────┼───────────┐
                                                               │           │           │
                                                            LLM API     Browser      Memory
                                                            (proxy)    (Chrome+     (SQLite+
                                                                         Xray)      embedding)
Requires sudo for host-side access. Edits to openclaw.json take effect in seconds without a restart.
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Logout and login again
cd /home/ubuntu
git clone https://github.com/openclaw/openclaw.git
cd openclaw
Key modifications to docker-compose.yml:
services:
openclaw-gateway:
# CRITICAL: increase shared memory for Chrome
shm_size: 512m
ports:
- "18789:18789"
volumes:
- /home/ubuntu/.openclaw:/home/node/.openclaw
# DO NOT set network_mode: host (breaks container DNS)
Pitfall #11: Chrome crashes with shm_size at the default 64MB. Set it to 512m.
Container accesses host services via Docker bridge gateway IP:
# Find gateway IP (typically 172.19.0.1 or 172.17.0.1)
docker network inspect openclaw_default | grep Gateway
This IP is used for:
socks5://172.19.0.1:10808
GFW blocks api.telegram.org. Use Cloudflare Pages as a reverse proxy.
Create a Cloudflare Pages project with functions/[[path]].js:
export async function onRequest(context) {
const url = new URL(context.request.url);
url.host = 'api.telegram.org';
return fetch(new Request(url, context.request));
}
Deploy to Cloudflare Pages. The resulting URL (e.g., https://your-project.pages.dev) replaces api.telegram.org.
In openclaw.json:
{
"telegram": {
"token": "YOUR_BOT_TOKEN",
"apiBaseUrl": "https://your-project.pages.dev"
}
}
Pitfall #1: Without reverse proxy, Bot cannot connect to Telegram at all.
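The Cloudflare Pages function above is just a host swap that preserves path and query. A minimal Python sketch of the same rewrite logic (the function name and test URL are illustrative, not part of OpenClaw):

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_to_telegram(url: str) -> str:
    """Swap the proxy host for api.telegram.org, keeping path and query intact."""
    parts = urlsplit(url)
    return urlunsplit(("https", "api.telegram.org", parts.path, parts.query, parts.fragment))

# A request hitting the Pages domain is forwarded upstream unchanged:
print(rewrite_to_telegram("https://your-project.pages.dev/bot123:ABC/getMe"))
# → https://api.telegram.org/bot123:ABC/getMe
```

Because only the host changes, the bot token and method path pass through untouched, which is why pointing `apiBaseUrl` at the Pages URL is sufficient.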
China mainland cannot directly access OpenAI/Anthropic APIs. Use an API proxy service.
{
"models": [
{
"id": "your-model-id",
"provider": "custom",
"apiBaseUrl": "https://your-api-proxy.com/v1",
"apiKey": "your-key"
}
]
}
Pitfall #2: Direct API calls timeout. Always use a proxy service.
OpenClaw skills are stored at /home/ubuntu/.openclaw/workspace/skills/. Manual copying is tedious for 100+ skills with nested directories.
#!/usr/bin/env python3
"""Recursively sync all SKILL.md files to OpenClaw server via SSH."""
import paramiko
import os
from pathlib import Path
LOCAL_SKILLS = os.path.expanduser("~/.claude/skills")
REMOTE_SKILLS = "/home/ubuntu/.openclaw/workspace/skills"
HOST = "your-server-ip"
USER = "ubuntu"
def sync_skills():
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username=USER)
sftp = ssh.open_sftp()
count = 0
for root, dirs, files in os.walk(LOCAL_SKILLS):
for f in files:
if f == "SKILL.md":
local_path = os.path.join(root, f)
# Flatten: use parent directory name as skill name
skill_name = os.path.basename(root)
remote_dir = f"{REMOTE_SKILLS}/{skill_name}"
remote_path = f"{remote_dir}/SKILL.md"
try:
sftp.stat(remote_dir)
except FileNotFoundError:
sftp.mkdir(remote_dir)
sftp.put(local_path, remote_path)
count += 1
print(f" [{count}] {skill_name}/SKILL.md")
sftp.close()
ssh.close()
print(f"\nSynced {count} skills.")
if __name__ == "__main__":
sync_skills()
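The flattening rule in the sync script (the parent directory name becomes the skill name, regardless of nesting depth) can be isolated and checked on its own; `skill_dest` is a hypothetical helper, not part of the script above:

```python
import os

REMOTE_SKILLS = "/home/ubuntu/.openclaw/workspace/skills"

def skill_dest(local_skill_md: str) -> str:
    """Map a nested local SKILL.md path to its flattened remote path."""
    skill_name = os.path.basename(os.path.dirname(local_skill_md))
    return f"{REMOTE_SKILLS}/{skill_name}/SKILL.md"

print(skill_dest("/home/me/.claude/skills/web/scraping/my-skill/SKILL.md"))
# → /home/ubuntu/.openclaw/workspace/skills/my-skill/SKILL.md
```

Note the flattening assumes skill directory names are unique across the tree; two nested skills with the same folder name would overwrite each other.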
Pitfall #4: Default maxSkillsPromptChars is 30K, truncating to ~52 skills.
{
"skills": {
"maxSkillsPromptChars": 120000
}
}
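The truncation is a simple character budget: skills are included in order until the limit is hit. A sketch of that arithmetic, assuming roughly 575 characters per skill entry (an illustrative figure, not measured from OpenClaw):

```python
def skills_that_fit(skill_sizes, budget_chars):
    """Count how many skill descriptions fit in the prompt budget, in order."""
    used = count = 0
    for size in skill_sizes:
        if used + size > budget_chars:
            break
        used += size
        count += 1
    return count

# 161 skills at ~575 chars each (assumed average entry size):
sizes = [575] * 161
print(skills_that_fit(sizes, 30_000))   # ~52 skills at the 30K default
print(skills_that_fit(sizes, 120_000))  # all 161 fit after raising the limit
```

This is why the symptom shows up as "only ~52 skills loaded" rather than an error: the tail of the list is silently dropped.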
Pitfall #9: Telegram limits to 100 native commands. Disable registration:
{
"telegram": {
"commands": {
"native": false
}
}
}
Create /home/ubuntu/.openclaw/workspace/SOUL.md with the bot's personality definition.
OpenClaw injects this into every conversation as system prompt. Without it, the bot responds with generic AI personality.
Pitfall: Without SOUL.md, the bot has no personality and responds in a generic, utilitarian tone.
Create /home/ubuntu/.openclaw/workspace/USER.md with user information the bot should know.
OpenClaw's memory uses vector embeddings for semantic search. The auto provider tries remote embedding APIs, but most China API proxies don't support embedding endpoints.
Pitfall #8: Memory search returns no results because embedding falls back to FTS (full-text search only).
Pitfall #13: API proxy services typically forward only chat completions, not embeddings.
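Why the FTS fallback hurts: vector search matches by embedding closeness even when no keyword overlaps, which keyword-only FTS cannot do. A toy illustration (4-dim vectors stand in for real 768-dim embeddings; the values are made up, and this is not OpenClaw's search code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy "embeddings": the query shares no keywords with memory1,
# but their vectors are close, so semantic search still ranks it first.
query   = [0.9, 0.1, 0.0, 0.1]   # "how do I restart the bot"
memory1 = [0.8, 0.2, 0.1, 0.1]   # "relaunching the gateway container"
memory2 = [0.0, 0.1, 0.9, 0.2]   # "grocery list for saturday"
print(cosine(query, memory1) > cosine(query, memory2))  # → True
```

With only FTS, the query above would match neither memory, which is exactly the "memory search returns no results" symptom of Pitfall #8.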
Download a GGUF embedding model via China-accessible mirror:
# HuggingFace is blocked by GFW. Use hf-mirror.com instead.
cd /home/ubuntu/.openclaw/models/
wget https://hf-mirror.com/nicepkg/embeddinggemma/resolve/main/embeddinggemma-300m-qat-Q8_0.gguf
Pitfall #16: huggingface.co is blocked by GFW. Use hf-mirror.com.
{
"memory": {
"backend": "builtin"
}
}
OpenClaw auto-detects the local model in the models/ directory and uses node-llama-cpp for inference. The model generates 768-dimensional vectors stored in SQLite with vec0 extension.
memory/default.sqlite
├── chunks # Memory fragments (text + 768-dim vector embedding)
├── chunks_fts # Full-text search index (FTS5)
├── chunks_vec # Vector search index (vec0)
├── files # Indexed file list
├── embedding_cache # Embedding cache
└── meta # Model config metadata
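The plain tables in that layout can be sketched with Python's stdlib sqlite3. The FTS5 (chunks_fts) and vec0 (chunks_vec) virtual tables need their respective extensions, so they are omitted here, and the column names are assumptions for illustration, not OpenClaw's actual schema:

```python
import sqlite3

# In-memory stand-in for memory/default.sqlite (plain tables only;
# column names are illustrative assumptions).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE chunks (id INTEGER PRIMARY KEY, text TEXT, embedding BLOB);
CREATE TABLE files (path TEXT PRIMARY KEY, indexed_at TEXT);
CREATE TABLE embedding_cache (text_hash TEXT PRIMARY KEY, vector BLOB);
CREATE TABLE meta (key TEXT PRIMARY KEY, value TEXT);
""")
tables = [r[0] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # → ['chunks', 'embedding_cache', 'files', 'meta']
```

Querying sqlite_master this way is the programmatic equivalent of the `.tables` check used later in the troubleshooting tree.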
Enable image/audio/video recognition:
{
"telegram": {
"multimedia": {
"enabled": true,
"image": true,
"audio": true,
"video": true
}
}
}
Requires the LLM model to support vision (e.g., GPT-4o, Claude with vision).
This is the most complex section. Browser setup in a headless Docker container behind GFW involves multiple interacting systems.
OpenClaw uses Playwright's bundled Chromium. The binary is at:
/root/.cache/ms-playwright/chromium-{version}/chrome-linux64/chrome
Configure in openclaw.json:
{
"browser": {
"executablePath": "/usr/bin/chromium",
"headless": true,
"noSandbox": true
}
}
Pitfall #6: Without executablePath, OpenClaw tries xdg-open, which fails in Docker.
Pitfall #11: Chrome crashes immediately. Default Docker /dev/shm is 64MB.
In docker-compose.yml:
shm_size: 512m
Pitfall #12: docker compose up -d recreates the container, losing symlinks and running processes.
Solution: post-start.sh script that runs after every container start.
OpenClaw derives Chrome DevTools Protocol (CDP) ports from the gateway port:
Gateway Port (18789)
→ Control Port: 18789 + 2 = 18791
→ CDP Range Start: 18789 + 2 + 9 = 18800 ← Chrome listens here
→ Canvas Port: 18789 + 4 = 18793 (NOT Chrome!)
Pitfall #15: Don't confuse port 18793 (Canvas) with 18800 (Chrome CDP).
Source: /app/src/config/port-defaults.ts → derivePort(base, offset, fallback)
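The offsets above can be reproduced with a small helper mirroring derivePort. This Python version is a sketch of the TypeScript source's arithmetic, not a copy of it:

```python
def derive_port(base: int, offset: int) -> int:
    """Derive a service port from the gateway port, as port-defaults.ts does."""
    return base + offset

GATEWAY = 18789
control = derive_port(GATEWAY, 2)      # 18791
cdp     = derive_port(GATEWAY, 2 + 9)  # 18800 ← Chrome listens here
canvas  = derive_port(GATEWAY, 4)      # 18793 (NOT Chrome!)
print(control, cdp, canvas)
```

Keeping the arithmetic explicit makes Pitfall #15 concrete: 18793 and 18800 are both derived from 18789, which is exactly why they are easy to confuse.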
Chrome must be pre-started before OpenClaw tries to use it:
# Inside container (via docker exec)
CHROME_BIN="/root/.cache/ms-playwright/chromium-1208/chrome-linux64/chrome"
USER_DATA="/tmp/openclaw/profiles/openclaw"
# Clean stale locks
rm -f "$USER_DATA/SingletonLock" "$USER_DATA/SingletonCookie" "$USER_DATA/SingletonSocket"
# Launch Chrome
$CHROME_BIN \
--headless=new \
--no-sandbox \
--disable-gpu \
--remote-debugging-port=18800 \
--user-data-dir="$USER_DATA" \
--proxy-server=socks5://172.19.0.1:10808 \
&
# Verify CDP is responding
sleep 3
curl -s http://127.0.0.1:18800/json/version
Pitfall #14: Race condition if OpenClaw tries the browser before Chrome is ready. Always verify CDP responds.
Pitfall #12: A SingletonLock left by crashed Chrome prevents restart. Clean it before launch.
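The curl check returns a small JSON document from Chrome. A sketch of parsing it to extract the DevTools WebSocket URL, which is the signal that CDP is actually ready (the payload values below are illustrative, not from a live instance):

```python
import json

# Example shape of Chrome's /json/version response (values illustrative).
payload = json.loads("""{
  "Browser": "HeadlessChrome/120.0.0.0",
  "Protocol-Version": "1.3",
  "webSocketDebuggerUrl": "ws://127.0.0.1:18800/devtools/browser/abc-123"
}""")

ws_url = payload["webSocketDebuggerUrl"]
# A ws:// URL on port 18800 means CDP is up and Chrome is ready for clients.
print(ws_url.startswith("ws://127.0.0.1:18800"))  # → True
```

A readiness loop would poll the endpoint until this key appears, rather than sleeping a fixed 3 seconds.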
Pitfall #17: International websites (HN, Wikipedia, etc.) time out because the GFW blocks them at the TCP level.
Pitfall #18: Cloudflare WARP registers and shows "Connected" but actual proxy traffic fails (exit code 97) on China mainland servers. Do not waste time on this.
Install Xray on the HOST (not in container):
bash -c "$(curl -L https://github.com/XTLS/Xray-install/raw/main/install-release.sh)" @ install
Decode VMess node from Clash subscription:
# Clash subscription → base64 decode → extract vmess:// lines → base64 decode each
curl -s "YOUR_CLASH_SUBSCRIPTION_URL" | base64 -d | grep "^vmess://" | head -1 | sed 's/vmess:\/\///' | base64 -d
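The decode pipeline (base64 subscription → vmess:// lines → base64 JSON per node) can be sketched in Python. The node below is synthetic, built just to round-trip through the decoder:

```python
import base64
import json

def decode_vmess(uri: str) -> dict:
    """Decode a vmess:// URI into its JSON node config."""
    b64 = uri.removeprefix("vmess://")
    b64 += "=" * (-len(b64) % 4)  # restore base64 padding some clients strip
    return json.loads(base64.b64decode(b64))

# Synthetic node for illustration; real nodes come from your subscription.
node = {"add": "1.2.3.4", "port": "443", "id": "uuid-here", "net": "ws"}
uri = "vmess://" + base64.b64encode(json.dumps(node).encode()).decode()
print(decode_vmess(uri)["add"])  # → 1.2.3.4
```

The "add", "port", and "id" fields map onto the Xray outbound's address, port, and user UUID when you write /etc/xray/config.json.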
Configure as systemd service:
sudo systemctl enable xray
sudo systemctl start xray
# Verify: SOCKS5 on 10808, HTTP on 10809
curl --proxy socks5://127.0.0.1:10808 https://news.ycombinator.com -o /dev/null -w "%{http_code} %{time_total}s"
Chrome doesn't read environment proxy variables. Inject via wrapper script:
#!/bin/bash
# /usr/bin/chromium (wrapper - replaces the binary)
REAL_CHROME="/root/.cache/ms-playwright/chromium-1208/chrome-linux64/chrome"
exec "$REAL_CHROME" --proxy-server=socks5://172.19.0.1:10808 "$@"
CRITICAL Pitfall #19: Do NOT move the Chrome binary! It must stay in the Playwright directory alongside icudtl.dat (ICU data). Moving it causes "ICU data not found" errors. The wrapper script calls Chrome at its original location.
This is the subtlest pitfall in the entire deployment.
Pitfall #20 & #21: OpenClaw auto-creates a "chrome" profile with driver: "extension" (for Chrome Extension relay). Even if you set defaultProfile: "openclaw", the AI model's tool description explicitly tells it to use profile="chrome" for extension scenarios. In headless Docker, extension relay NEVER works.
Root Cause Chain:
1. profiles.ts → ensureDefaultChromeExtensionProfile() auto-creates a "chrome" profile with driver: "extension"
2. browser-tool.ts → the tool description tells the AI: "use profile='chrome' for extension relay"
3. The AI selects profile="chrome" → tries extension relay → fails
Solution: Explicitly define BOTH profiles pointing to CDP:
{
"browser": {
"executablePath": "/usr/bin/chromium",
"headless": true,
"noSandbox": true,
"defaultProfile": "openclaw",
"profiles": {
"openclaw": {
"cdpPort": 18800,
"color": "#FF4500"
},
"chrome": {
"cdpPort": 18800,
"color": "#FF4500"
}
}
}
}
This way, regardless of which profile the AI selects, it connects via CDP to the pre-started Chrome instance.
# 1. Chrome process running?
docker exec CONTAINER pgrep -f "chrome.*remote-debugging-port"
# 2. CDP HTTP endpoint responding?
docker exec CONTAINER curl -s http://127.0.0.1:18800/json/version | head -1
# 3. WebSocket connectivity?
# Extract wsUrl from /json/version, test with wscat or script
# 4. Page navigation works through proxy?
docker exec CONTAINER curl -s --proxy socks5://172.19.0.1:10808 https://news.ycombinator.com | head -5
# 5. Full browser test: navigate and get title
# Send bot a message: "Visit https://news.ycombinator.com and tell me the top 3 posts"
Telegram Bot (server)           GitHub Private Repo           Claude Code (local)
        │                       (openclaw-memory)                     │
        │  cron every 6h ──►            │                             │
        │  · memory.sqlite              │         ◄── git pull        │
        │  · memory-export.md           │                             │
        │  · sessions-meta.json         │                             │
        └── telegram-bot/ ──────────►   │   ◄──── claude-code/ ───────┘
                                        │
                             shared/ (bidirectional)
Generate SSH key on server:
ssh-keygen -t ed25519 -C "openclaw-memory-sync" -f ~/.ssh/id_ed25519 -N ""
Create private GitHub repo:
gh repo create yourname/openclaw-memory --private
Add deploy key (with write access):
gh repo deploy-key add ~/.ssh/id_ed25519.pub --repo yourname/openclaw-memory --title "openclaw-server" --allow-write
Clone on server:
git config --global user.name "openclaw-bot"
git config --global user.email "openclaw-bot@users.noreply.github.com"
ssh-keyscan github.com >> ~/.ssh/known_hosts
git clone git@github.com:yourname/openclaw-memory.git ~/openclaw-memory
Deploy sync script: See 📋 sync-memory.sh
Set up cron:
crontab -e
# Add:
0 */6 * * * /home/ubuntu/scripts/sync-memory.sh >> /home/ubuntu/logs/memory-sync.log 2>&1
Clone locally:
git clone git@github.com:yourname/openclaw-memory.git D:/openclaw-memory
The memory database is permission-restricted (0600), so the script uses sudo cp + chown.
The Markdown export is generated with the sqlite3 CLI (Claude Code reads .md, not .sqlite).
.jsonl session logs are NOT synced (too large, contain sensitive data).
Commits are skipped when nothing changed (git diff --cached --quiet).
Simply listing file paths in CLAUDE.md is NOT enough — Claude Code won't proactively read arbitrary directories.
You must add imperative read instructions so Claude Code treats them as behavioral rules:
### ⚡ Memory read instructions (must execute)
When the owner mentions the Telegram Bot, Bot memory, memory sync, or related topics, **you must proactively read the following files** to get the Bot's latest state:
1. **Bot memory export**: `D:/openclaw-memory/telegram-bot/export/memory-export.md`
2. **Sync status**: `D:/openclaw-memory/.sync-status.json`
3. **Bot config**: `D:/openclaw-memory/shared/config-sanitized.json`
If the files are missing or stale, remind the owner to run the desktop `拉取Bot记忆.bat` script or `cd /d D:\openclaw-memory && git pull`
Why this matters:
CLAUDE.md is read at session start as behavioral instructions, not just reference data. Imperative wording makes Claude Code use the Read tool to open those files.
openclaw-memory/
├── .sync-status.json # Last sync timestamp + stats
├── telegram-bot/
│ ├── db/memory.sqlite # SQLite backup
│ ├── export/memory-export.md # Markdown export (Claude Code reads this)
│ └── sessions/
│ ├── sessions-meta.json # Session metadata
│ └── session-stats.md # Session statistics
├── claude-code/ # Reserved for Claude Code data
└── shared/
└── config-sanitized.json # Sanitized openclaw.json
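One plausible way config-sanitized.json could be produced before committing to shared/. The secret field names are taken from the sample openclaw.json in this guide; the function itself is hypothetical, not part of sync-memory.sh:

```python
import copy

SECRET_KEYS = {"token", "apiKey"}  # secret fields seen in the sample openclaw.json

def sanitize(config):
    """Return a deep copy with secret values masked for the shared/ folder."""
    clean = copy.deepcopy(config)

    def scrub(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key in SECRET_KEYS:
                    node[key] = "***"
                else:
                    scrub(value)
        elif isinstance(node, list):
            for item in node:
                scrub(item)

    scrub(clean)
    return clean

cfg = {"telegram": {"token": "BOT_TOKEN"}, "models": [{"apiKey": "key"}]}
print(sanitize(cfg))  # secrets masked, structure intact
```

Masking rather than deleting keeps the config shape intact, so Claude Code can still reason about which settings exist without ever seeing the credentials.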
/home/ubuntu/.openclaw/openclaw.json
This is THE most important file. OpenClaw hot-reloads it without restart.
{
"telegram": {
"token": "BOT_TOKEN",
"apiBaseUrl": "https://your-cf-proxy.pages.dev",
"commands": { "native": false },
"multimedia": {
"enabled": true,
"image": true,
"audio": true,
"video": true
}
},
"models": [
{
"id": "model-id",
"provider": "custom",
"apiBaseUrl": "https://your-api-proxy/v1",
"apiKey": "key"
}
],
"skills": {
"maxSkillsPromptChars": 120000
},
"memory": {
"backend": "builtin"
},
"browser": {
"executablePath": "/usr/bin/chromium",
"headless": true,
"noSandbox": true,
"defaultProfile": "openclaw",
"profiles": {
"openclaw": { "cdpPort": 18800 },
"chrome": { "cdpPort": 18800 }
}
}
}
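Because the config hot-reloads, a broken edit takes effect immediately. A hypothetical sanity check covering the settings this guide treats as load-bearing (OpenClaw itself does not ship such a validator; the required keys are drawn from the sample above):

```python
# Hypothetical pre-flight check for openclaw.json; key paths are the
# settings this guide calls out, not an official OpenClaw schema.
REQUIRED_PATHS = [
    ("telegram", "apiBaseUrl"),
    ("skills", "maxSkillsPromptChars"),
    ("memory", "backend"),
    ("browser", "defaultProfile"),
]

def missing_settings(config: dict) -> list:
    """List required section.key paths absent from the config."""
    missing = []
    for section, key in REQUIRED_PATHS:
        if key not in config.get(section, {}):
            missing.append(f"{section}.{key}")
    return missing

cfg = {"telegram": {"apiBaseUrl": "https://p.pages.dev"}, "memory": {"backend": "builtin"}}
print(missing_settings(cfg))  # → ['skills.maxSkillsPromptChars', 'browser.defaultProfile']
```

Running such a check before saving would catch, for example, a missing defaultProfile before the hot-reload silently reverts browser behavior to the extension-relay path.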
| File | Location | Purpose |
|------|----------|---------|
| openclaw.json | /home/ubuntu/.openclaw/ | Master config (hot-reload) |
| SOUL.md | /home/ubuntu/.openclaw/workspace/ | Bot personality |
| USER.md | /home/ubuntu/.openclaw/workspace/ | User profile |
| skills/ | /home/ubuntu/.openclaw/workspace/ | Skill definitions |
| models/ | /home/ubuntu/.openclaw/ | Local embedding model |
| memory/default.sqlite | /home/ubuntu/.openclaw/ | Memory database |
| docker-compose.yml | /home/ubuntu/openclaw/ | Container config |
| post-start.sh | /home/ubuntu/openclaw/ | Browser setup script |
| sync-memory.sh | /home/ubuntu/scripts/ | Memory sync to GitHub |
| config.json | /etc/xray/ | Xray proxy config |
| /usr/bin/chromium | Container | Wrapper script (proxy injection) |
| # | Problem | Root Cause | Solution |
|---|---------|-----------|----------|
| 1 | Bot can't connect to Telegram | GFW blocks api.telegram.org | Cloudflare Pages reverse proxy |
| 2 | LLM API timeout | GFW blocks OpenAI/Anthropic | API proxy service |
| 3 | Only 32 skills loaded | Sync script didn't handle nested dirs | Recursive scan + flatten |
| 4 | Prompt truncated to ~52 skills | maxSkillsPromptChars default 30K | Increase to 120K |
| 5 | Bot says "52 skills" | LLM estimation error, all 161 present | Non-issue, ignore |
| 6 | Chrome not found | No desktop, xdg-open fails | Set executablePath |
| 7 | Browser timeout 15s | Config not yet applied | Update config + restart |
| 8 | Memory search no results | Embedding falls back to FTS | Local embedding model |
| 9 | Command menu error | Telegram limits 100 native commands | commands.native: false |
| 10 | sendChatAction fails | Network fluctuation | Non-fatal, auto-recovers |
| 11 | Chrome crashes after page load | /dev/shm default 64MB | shm_size: 512m |
| 12 | Chrome won't start after rebuild | Symlinks lost + SingletonLock | post-start.sh |
| 13 | API proxy no embedding | Proxy only forwards chat | Local node-llama-cpp model |
| 14 | Snapshot/interaction timeout | Chrome not started (race) | Pre-start + verify CDP |
| 15 | CDP port confusion | 18800=Chrome, 18793=Canvas | Understand port derivation |
| 16 | HuggingFace blocked | GFW blocks huggingface.co | Use hf-mirror.com |
| 17 | International sites timeout | GFW TCP-level blocking | Xray SOCKS5 proxy |
| 18 | WARP doesn't work | WARP endpoints unreliable in China | Use Xray + VMess instead |
| 19 | Chrome ICU error | Binary moved from Playwright dir | Keep original path, use wrapper |
| 20 | defaultProfile ignored | AI model follows tool description | Define both profiles explicitly |
| 21 | Two profiles inconsistent | Auto-created chrome uses extension | Both profiles point to CDP 18800 |
Bot not responding?
├─ Check container running: docker ps
├─ Check logs: docker logs openclaw-openclaw-gateway-1 --tail 50
├─ Check Telegram proxy: curl https://your-cf-proxy.pages.dev/bot{TOKEN}/getMe
└─ Check LLM API: curl your-api-proxy/v1/chat/completions
Memory not working?
├─ Check model exists: ls /home/ubuntu/.openclaw/models/*.gguf
├─ Check SQLite: sqlite3 /home/ubuntu/.openclaw/memory/default.sqlite ".tables"
├─ Check embedding: look for "embedding" in container logs
└─ Verify: memory search should use vector, not just FTS
Browser not working?
├─ Chrome running?: docker exec CONTAINER pgrep -f chrome
│ └─ No → Run post-start.sh
├─ CDP responding?: curl http://127.0.0.1:18800/json/version
│ └─ No → Check SingletonLock, restart Chrome
├─ Proxy working?: curl --proxy socks5://172.19.0.1:10808 https://httpbin.org/ip
│ └─ No → Check Xray: systemctl status xray
├─ Profile correct?: Check openclaw.json browser.profiles
│ └─ Both profiles must point to cdpPort: 18800
└─ Page loads?: Try navigating to a URL via bot command
└─ Timeout → Check GFW, verify Xray routing
openclaw.json created with all settings
SOUL.md and USER.md created
post-start.sh deployed (Chromium wrapper + Chrome pre-start)
docker compose up -d && bash post-start.sh
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/trust"
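A hedged sketch of how a calling agent might gate on the payloads from these endpoints before invoking this skill. The field names follow the contract and trust records published on this page; the gating policy itself is illustrative, not prescribed by xpersona:

```python
def safe_to_invoke(contract: dict, trust: dict) -> bool:
    """Conservative gate: require a published contract and a verified handshake."""
    return (
        contract.get("contractStatus") == "published"
        and trust.get("handshakeStatus") == "VERIFIED"
    )

# This agent's current records (contractStatus: missing, handshake: UNKNOWN)
# would fail the gate:
print(safe_to_invoke({"contractStatus": "missing"}, {"handshakeStatus": "UNKNOWN"}))
# → False
```

Such a gate operationalizes the "Operational confidence: low" guardrail rather than leaving it to human judgment.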
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T00:50:19.697Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "github",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "embedding",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "enable",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "vision",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:github|supported|profile capability:embedding|supported|profile capability:enable|supported|profile capability:vision|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Kkkano",
"href": "https://github.com/kkkano/openclaw-deployment-skill",
"sourceUrl": "https://github.com/kkkano/openclaw-deployment-skill",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T01:46:15.687Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T01:46:15.687Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/kkkano-openclaw-deployment-skill/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]