Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Invoke the pi coding agent CLI as a sub-agent. Use when delegating work to pi, running pi programmatically, sending prompts to a specific LLM model via pi, or when users say "use pi", "run pi", "ask pi", "pi agent", "delegate to pi". Includes orchestration best practices for context passing, tool scoping, and multi-agent workflows. Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
pi-agent is best for "do" and multimodal workflows where OpenClaw compatibility matters.
Not Ideal For
Not ideal where deterministic execution is required: contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Gary149
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Setup snapshot
git clone https://github.com/gary149/pi-agent-skill.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Gary149
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
bash
pi -p --no-session --model google/gemini-3.1-pro-preview --thinking high "your prompt here"
bash
pi -p --no-session --model openrouter/google/gemini-3.1-pro-preview --thinking high "your prompt here"
text
Objective: <one sentence — what to produce>
Output Format: <exact structure of the response, e.g. markdown headings, JSON schema, bullet list>
Context: <only what this agent needs — not everything you know>
Boundaries: <what NOT to do — prevents tool drift and scope creep>
bash
pi -p --no-session \
  --model google/gemini-3.1-pro-preview --thinking high \
  "Objective: Add rate limiting middleware to the login endpoint (max 5 attempts per IP per 15 min).
Output Format:
## Completed
- what was done
## Files Changed
- path — description of change
Context:
- Login route: src/routes/auth.ts:42 (POST /api/login)
- Auth middleware: src/middleware/auth.ts (exports: requireAuth, validateToken)
- Existing rate limiter dep: express-rate-limit@7.1.0 in package.json
- Pattern: middleware is registered in src/app.ts:15-30
Boundaries: Do not modify test files. Do not refactor existing auth code. Do not add new dependencies."
bash
# Describe a screenshot
pi -p --no-session --model google/gemini-3.1-pro-preview \
  @screenshot.png "Objective: Describe the UI elements in this screenshot. Output Format: bullet list."

# Multiple images
pi -p --no-session --model sonnet \
  @before.png @after.png "Objective: Describe the visual differences between these two screenshots."
bash
# Write role definition to temp file
cat > /tmp/pi-reviewer.md << 'EOF'
You are a code reviewer focused on security vulnerabilities.

## Output Format
For each finding:
### [SEVERITY] Title
- **File:** path:line
- **Issue:** description
- **Fix:** suggested remediation

## Rules
- Only report confirmed vulnerabilities, not style issues
- Use bash only for `git diff` and `git log`, never to modify files
EOF

# Pass as append-system-prompt
pi -p --no-session --append-system-prompt /tmp/pi-reviewer.md \
  --tools read,grep,find,ls,bash \
  --model google/gemini-3.1-pro-preview --thinking high \
  "Objective: Review the changes in the last 3 commits for security issues.
Boundaries: Do not modify files. Do not review test files."
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
---
name: pi-agent
description: Invoke the pi coding agent CLI as a sub-agent. Use when delegating work to pi, running pi programmatically, sending prompts to a specific LLM model via pi, or when users say "use pi", "run pi", "ask pi", "pi agent", "delegate to pi". Includes orchestration best practices for context passing, tool scoping, and multi-agent workflows.
---
Pi Agent: Sub-Agent Orchestration Guide
Default Co
Unless the user specifies a different model, always use gemini-3.1-pro-preview with high thinking:
pi -p --no-session --model google/gemini-3.1-pro-preview --thinking high "your prompt here"
Fallback: If the Google API fails (rate limit, outage, auth error), retry the same command with OpenRouter:
pi -p --no-session --model openrouter/google/gemini-3.1-pro-preview --thinking high "your prompt here"
Use pi -p (print mode) for non-interactive, single-shot execution. Never use bare pi from another agent — it requires a TTY. Always add --no-session to avoid polluting the user's session list.
Pi sub-agents must focus on execution, not exploration. Every invocation should produce a concrete deliverable — code written, files changed, a structured analysis, a direct answer.
The orchestrator (you) handles all exploration. Use your own tools and subagents to read files, grep for patterns, understand the codebase, and gather context. Then pass the distilled context to pi for execution. Never spawn pi to "explore" or "investigate" — that's your job. Pi receives context and acts on it.
Every task string passed to pi should have 4 sections. This is the single most important factor for reliable sub-agent results.
Objective: <one sentence — what to produce>
Output Format:
<exact structure of the response, e.g. markdown headings, JSON schema, bullet list>
Context:
<only what this agent needs — not everything you know>
Boundaries:
<what NOT to do — prevents tool drift and scope creep>
Vague prompts like "Review the code in src/" produce unpredictable results. The 4-section template constrains the agent's behavior:
The orchestrator has already explored the codebase and found the relevant files. Now it passes that context to pi for execution:
pi -p --no-session \
--model google/gemini-3.1-pro-preview --thinking high \
"Objective: Add rate limiting middleware to the login endpoint (max 5 attempts per IP per 15 min).
Output Format:
## Completed
- what was done
## Files Changed
- path — description of change
Context:
- Login route: src/routes/auth.ts:42 (POST /api/login)
- Auth middleware: src/middleware/auth.ts (exports: requireAuth, validateToken)
- Existing rate limiter dep: express-rate-limit@7.1.0 in package.json
- Pattern: middleware is registered in src/app.ts:15-30
Boundaries:
Do not modify test files. Do not refactor existing auth code. Do not add new dependencies."
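The 4-section template can also be assembled programmatically before shelling out to pi. A minimal sketch (`build_task_prompt` is our own helper name, not part of pi):

```python
def build_task_prompt(objective: str, output_format: str,
                      context: list[str], boundaries: str) -> str:
    """Assemble the 4-section task template (Objective / Output Format /
    Context / Boundaries) into one prompt string for `pi -p`."""
    context_block = "\n".join(f"- {item}" for item in context)
    return (
        f"Objective: {objective}\n"
        f"Output Format:\n{output_format}\n"
        f"Context:\n{context_block}\n"
        f"Boundaries: {boundaries}"
    )

prompt = build_task_prompt(
    objective="Add rate limiting to POST /api/login.",
    output_format="## Files Changed\n- path: description of change",
    context=["Route: src/routes/auth.ts:42", "Dep: express-rate-limit@7.1.0"],
    boundaries="Do not modify tests.",
)
print(prompt)
```

The resulting string is passed as the final positional argument to `pi -p --no-session`.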
Pi has 4 mechanisms for injecting context. Choose based on what the sub-agent needs.
| Mechanism | When to use | Example |
|-----------|-------------|---------|
| @file | Agent needs full file content (<200 lines) or an image | @src/config.ts "Explain this" / @screenshot.png "What's wrong?" |
| --system-prompt <path> | Custom agent identity — replaces pi's defaults | --system-prompt /tmp/auditor.md |
| --append-system-prompt <path> | Add constraints to pi's default identity | --append-system-prompt /tmp/rules.md |
| Piped stdin | Compressed summary from a prior agent | echo "$scout_output" \| pi -p ... |
The @file syntax also works for images. Pi auto-detects image files (png, jpg, gif, webp) by magic bytes and sends them as vision input. The model must support multimodal input (Gemini, Claude, GPT-4o do; most others don't).
# Describe a screenshot
pi -p --no-session --model google/gemini-3.1-pro-preview \
@screenshot.png "Objective: Describe the UI elements in this screenshot. Output Format: bullet list."
# Multiple images
pi -p --no-session --model sonnet \
@before.png @after.png "Objective: Describe the visual differences between these two screenshots."
Images are auto-resized to fit within 2000x2000px / 4.5MB before sending.
For larger files, give the agent read/grep tools to explore instead of injecting the whole thing. Use --append-system-prompt for role specialization: write the role definition to a temp file and pass the path. This preserves pi's built-in capabilities while adding your constraints. Use --system-prompt only when you need a blank-slate agent, and pair it with --no-skills --no-extensions to strip all default context.
# Write role definition to temp file
cat > /tmp/pi-reviewer.md << 'EOF'
You are a code reviewer focused on security vulnerabilities.
## Output Format
For each finding:
### [SEVERITY] Title
- **File:** path:line
- **Issue:** description
- **Fix:** suggested remediation
## Rules
- Only report confirmed vulnerabilities, not style issues
- Use bash only for `git diff` and `git log`, never to modify files
EOF
# Pass as append-system-prompt
pi -p --no-session --append-system-prompt /tmp/pi-reviewer.md \
--tools read,grep,find,ls,bash \
--model google/gemini-3.1-pro-preview --thinking high \
"Objective: Review the changes in the last 3 commits for security issues.
Boundaries: Do not modify files. Do not review test files."
Match tools to the task type. Giving a review agent write/edit tools is asking for trouble. Exploration/recon is the orchestrator's job — don't delegate it to pi.
| Task type | Tools | Thinking | Rationale |
|-----------|-------|----------|-----------|
| Code review | read,grep,find,ls,bash | medium–high | Bash for git diff/log only |
| Implementation | read,bash,edit,write (default) | high | Full capabilities |
| Targeted analysis | read,grep,find,ls | medium–high | Read-only, orchestrator already identified what to analyze |
| Pure reasoning | --no-tools | high–xhigh | No file access needed |
| Build / test | read,bash,ls | minimal | Run commands, read output |
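The tool-scoping table above can be encoded as presets so an orchestrator picks flags mechanically rather than ad hoc. A sketch under our own naming (`PI_PRESETS`, `pi_flags`); where the table gives a thinking range, one end is chosen here:

```python
# Tool/thinking presets per task type, mirroring the scoping table.
# tools=None means "use pi's defaults"; tools="" means --no-tools.
PI_PRESETS = {
    "code_review":       {"tools": "read,grep,find,ls,bash", "thinking": "high"},
    "implementation":    {"tools": None,                     "thinking": "high"},
    "targeted_analysis": {"tools": "read,grep,find,ls",      "thinking": "medium"},
    "pure_reasoning":    {"tools": "",                       "thinking": "xhigh"},
    "build_test":        {"tools": "read,bash,ls",           "thinking": "minimal"},
}

def pi_flags(task_type: str) -> list[str]:
    """Translate a preset into pi CLI flags."""
    preset = PI_PRESETS[task_type]
    flags = ["--no-session", "--thinking", preset["thinking"]]
    if preset["tools"] == "":
        flags.append("--no-tools")
    elif preset["tools"] is not None:
        flags += ["--tools", preset["tools"]]
    return flags

print(pi_flags("pure_reasoning"))
```

Keeping the presets in one table makes it harder to accidentally hand a review agent write access.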
| Flag | Effect |
|------|--------|
| --no-tools | Disable all built-in tools |
| --no-skills | Don't load any skills |
| --no-extensions | Don't load any extensions |
Use --no-skills --no-extensions when you're providing a complete --system-prompt — otherwise the agent loads skills/extensions that may conflict with your custom identity.
result=$(pi -p --no-session --no-tools \
--model haiku "Objective: Convert this list of endpoints into an OpenAPI snippet.
Output Format: YAML OpenAPI paths block.
Context:
- GET /api/users (returns User[])
- POST /api/users (body: CreateUserDTO, returns User)
- DELETE /api/users/:id (returns 204)
Boundaries: Only output YAML, no explanation.")
echo "$result"
Use --mode json for programmatic parsing. Pi emits newline-delimited JSON events. The key event types:
| Event type | Contains | Use for |
|------------|----------|---------|
| message_end | Full assistant message with usage, stopReason, model | Final answer extraction, cost tracking |
| tool_result_end | Tool output message | Monitoring tool usage |
# Extract the final assistant text from JSON mode
pi -p --no-session --mode json --model sonnet "Summarize this file" @README.md \
| grep '"type":"message_end"' \
| python3 -c "
import sys, json
for line in sys.stdin:
evt = json.loads(line)
msg = evt.get('message', {})
if msg.get('role') == 'assistant':
for block in msg.get('content', []):
if block.get('type') == 'text':
print(block['text'])
"
For parallel sub-agents, have each write to a named output file, then merge:
# Fan out: 3 pi agents implement the same refactor across different modules
for mod in api auth db; do
pi -p --no-session \
--model google/gemini-3.1-pro-preview --thinking high \
"Objective: Replace all console.log calls with the structured logger in src/$mod/.
Output Format:
## Files Changed
- path:line — what was changed
Context: Logger import is \`import { logger } from '@/lib/logger'\`. Use logger.info/warn/error.
Boundaries: Only modify src/$mod/. Do not change test files." > "/tmp/refactor-$mod.txt" &
done
wait
# Orchestrator merges results
cat /tmp/refactor-*.txt
# BAD: every agent gets the entire codebase summary
pi -p "Here is the full architecture doc (2000 lines)... Now find TODOs in src/utils/"
Each agent should receive only the context it needs. A TODO-finder doesn't need your architecture doc.
# BAD: no structure, unpredictable output
pi -p "Review the code in src/"
# GOOD: 4-section template
pi -p "Objective: Identify functions with cyclomatic complexity >10 in src/.
Output Format: - path:line functionName (complexity: N)
Context: This is a TypeScript project using ESLint.
Boundaries: Do not modify files. Do not review tests."
# BAD: using pi to explore the codebase
pi -p "Explore the auth module and tell me how it works"
pi -p --tools read,grep,find,ls "Scout the codebase for relevant files..."
# GOOD: explore yourself, then delegate execution
# (use your own tools: Read, Grep, Glob, or spawn subagents)
# Once you understand the codebase, pass distilled context to pi:
pi -p --no-session "Objective: Add rate limiting to POST /api/login.
Context:
- Route: src/routes/auth.ts:42
- Middleware pattern: src/middleware/auth.ts
- Dep available: express-rate-limit@7.1.0
Boundaries: Do not modify tests."
# BAD: each agent gets ALL prior outputs, context explodes
step1=$(pi -p "Analyze the codebase...")
step2=$(pi -p "Given this analysis: $step1 — now plan changes...")
step3=$(pi -p "Given analysis: $step1 and plan: $step2 — now implement...")
Compress between steps. Extract only the sections the next agent needs.
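One way to compress between steps is to forward only a named section of the previous agent's structured output. A sketch, assuming the agents emit `## Heading` markdown as in the templates above (`extract_section` is our own helper):

```python
def extract_section(markdown: str, heading: str) -> str:
    """Return the body of one '## Heading' section from a prior agent's
    markdown output, so the next agent receives only what it needs."""
    out, keep = [], False
    for line in markdown.splitlines():
        if line.startswith("## "):
            keep = line[3:].strip() == heading
            continue
        if keep:
            out.append(line)
    return "\n".join(out).strip()

step1 = "## Analysis\nlots of detail...\n## Files Changed\n- src/app.ts: added limiter"
print(extract_section(step1, "Files Changed"))  # → - src/app.ts: added limiter
```

The next invocation then gets `Context: $(extract_section ...)` instead of the full transcript.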
# BAD: analysis agent has edit/write tools, might modify files
pi -p "Review this code for bugs" @src/app.ts
# GOOD: read-only tools
pi -p --tools read,grep,find,ls "Review this code for bugs" @src/app.ts
# BAD: creates a persistent session for a throwaway task
pi -p --model haiku "What does this regex do?"
# GOOD: ephemeral
pi -p --no-session --model haiku "What does this regex do?"
# BAD: passing 5 prior agent outputs when only the last one matters
pi -p "Previous outputs: $step1 $step2 $step3 $step4 $step5 — now summarize"
# GOOD: pass only what's needed
pi -p "Objective: Summarize findings. Context: $step5 Boundaries: ..."
The orchestrator (you) does the exploration using your own tools/subagents, then passes distilled context to pi for implementation.
# Step 1: YOU (the orchestrator) explore the codebase
# Use your own Read, Grep, Glob tools or spawn Explore subagents to understand:
# - Login route: src/routes/auth.ts:42 (POST /api/login, calls AuthService.login)
# - Auth middleware: src/middleware/auth.ts (exports requireAuth, validateToken)
# - express-rate-limit@7.1.0 already in package.json
# - Middleware registered in src/app.ts:15-30
# Step 2: Pass distilled context to pi for execution
pi -p --no-session \
--model google/gemini-3.1-pro-preview --thinking high \
"Objective: Add rate limiting to the login endpoint (max 5 attempts per IP per 15 min).
Output Format:
## Completed
- what was done
## Files Changed
- path — description of change
## Notes
- anything the caller should know
Context:
- Login route: src/routes/auth.ts:42 (POST /api/login, calls AuthService.login)
- Auth middleware pattern: src/middleware/auth.ts (exports requireAuth, validateToken)
- express-rate-limit@7.1.0 already in package.json
- Middleware registered in src/app.ts:15-30
Boundaries: Only modify auth-related files. Do not refactor existing code. Do not change tests."
The orchestrator has already identified the audit areas. Each pi instance executes a focused, scoped audit.
# Orchestrator has identified these areas to audit (via its own exploration)
areas=("SQL injection:src/db" "XSS:src/views" "Auth bypass:src/auth")
# Fan out parallel executions — each pi does targeted analysis, not exploration
for entry in "${areas[@]}"; do
IFS=: read -r vuln_type dir <<< "$entry"
pi -p --no-session --tools read,grep,find,ls \
--model sonnet --thinking high \
"Objective: Audit $dir/ for $vuln_type vulnerabilities.
Output Format:
### $vuln_type Findings
- **[HIGH|MED|LOW]** path:line — description
Boundaries: Do not modify files. Only report $vuln_type issues. Do not explore outside $dir/." \
> "/tmp/audit-$(echo $vuln_type | tr ' ' '-').txt" &
done
wait
# Merge — orchestrator can do this itself, or use pi for structured formatting
cat /tmp/audit-*.txt | pi -p --no-session --no-tools \
--model haiku --thinking minimal \
"Objective: Merge these security audit results into a single prioritized report.
Output Format:
## Critical
## High
## Medium
## Low
Each item: - path:line — vulnerability type — description
Boundaries: Do not add findings not present in the input. Do not suggest fixes."
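The orchestrator can also merge the audit files itself, since the output format is fixed. A parsing sketch for the `- **[SEV]** path:line — description` lines the audit prompt requests (`group_findings` is our own name):

```python
import re
from collections import defaultdict

def group_findings(report: str) -> dict[str, list[str]]:
    """Group audit findings of the form '- **[SEV]** path:line — description'
    by severity, preserving order within each group."""
    groups: dict[str, list[str]] = defaultdict(list)
    for line in report.splitlines():
        m = re.match(r"- \*\*\[(HIGH|MED|LOW)\]\*\* (.+)", line.strip())
        if m:
            groups[m.group(1)].append(m.group(2))
    return dict(groups)

sample = "### SQL injection Findings\n- **[HIGH]** src/db/query.ts:10 — raw string concat"
print(group_findings(sample))
```

Deterministic merging like this avoids spending a model call, and guarantees no findings are invented during the merge.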
# Write a specialized agent definition to a temp file
cat > /tmp/pi-architect.md << 'EOF'
You are a senior software architect. You analyze codebases and produce
architectural decision records (ADRs).
You must NOT make any changes to files. Only read, analyze, and produce ADRs.
## Output Format
# ADR-NNN: Title
## Status: Proposed
## Context
## Decision
## Consequences
EOF
pi -p --no-session \
--system-prompt /tmp/pi-architect.md \
--no-skills --no-extensions \
--tools read,grep,find,ls \
--model google/gemini-3.1-pro-preview --thinking high \
"Objective: Produce an ADR for migrating from Express to Fastify.
Context: The project is in /Users/me/app. Entry point is src/index.ts.
Boundaries: Do not modify files. Focus only on the migration decision."
# Orchestrator already read the file and extracted the function signatures.
# Now use pi to generate documentation from that context.
pi -p --no-session --mode json --no-tools \
--model sonnet --thinking medium \
"Objective: Generate JSDoc comments for these functions.
Output Format: For each function, output the JSDoc block followed by the signature.
Context:
- createSession(userId: string, options?: SessionOptions): Promise<Session> — creates a new auth session
- validateToken(token: string): TokenPayload | null — validates JWT, returns null if invalid
- revokeSession(sessionId: string): Promise<void> — invalidates an active session
Boundaries: Only output JSDoc + signatures. No explanation." \
2>/dev/null \
| while IFS= read -r line; do
type=$(echo "$line" | python3 -c "import sys,json; print(json.loads(sys.stdin.read()).get('type',''))" 2>/dev/null)
if [ "$type" = "message_end" ]; then
echo "$line" | python3 -c "
import sys, json
evt = json.loads(sys.stdin.read())
msg = evt.get('message', {})
if msg.get('role') == 'assistant':
for block in msg.get('content', []):
if block.get('type') == 'text':
print(block['text'])
"
fi
done
| Flag | Description |
|------|-------------|
| -p, --print | Non-interactive mode (required for programmatic use) |
| --model <pattern> | Model ID, fuzzy match, or provider/id or id:thinking |
| --provider <name> | Provider name (anthropic, google, openai, etc.) |
| --thinking <level> | off, minimal, low, medium, high, xhigh |
| --system-prompt <path> | Replace system prompt with contents of file at path |
| --append-system-prompt <path> | Append file contents to default system prompt |
| --tools <list> | Comma-separated: read,bash,edit,write,grep,find,ls |
| --no-tools | Disable all built-in tools |
| --no-skills | Don't load skills |
| --no-extensions | Don't load extensions |
| --no-session | Ephemeral, don't persist session |
| --mode json | Output JSON event stream (newline-delimited) |
| --mode rpc | Long-running RPC mode over stdin/stdout |
| @file | Include file contents in the prompt |
# Provider/model shorthand (recommended)
pi -p --model anthropic/claude-opus-4-6 "prompt"
# Fuzzy match (partial names work)
pi -p --model sonnet "prompt"
# Model with thinking level shorthand
pi -p --model sonnet:high "prompt"
Supported providers include anthropic, google, openai, xai, groq, cerebras, mistral, openrouter, huggingface, amazon-bedrock, azure-openai-responses, github-copilot, minimax, kimi-coding, and others. Use pi --list-models to see all available models.
For long-running programmatic control, use RPC mode over stdin/stdout:
pi --mode rpc --model anthropic/claude-opus-4-6
Send JSON commands on stdin:
{"type": "prompt", "message": "Hello"}
{"type": "set_model", "provider": "google", "modelId": "gemini-2.5-pro"}
{"type": "set_thinking_level", "level": "high"}
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-16T23:32:34.124Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "do",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multimodal",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:do|supported|profile capability:multimodal|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Gary149",
"href": "https://github.com/gary149/pi-agent-skill",
"sourceUrl": "https://github.com/gary149/pi-agent-skill",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T01:13:32.799Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T01:13:32.799Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/gary149-pi-agent-skill/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]