Agent Dossier · GITHUB OPENCLEW · Safety 100/100

Xpersona Agent

Network-AI

Multi-agent swarm orchestration for complex workflows. Coordinates multiple agents, decomposes tasks, manages shared state via a local blackboard file, and enforces permission walls before sensitive operations. All execution is local and sandboxed.

OpenClaw · self-declared
12 GitHub stars · Trust evidence available
git clone https://github.com/jovanSAPFIONEER/Network-AI.git

Overall rank

#36

Adoption

12 GitHub stars

Trust

Unknown

Freshness

Last checked Feb 24, 2026

Best For

Network-AI is best for multi-agent orchestration workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Overview

Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.

Verified · editorial-content

Overview

Executive Summary

Multi-agent swarm orchestration for complex workflows. Coordinates multiple agents, decomposes tasks, manages shared state via a local blackboard file, and enforces permission walls before sensitive operations. All execution is local and sandboxed. Capability contract not published. No trust telemetry is available yet. 12 GitHub stars reported by the source. Last updated Apr 15, 2026.

No verified compatibility signals · 12 GitHub stars

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Feb 24, 2026

Vendor

Jovansapfioneer

Artifacts

0

Benchmarks

0

Last release

Unpublished

Install & run

Setup Snapshot

git clone https://github.com/jovanSAPFIONEER/Network-AI.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence & Timeline

Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.

Verified · editorial-content

Public facts

Evidence Ledger

Vendor (1)

Vendor

Jovansapfioneer

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Apr 15, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

12 GitHub stars

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Artifacts & Docs

Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.

Self-declared · GITHUB OPENCLEW

Captured outputs

Artifacts Archive

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

text

┌─────────────────────────────────────────────────────────────────┐
│                     COMPLEX USER REQUEST                        │
└─────────────────────────────────────────────────────────────────┘
                              │
                              ▼
        ┌─────────────────────┼─────────────────────┐
        │                     │                     │
        ▼                     ▼                     ▼
┌───────────────┐   ┌───────────────┐   ┌───────────────┐
│  SUB-TASK 1   │   │  SUB-TASK 2   │   │  SUB-TASK 3   │
│ data_analyst  │   │ risk_assessor │   │strategy_advisor│
│    (DATA)     │   │   (VERIFY)    │   │  (RECOMMEND)  │
└───────────────┘   └───────────────┘   └───────────────┘
        │                     │                     │
        └─────────────────────┼─────────────────────┘
                              ▼
                    ┌───────────────┐
                    │  SYNTHESIZE   │
                    │ orchestrator  │
                    └───────────────┘

text

TASK DECOMPOSITION for: "{user_request}"

Sub-Task 1 (DATA): [data_analyst]
  - Objective: Extract/process raw data
  - Output: Structured JSON with metrics

Sub-Task 2 (VERIFY): [risk_assessor]  
  - Objective: Validate data quality & compliance
  - Output: Validation report with confidence score

Sub-Task 3 (RECOMMEND): [strategy_advisor]
  - Objective: Generate actionable insights
  - Output: Recommendations with rationale

bash

# ALWAYS run this BEFORE sessions_send
python {baseDir}/scripts/swarm_guard.py intercept-handoff \
  --task-id "task_001" \
  --from orchestrator \
  --to data_analyst \
  --message "Analyze Q4 revenue data"

text

IF result.allowed == true:
    → Proceed with sessions_send
    → Note tokens_spent and remaining_budget
ELSE:
    → STOP - Do NOT call sessions_send
    → Report blocked reason to user
    → Consider: reduce scope or abort task

bash

# Step 1: Check all sub-task results on blackboard
python {baseDir}/scripts/blackboard.py read "task:001:data_analyst"
python {baseDir}/scripts/blackboard.py read "task:001:risk_assessor"
python {baseDir}/scripts/blackboard.py read "task:001:strategy_advisor"

# Step 2: Validate each result
python {baseDir}/scripts/swarm_guard.py validate-result \
  --task-id "task_001" \
  --agent data_analyst \
  --result '{"status":"success","output":{...},"confidence":0.85}'

# Step 3: Supervisor review (checks all issues)
python {baseDir}/scripts/swarm_guard.py supervisor-review --task-id "task_001"

# Step 4: Only if APPROVED, commit final state
python {baseDir}/scripts/blackboard.py write "task:001:final" \
  '{"status":"SUCCESS","output":{...}}'

bash

python {baseDir}/scripts/swarm_guard.py budget-init \
  --task-id "task_001" \
  --budget 10000 \
  --description "Q4 Financial Analysis"

Editorial read

Docs & README

Docs source

GITHUB OPENCLEW

Editorial quality

ready

Multi-agent swarm orchestration for complex workflows. Coordinates multiple agents, decomposes tasks, manages shared state via a local blackboard file, and enforces permission walls before sensitive operations. All execution is local and sandboxed.

Full README

---
name: Network-AI
description: Multi-agent swarm orchestration for complex workflows. Coordinates multiple agents, decomposes tasks, manages shared state via a local blackboard file, and enforces permission walls before sensitive operations. All execution is local and sandboxed.
metadata:
  openclaw:
    emoji: "\U0001F41D"
    homepage: https://github.com/jovanSAPFIONEER/Network-AI
    requires:
      bins:
        - python3
---

Swarm Orchestrator Skill

Multi-agent coordination system for complex workflows requiring task delegation, parallel execution, and permission-controlled access to sensitive APIs.

🎯 Orchestrator System Instructions

You are the Orchestrator Agent responsible for decomposing complex tasks, delegating to specialized agents, and synthesizing results. Follow this protocol:

Core Responsibilities

  1. DECOMPOSE complex prompts into 3 specialized sub-tasks
  2. DELEGATE using the budget-aware handoff protocol
  3. VERIFY results on the blackboard before committing
  4. SYNTHESIZE final output only after all validations pass

Task Decomposition Protocol

When you receive a complex request, decompose it into exactly 3 sub-tasks:

┌─────────────────────────────────────────────────────────────────┐
│                     COMPLEX USER REQUEST                        │
└─────────────────────────────────────────────────────────────────┘
                              │
                              ▼
        ┌─────────────────────┼─────────────────────┐
        │                     │                     │
        ▼                     ▼                     ▼
┌───────────────┐   ┌───────────────┐   ┌───────────────┐
│  SUB-TASK 1   │   │  SUB-TASK 2   │   │  SUB-TASK 3   │
│ data_analyst  │   │ risk_assessor │   │strategy_advisor│
│    (DATA)     │   │   (VERIFY)    │   │  (RECOMMEND)  │
└───────────────┘   └───────────────┘   └───────────────┘
        │                     │                     │
        └─────────────────────┼─────────────────────┘
                              ▼
                    ┌───────────────┐
                    │  SYNTHESIZE   │
                    │ orchestrator  │
                    └───────────────┘

Decomposition Template:

TASK DECOMPOSITION for: "{user_request}"

Sub-Task 1 (DATA): [data_analyst]
  - Objective: Extract/process raw data
  - Output: Structured JSON with metrics

Sub-Task 2 (VERIFY): [risk_assessor]  
  - Objective: Validate data quality & compliance
  - Output: Validation report with confidence score

Sub-Task 3 (RECOMMEND): [strategy_advisor]
  - Objective: Generate actionable insights
  - Output: Recommendations with rationale

Budget-Aware Handoff Protocol

CRITICAL: Before EVERY sessions_send, call the handoff interceptor:

# ALWAYS run this BEFORE sessions_send
python {baseDir}/scripts/swarm_guard.py intercept-handoff \
  --task-id "task_001" \
  --from orchestrator \
  --to data_analyst \
  --message "Analyze Q4 revenue data"

Decision Logic:

IF result.allowed == true:
    → Proceed with sessions_send
    → Note tokens_spent and remaining_budget
ELSE:
    → STOP - Do NOT call sessions_send
    → Report blocked reason to user
    → Consider: reduce scope or abort task
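The decision logic above can be expressed as a small helper. This is a sketch: the result field names `allowed`, `tokens_spent`, `remaining_budget`, and `reason` are assumed from the examples in this README, not a published schema.

```python
def handoff_decision(result: dict) -> str:
    """Map an interceptor result to the next action per the protocol above.

    Assumes the interceptor returns a dict with 'allowed', and optionally
    'tokens_spent', 'remaining_budget', and 'reason'.
    """
    if result.get("allowed"):
        # Proceed with sessions_send, noting budget consumption.
        return (f"PROCEED: sessions_send allowed "
                f"(spent={result.get('tokens_spent', 0)}, "
                f"remaining={result.get('remaining_budget', 0)})")
    # Blocked: do not call sessions_send; surface the reason instead.
    return f"STOP: do not call sessions_send ({result.get('reason', 'unspecified')})"
```

A caller would parse the interceptor's output into a dict and branch on this string before ever invoking sessions_send.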

Pre-Commit Verification Workflow

Before returning final results to the user:

# Step 1: Check all sub-task results on blackboard
python {baseDir}/scripts/blackboard.py read "task:001:data_analyst"
python {baseDir}/scripts/blackboard.py read "task:001:risk_assessor"
python {baseDir}/scripts/blackboard.py read "task:001:strategy_advisor"

# Step 2: Validate each result
python {baseDir}/scripts/swarm_guard.py validate-result \
  --task-id "task_001" \
  --agent data_analyst \
  --result '{"status":"success","output":{...},"confidence":0.85}'

# Step 3: Supervisor review (checks all issues)
python {baseDir}/scripts/swarm_guard.py supervisor-review --task-id "task_001"

# Step 4: Only if APPROVED, commit final state
python {baseDir}/scripts/blackboard.py write "task:001:final" \
  '{"status":"SUCCESS","output":{...}}'

Verdict Handling:

| Verdict | Action |
|---------|--------|
| APPROVED | Commit and return results to user |
| WARNING | Review issues, fix if possible, then commit |
| BLOCKED | Do NOT return results. Report failure. |


When to Use This Skill

  • Task Delegation: Route work to specialized agents (data_analyst, strategy_advisor, risk_assessor)
  • Parallel Execution: Run multiple agents simultaneously and synthesize results
  • Permission Wall: Gate access to SAP_API, FINANCIAL_API, or DATA_EXPORT operations
  • Shared Blackboard: Coordinate agent state via persistent markdown file

Quick Start

1. Initialize Budget (FIRST!)

Always initialize a budget before any multi-agent task:

python {baseDir}/scripts/swarm_guard.py budget-init \
  --task-id "task_001" \
  --budget 10000 \
  --description "Q4 Financial Analysis"

2. Delegate a Task to Another Session

Use OpenClaw's built-in session tools to delegate work:

sessions_list    # See available sessions/agents
sessions_send    # Send task to another session
sessions_history # Check results from delegated work

Example delegation prompt:

Use sessions_send to ask the data_analyst session to:
"Analyze Q4 revenue trends from the SAP export data and summarize key insights"

3. Check Permission Before API Access

Before accessing SAP or Financial APIs, evaluate the request:

# Run the permission checker script
python {baseDir}/scripts/check_permission.py \
  --agent "data_analyst" \
  --resource "SAP_API" \
  --justification "Need Q4 invoice data for quarterly report" \
  --scope "read:invoices"

The script will output a grant token if approved, or denial reason if rejected.

4. Use the Shared Blackboard

Read/write coordination state:

# Write to blackboard
python {baseDir}/scripts/blackboard.py write "task:q4_analysis" '{"status": "in_progress", "agent": "data_analyst"}'

# Read from blackboard  
python {baseDir}/scripts/blackboard.py read "task:q4_analysis"

# List all entries
python {baseDir}/scripts/blackboard.py list

Agent-to-Agent Handoff Protocol

When delegating tasks between agents/sessions:

Step 1: Initialize Budget & Check Capacity

# Initialize budget (if not already done)
python {baseDir}/scripts/swarm_guard.py budget-init --task-id "task_001" --budget 10000

# Check current status
python {baseDir}/scripts/swarm_guard.py budget-check --task-id "task_001"

Step 2: Identify Target Agent

sessions_list  # Find available agents

Common agent types:

| Agent | Specialty |
|-------|-----------|
| data_analyst | Data processing, SQL, analytics |
| strategy_advisor | Business strategy, recommendations |
| risk_assessor | Risk analysis, compliance checks |
| orchestrator | Coordination, task decomposition |

Step 3: Intercept Before Handoff (REQUIRED)

# This checks budget AND handoff limits before allowing the call
python {baseDir}/scripts/swarm_guard.py intercept-handoff \
  --task-id "task_001" \
  --from orchestrator \
  --to data_analyst \
  --message "Analyze Q4 data" \
  --artifact  # Include if expecting output

If ALLOWED: proceed to Step 4.
If BLOCKED: stop; do not call sessions_send.

Step 4: Construct Handoff Message

Include these fields in your delegation:

  • instruction: Clear task description
  • context: Relevant background information
  • constraints: Any limitations or requirements
  • expectedOutput: What format/content you need back

Step 5: Send via sessions_send

sessions_send to data_analyst:
"[HANDOFF]
Instruction: Analyze Q4 revenue by product category
Context: Using SAP export from ./data/q4_export.csv
Constraints: Focus on top 5 categories only
Expected Output: JSON summary with category, revenue, growth_pct
[/HANDOFF]"
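Programmatically, the four fields can be folded into the [HANDOFF] envelope with a plain helper. This function is an illustration, not part of the skill's scripts.

```python
def build_handoff(instruction: str, context: str,
                  constraints: str, expected_output: str) -> str:
    """Format the four documented handoff fields into the [HANDOFF]
    envelope shown above."""
    return (
        "[HANDOFF]\n"
        f"Instruction: {instruction}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        f"Expected Output: {expected_output}\n"
        "[/HANDOFF]"
    )
```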

Step 6: Check Results

sessions_history data_analyst  # Get the response

Permission Wall (AuthGuardian)

CRITICAL: Always check permissions before accessing:

  • SAP_API - SAP system connections
  • FINANCIAL_API - Financial data services
  • EXTERNAL_SERVICE - Third-party APIs
  • DATA_EXPORT - Exporting sensitive data

Permission Evaluation Criteria

| Factor | Weight | Criteria |
|--------|--------|----------|
| Justification | 40% | Must explain specific task need |
| Trust Level | 30% | Agent's established trust score |
| Risk Assessment | 30% | Resource sensitivity + scope breadth |
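Assuming each factor is normalized to [0, 1], the 40/30/30 weighting can be sketched as below. The actual scoring inside check_permission.py may differ; this only illustrates how the weights combine.

```python
def permission_score(justification_score: float,
                     trust_level: float,
                     risk_score: float) -> float:
    """Combine the three factors with the documented 40/30/30 weights.

    Inputs are assumed normalized to [0, 1]. risk_score is inverted because
    a riskier resource/scope should lower the overall score.
    """
    return round(0.4 * justification_score
                 + 0.3 * trust_level
                 + 0.3 * (1.0 - risk_score), 3)

# Strong justification, decent trust, moderate risk:
# permission_score(0.9, 0.7, 0.4) == 0.75
```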

Using the Permission Script

# Request permission
python {baseDir}/scripts/check_permission.py \
  --agent "your_agent_id" \
  --resource "FINANCIAL_API" \
  --justification "Generating quarterly financial summary for board presentation" \
  --scope "read:revenue,read:expenses"

# Output if approved:
# ✅ GRANTED
# Token: grant_a1b2c3d4e5f6
# Expires: 2026-02-04T15:30:00Z
# Restrictions: read_only, no_pii_fields, audit_required

# Output if denied:
# ❌ DENIED
# Reason: Justification is insufficient. Please provide specific task context.

Restriction Types

| Resource | Default Restrictions |
|----------|---------------------|
| SAP_API | read_only, max_records:100 |
| FINANCIAL_API | read_only, no_pii_fields, audit_required |
| EXTERNAL_SERVICE | rate_limit:10_per_minute |
| DATA_EXPORT | anonymize_pii, local_only |

Shared Blackboard Pattern

The blackboard (swarm-blackboard.md) is a markdown file for agent coordination:

# Swarm Blackboard
Last Updated: 2026-02-04T10:30:00Z

## Knowledge Cache
### task:q4_analysis
{"status": "completed", "result": {...}, "agent": "data_analyst"}

### cache:revenue_summary  
{"q4_total": 1250000, "growth": 0.15}

Blackboard Operations

# Write with TTL (expires after 1 hour)
python {baseDir}/scripts/blackboard.py write "cache:temp_data" '{"value": 123}' --ttl 3600

# Read (returns null if expired)
python {baseDir}/scripts/blackboard.py read "cache:temp_data"

# Delete
python {baseDir}/scripts/blackboard.py delete "cache:temp_data"

# Get full snapshot
python {baseDir}/scripts/blackboard.py snapshot
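The TTL semantics above ("returns null if expired") can be sketched with a small expiry check. The entry field names `value`, `written_at`, and `ttl` are assumptions for illustration; the real blackboard.py may store them differently.

```python
import time

def is_expired(entry: dict, now=None) -> bool:
    """An entry with 'written_at' (epoch seconds) and 'ttl' (seconds) is
    expired once written_at + ttl has passed. No TTL means never expires."""
    now = time.time() if now is None else now
    ttl = entry.get("ttl")
    if ttl is None:
        return False
    return now >= entry["written_at"] + ttl

def read_with_ttl(store: dict, key: str, now=None):
    """Return the stored value, or None if the key is missing or expired,
    matching the documented null-on-expiry behavior."""
    entry = store.get(key)
    if entry is None or is_expired(entry, now):
        return None
    return entry["value"]
```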

Parallel Execution

For tasks requiring multiple agent perspectives:

Strategy 1: Merge (Default)

Combine all agent outputs into unified result.

Ask data_analyst AND strategy_advisor to both analyze the dataset.
Merge their insights into a comprehensive report.

Strategy 2: Vote

Use when you need consensus - pick the result with highest confidence.

Strategy 3: First-Success

Use for redundancy - take first successful result.

Strategy 4: Chain

Sequential processing - output of one feeds into next.
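The four strategies can be sketched as small combinators over result dicts. The `status`/`output`/`confidence` fields match the required result fields documented later in this README; everything else here is illustrative.

```python
def merge(results: list) -> dict:
    """Strategy 1: combine every agent's output into one dict."""
    combined = {}
    for r in results:
        combined.update(r["output"])
    return combined

def vote(results: list) -> dict:
    """Strategy 2: pick the result with the highest confidence."""
    return max(results, key=lambda r: r.get("confidence", 0.0))

def first_success(results: list):
    """Strategy 3: take the first result whose status is 'success'."""
    return next((r for r in results if r.get("status") == "success"), None)

def chain(stages: list, seed):
    """Strategy 4: feed each stage's output into the next."""
    value = seed
    for stage in stages:
        value = stage(value)
    return value
```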

Example Parallel Workflow

1. sessions_send to data_analyst: "Extract key metrics from Q4 data"
2. sessions_send to risk_assessor: "Identify compliance risks in Q4 data"  
3. sessions_send to strategy_advisor: "Recommend actions based on Q4 trends"
4. Wait for all responses via sessions_history
5. Synthesize: Combine metrics + risks + recommendations into executive summary

Security Considerations

  1. Never bypass the permission wall for gated resources
  2. Always include justification explaining the business need
  3. Use minimal scope - request only what you need
  4. Check token expiry - tokens are valid for 5 minutes
  5. Validate tokens - use python {baseDir}/scripts/validate_token.py TOKEN to verify grant tokens before use
  6. Audit trail - all permission requests are logged

📝 Audit Trail Requirements (MANDATORY)

Every sensitive action MUST be logged to data/audit_log.jsonl to maintain compliance and enable forensic analysis.

What Gets Logged Automatically

The scripts automatically log these events:

  • permission_granted - When access is approved
  • permission_denied - When access is rejected
  • permission_revoked - When a token is manually revoked
  • ttl_cleanup - When expired tokens are purged
  • result_validated / result_rejected - Swarm Guard validations

Log Entry Format

{
  "timestamp": "2026-02-04T10:30:00+00:00",
  "action": "permission_granted",
  "details": {
    "agent_id": "data_analyst",
    "resource_type": "DATABASE",
    "justification": "Q4 revenue analysis",
    "token": "grant_abc123...",
    "restrictions": ["read_only", "max_records:100"]
  }
}

Reading the Audit Log

# View recent entries (last 10)
tail -10 {baseDir}/data/audit_log.jsonl

# Search for specific agent
grep "data_analyst" {baseDir}/data/audit_log.jsonl

# Count actions by type
cat {baseDir}/data/audit_log.jsonl | jq -r '.action' | sort | uniq -c

Custom Audit Entries

If you perform a sensitive action manually, log it:

import json
from datetime import datetime, timezone
from pathlib import Path

audit_file = Path("{baseDir}/data/audit_log.jsonl")
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "action": "manual_data_access",
    "details": {
        "agent": "orchestrator",
        "description": "Direct database query for debugging",
        "justification": "Investigating data sync issue #1234"
    }
}
with open(audit_file, "a") as f:
    f.write(json.dumps(entry) + "\n")

🧹 TTL Enforcement (Token Lifecycle)

Expired permission tokens are automatically tracked. Run periodic cleanup:

# Validate a grant token
python {baseDir}/scripts/validate_token.py grant_a1b2c3d4e5f6

# List expired tokens (without removing)
python {baseDir}/scripts/revoke_token.py --list-expired

# Remove all expired tokens
python {baseDir}/scripts/revoke_token.py --cleanup

# Output:
# 🧹 TTL Cleanup Complete
#    Removed: 3 expired token(s)
#    Remaining active grants: 2

Best Practice: Run --cleanup at the start of each multi-agent task to ensure a clean permission state.

⚠️ Swarm Guard: Preventing Common Failures

Two critical issues can derail multi-agent swarms:

1. The Handoff Tax 💸

Problem: Agents waste tokens "talking about" work instead of doing it.

Prevention:

# Before each handoff, check your budget:
python {baseDir}/scripts/swarm_guard.py check-handoff --task-id "task_001"

# Output:
# 🟢 Task: task_001
#    Handoffs: 1/3
#    Remaining: 2
#    Action Ratio: 100%

Rules enforced:

  • Max 3 handoffs per task - After 3, produce output or abort
  • Max 500 chars per message - Be concise: instruction + constraints + expected output
  • 60% action ratio - At least 60% of handoffs must produce artifacts
  • 2-minute planning limit - No output after 2 minutes = timeout

# Record a handoff (with tax checking):
python {baseDir}/scripts/swarm_guard.py record-handoff \
  --task-id "task_001" \
  --from orchestrator \
  --to data_analyst \
  --message "Analyze sales data, output JSON summary" \
  --artifact  # Include if this handoff produces output
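The first three rules can be enforced with a small in-memory check. This sketch mirrors the documented limits, not swarm_guard.py's actual implementation; the 2-minute planning timeout is omitted since it needs wall-clock tracking.

```python
MAX_HANDOFFS = 3
MAX_MESSAGE_CHARS = 500
MIN_ACTION_RATIO = 0.60

def check_handoff(handoffs: list, message: str) -> str:
    """Apply the handoff-tax rules to a proposed handoff.

    Each prior handoff is assumed to be a dict with an 'artifact' bool
    recording whether it produced output.
    """
    if len(handoffs) >= MAX_HANDOFFS:
        return "BLOCKED: max 3 handoffs reached -- produce output or abort"
    if len(message) > MAX_MESSAGE_CHARS:
        return "BLOCKED: message exceeds 500 chars -- be concise"
    produced = sum(1 for h in handoffs if h.get("artifact"))
    if handoffs and produced / len(handoffs) < MIN_ACTION_RATIO:
        return "BLOCKED: action ratio below 60% -- next handoff must produce an artifact"
    return "ALLOWED"
```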

2. Silent Failure Detection 👻

Problem: One agent fails silently, others keep working on bad data.

Prevention - Heartbeats:

# Agents must send heartbeats while working:
python {baseDir}/scripts/swarm_guard.py heartbeat --agent data_analyst --task-id "task_001"

# Check if an agent is healthy:
python {baseDir}/scripts/swarm_guard.py health-check --agent data_analyst

# Output if healthy:
# 💚 Agent 'data_analyst' is HEALTHY
#    Last seen: 15s ago

# Output if failed:
# 💔 Agent 'data_analyst' is UNHEALTHY
#    Reason: STALE_HEARTBEAT
#    → Do NOT use any pending results from this agent.
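The health check reduces to a staleness test on the last heartbeat. A sketch, with an assumed 60-second threshold (the real swarm_guard.py may use a different cutoff):

```python
import time

STALE_AFTER_SECONDS = 60  # assumed threshold, not taken from swarm_guard.py

def health_status(last_heartbeat: float, now=None) -> str:
    """Classify an agent from its last heartbeat (epoch seconds),
    mirroring the HEALTHY / STALE_HEARTBEAT outputs shown above."""
    now = time.time() if now is None else now
    age = now - last_heartbeat
    if age <= STALE_AFTER_SECONDS:
        return f"HEALTHY (last seen {int(age)}s ago)"
    return "UNHEALTHY: STALE_HEARTBEAT -- do not use pending results"
```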

Prevention - Result Validation:

# Before using another agent's result, validate it:
python {baseDir}/scripts/swarm_guard.py validate-result \
  --task-id "task_001" \
  --agent data_analyst \
  --result '{"status": "success", "output": {"revenue": 125000}, "confidence": 0.85}'

# Output:
# ✅ RESULT VALID
#    → APPROVED - Result can be used by other agents

Required result fields: status, output, confidence
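A minimal field check for those three required keys might look like this; the confidence-range rule is an assumption beyond the documented contract.

```python
REQUIRED_FIELDS = ("status", "output", "confidence")

def validate_result(result: dict):
    """Return (ok, issues) for the three required result fields.

    The [0, 1] bound on confidence is an added sanity check, not part of
    the documented contract.
    """
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in result]
    conf = result.get("confidence")
    if conf is not None and not (0.0 <= conf <= 1.0):
        issues.append("confidence must be within [0, 1]")
    return (not issues, issues)
```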

Supervisor Review

Before finalizing any task, run supervisor review:

python {baseDir}/scripts/swarm_guard.py supervisor-review --task-id "task_001"

# Output:
# ✅ SUPERVISOR VERDICT: APPROVED
#    Task: task_001
#    Age: 1.5 minutes
#    Handoffs: 2
#    Artifacts: 2

Verdicts:

  • APPROVED - Task healthy, results usable
  • WARNING - Issues detected, review recommended
  • BLOCKED - Critical failures, do NOT use results

Troubleshooting

Permission Denied

  • Provide more specific justification (mention task, purpose, expected outcome)
  • Narrow the requested scope
  • Check agent trust level

Blackboard Read Returns Null

  • Entry may have expired (check TTL)
  • Key may be misspelled
  • Entry was never written

Session Not Found

  • Run sessions_list to see available sessions
  • Session may need to be started first

References

API & Reliability

Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Machine interfaces

Contract & API

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples

curl -s "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/snapshot"
curl -s "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/contract"
curl -s "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/trust"

Operational fit

Reliability & Benchmarks

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Machine Appendix

Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.

Missing · GITHUB OPENCLEW

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T04:56:36.703Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
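A client honoring the retryPolicy above could be sketched as follows. The `RetryableError` wrapper and the injectable `sleep` are illustrative conveniences; only the attempt count, backoff schedule, and retryable conditions come from the published policy.

```python
import time

BACKOFF_MS = [500, 1500, 3500]  # from the retryPolicy above
RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}

class RetryableError(Exception):
    """Carries the failure condition name so the policy can match it."""
    def __init__(self, condition: str):
        super().__init__(condition)
        self.condition = condition

def call_with_retry(fn, max_attempts: int = 3, sleep=time.sleep):
    """Run fn() under the published policy: retry only listed conditions,
    with 500/1500/3500 ms backoff between attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RetryableError as err:
            if err.condition not in RETRYABLE or attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000.0)
```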

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "derail",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "be",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:derail|supported|profile capability:be|supported|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Jovansapfioneer",
    "href": "https://github.com/jovanSAPFIONEER/Network-AI",
    "sourceUrl": "https://github.com/jovanSAPFIONEER/Network-AI",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "12 GitHub stars",
    "href": "https://github.com/jovanSAPFIONEER/Network-AI",
    "sourceUrl": "https://github.com/jovanSAPFIONEER/Network-AI",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/jovansapfioneer-network-ai/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
