Crawler Summary

agent_trace answer-first brief

Zero-config visual debugging and auto-evaluation for LLM agents. Local-first tracing with a beautiful dashboard for OpenAI, LangChain, CrewAI, and more. One import, zero config: an instant visual timeline of every LLM call, tool execution, and crash your agent makes. Capability contract not published. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 4/15/2026.

Freshness

Last checked 4/15/2026

Best For

agent_trace is best for CrewAI and multi-agent workflows where OpenClaw compatibility matters.

Not Ideal For

Workloads that require deterministic execution: contract metadata is missing or unavailable.

Evidence Sources Checked

editorial-content, GITHUB REPOS, runtime-metrics, public facts pack

Agent Dossier · GITHUB REPOS · Safety: 66/100

agent_trace

Zero-config visual debugging and auto-evaluation for LLM agents. Local-first tracing with a beautiful dashboard for OpenAI, LangChain, CrewAI, and more. One import. Zero config. Instant visual timeline of every LLM call, tool execution, and crash your agent makes.

OpenClaw (self-declared)

Public facts

5

Change events

1

Artifacts

0

Freshness

Apr 15, 2026

Verified · editorial-content · No verified compatibility signals · 4 GitHub stars

Capability contract not published. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 4/15/2026.

4 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Cursed Me

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 4/15/2026.

Setup snapshot

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Cursed Me

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Apr 15, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

4 GitHub stars

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB REPOS

Extracted files

0

Examples

6

Snippets

0

Languages

python

Executable Examples

python

import agenttrace.auto  # ← That's it. One line.

# ... your existing agent code runs normally ...
# When it finishes, a local dashboard opens automatically at localhost:8000

python

import os
os.environ["AGENTTRACE_SESSION_ID"] = "user-123-conversation"
os.environ["AGENTTRACE_TAGS"] = "env=prod,agent=support"

bash

# Python โ€” Core (works with LangChain out of the box)
pip install agenttrace-ai

# Python โ€” With OpenAI/Groq support
pip install "agenttrace-ai[openai]"

# Python โ€” With everything (OpenAI + Auto-Judge + LangChain)
pip install "agenttrace-ai[all]"

# Node.js / TypeScript
npm install agenttrace-node

# Go
go get github.com/CURSED-ME/AgentTrace/agenttrace-go

python

import agenttrace.auto  # ← Add this one line
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response.choices[0].message.content)
# Dashboard opens automatically at http://localhost:8000 when your script finishes

python

import agenttrace.auto  # ← Same one line
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}")
])

chain = prompt | llm
result = chain.invoke({"input": "Explain quantum computing"})
# All LLM calls automatically appear in the AgentTrace dashboard

bash

npm install agenttrace-node

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB REPOS

Docs source

GITHUB REPOS

Editorial quality

ready

Zero-config visual debugging and auto-evaluation for LLM agents. Local-first tracing with a beautiful dashboard for OpenAI, LangChain, CrewAI, and more. One import. Zero config. Instant visual timeline of every LLM call, tool execution, and crash your agent makes.

Full README
<div align="center">

๐Ÿ” AgentTrace

Zero-config visual debugging and auto-evaluation for LLM agents.

License: MIT Python 3.9+ Go 1.21+ OpenTelemetry

One import. Zero config. Instant visual timeline of every LLM call, tool execution, and crash your agent makes.

</div>

The Problem

You build an AI agent. It calls an LLM, uses tools, chains prompts together. Then it hallucinates, loops infinitely, or silently drops context — and you have no idea where it went wrong.

Every other observability tool requires accounts, API keys, cloud dashboards, and framework-specific setup. You just want to see what happened.

The Solution

import agenttrace.auto  # ← That's it. One line.

# ... your existing agent code runs normally ...
# When it finishes, a local dashboard opens automatically at localhost:8000

AgentTrace intercepts every LLM call, tool execution, and unhandled crash — then serves a beautiful local timeline you can replay step-by-step.


✨ Features

🪄 True Zero-Config

Add import agenttrace.auto to the top of your script. No API keys, no accounts, no cloud. Works with OpenAI, Groq, Anthropic, Mistral, Google Gemini, LangChain, CrewAI, Vercel AI SDK, and 15+ more out of the box.

🧠 Smart Auto-Judge

AgentTrace doesn't just show you what happened — it tells you what went wrong:

| Evaluation | How It Works | Cost |
|---|---|---|
| 🔁 Loop Detection | Flags 3+ identical consecutive tool calls | Free (pure Python) |
| 💰 Cost Anomaly | Flags steps using >2x average tokens | Free (pure Python) |
| ⏱️ Latency Regression | Flags steps >3x slower than average | Free (pure Python) |
| 🔧 Tool Misuse | Detects wrong arguments or failed tool calls | LLM-powered (optional) |
| 📝 Instruction Drift | Detects when LLM ignores the system prompt | LLM-powered (optional) |

LLM-powered checks require a free Groq API key. Install with pip install "agenttrace-ai[judge]".
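The free heuristics are simple enough to sketch. Below is a minimal loop detector in the spirit of the "3+ identical consecutive tool calls" rule; the function name and data shape are illustrative, not AgentTrace's actual API:

```python
from itertools import groupby

def detect_loops(tool_calls, threshold=3):
    """Return (start_index, run_length, call) for every run of `threshold`
    or more identical consecutive tool calls."""
    flags, i = [], 0
    # groupby without a key groups consecutive equal elements.
    for call, run in groupby(tool_calls):
        length = len(list(run))
        if length >= threshold:
            flags.append((i, length, call))
        i += length
    return flags

calls = [("search", "cats"), ("search", "cats"), ("search", "cats"), ("fetch", "url")]
# detect_loops(calls) -> [(0, 3, ("search", "cats"))]
```

The cost-anomaly and latency checks follow the same pattern: compute a per-trace average, then flag steps exceeding the 2x / 3x multiplier.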

โ–ถ๏ธ Trace Replay

Press Play and watch your agent's execution animate step-by-step — like a video recording of its thought process. Drag the scrubber to jump to any moment. Flagged steps pulse red.

💥 Crash Detection

If your agent throws an unhandled exception, AgentTrace catches it and logs the full traceback as a trace step — so you never lose debugging data.
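Python's `sys.excepthook` makes this kind of capture straightforward; here is a hedged sketch of the idea, where `crash_log` stands in for whatever persistence the real tool uses:

```python
import sys
import traceback

crash_log = []  # stand-in for the trace store; a real tracer would persist this

def capture_crash(exc_type, exc, tb):
    """Record the full traceback as a trace step, then defer to the default hook."""
    crash_log.append("".join(traceback.format_exception(exc_type, exc, tb)))
    sys.__excepthook__(exc_type, exc, tb)  # preserve normal stderr output

sys.excepthook = capture_crash  # any unhandled exception now lands in crash_log
```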

🔗 Session Tracing

Group related traces into sessions for multi-turn agent workflows. Tag traces with custom key-value pairs for filtering and organization:

import os
os.environ["AGENTTRACE_SESSION_ID"] = "user-123-conversation"
os.environ["AGENTTRACE_TAGS"] = "env=prod,agent=support"
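Assuming `AGENTTRACE_TAGS` follows the comma-separated `key=value` format shown above, parsing it might look like this (a sketch, not the library's actual parser):

```python
import os

def parse_tags(raw):
    """Split a comma-separated key=value string into a dict; pairs without
    '=' are skipped and surrounding whitespace is stripped."""
    tags = {}
    for pair in raw.split(","):
        key, sep, value = pair.partition("=")
        if sep:
            tags[key.strip()] = value.strip()
    return tags

os.environ["AGENTTRACE_TAGS"] = "env=prod,agent=support"
# parse_tags(os.environ["AGENTTRACE_TAGS"]) -> {"env": "prod", "agent": "support"}
```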

🔀 Trace Comparison (Diff Mode)

Select any two traces and diff them side-by-side. AgentTrace uses an LCS-based algorithm to classify each step as added, removed, changed, or unchanged — with a metrics delta bar showing differences in tokens, latency, and step count.
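An LCS-based step classifier of this kind can be sketched in a few lines. This illustrative version labels steps as unchanged, removed, or added; pairing an adjacent removed/added into "changed" is left out for brevity:

```python
def diff_steps(a, b):
    """Classify two traces' steps as unchanged / removed / added using a
    longest-common-subsequence table. `a` and `b` are lists of hashable
    step signatures (e.g. tool name or model name)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # dp[i][j] = LCS length of a[i:], b[j:]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            dp[i][j] = dp[i + 1][j + 1] + 1 if a[i] == b[j] else max(dp[i + 1][j], dp[i][j + 1])
    ops, i, j = [], 0, 0
    while i < m and j < n:
        if a[i] == b[j]:
            ops.append(("unchanged", a[i])); i += 1; j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            ops.append(("removed", a[i])); i += 1
        else:
            ops.append(("added", b[j])); j += 1
    ops += [("removed", x) for x in a[i:]] + [("added", x) for x in b[j:]]
    return ops

# diff_steps(["llm", "tool:search", "llm"], ["llm", "tool:fetch", "llm"])
# -> [("unchanged", "llm"), ("removed", "tool:search"),
#     ("added", "tool:fetch"), ("unchanged", "llm")]
```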

📦 Evaluation Datasets

Build golden test datasets directly from your traces:

  • Save individual LLM call inputs/outputs to a dataset with one click
  • Batch import all traces from a session or tag filter
  • Export datasets as .jsonl for use in fine-tuning or CI evaluation pipelines
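The `.jsonl` export format is simply one JSON object per line; a minimal writer (field names are illustrative):

```python
import json

def export_jsonl(records, path):
    """Write one JSON object per line -- the .jsonl format consumed by
    fine-tuning and CI evaluation pipelines."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Example: a one-record golden dataset.
export_jsonl([{"input": "What is the capital of France?", "output": "Paris"}],
             "golden.jsonl")
```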

🔌 Framework Support

LLM Providers

| Provider | Status | Install |
|---|---|---|
| OpenAI | ✅ Native | pip install "agenttrace-ai[openai]" |
| Groq | ✅ Native | pip install "agenttrace-ai[openai]" |
| Anthropic (Claude) | ✅ Native | pip install "agenttrace-ai[anthropic]" |
| Mistral AI | ✅ Native | pip install "agenttrace-ai[mistral]" |
| Google Gemini | ✅ Native | pip install "agenttrace-ai[google]" |
| Cohere | ✅ Native | pip install "agenttrace-ai[cohere]" |
| AWS Bedrock | ✅ Native | pip install "agenttrace-ai[bedrock]" |
| Ollama | ✅ Native | pip install "agenttrace-ai[ollama]" |
| Replicate | ✅ Native | pip install "agenttrace-ai[all]" |
| Together AI | ✅ Native | pip install "agenttrace-ai[all]" |

Agent Frameworks

| Framework | Status | Install |
|---|---|---|
| LangChain | ✅ Adapter | None (auto-detected) |
| CrewAI | ✅ Adapter | None (auto-detected) |
| Vercel AI SDK | ✅ Experimental | npm install agenttrace-node ai |
| LlamaIndex | ✅ Native | pip install "agenttrace-ai[all]" |
| Haystack | ✅ Native | pip install "agenttrace-ai[all]" |

Vector Databases

| Database | Status | Install |
|---|---|---|
| ChromaDB | ✅ Native | pip install "agenttrace-ai[vectordb]" |
| Pinecone | ✅ Native | pip install "agenttrace-ai[vectordb]" |


🚀 Quickstart

Install

# Python โ€” Core (works with LangChain out of the box)
pip install agenttrace-ai

# Python โ€” With OpenAI/Groq support
pip install "agenttrace-ai[openai]"

# Python โ€” With everything (OpenAI + Auto-Judge + LangChain)
pip install "agenttrace-ai[all]"

# Node.js / TypeScript
npm install agenttrace-node

# Go
go get github.com/CURSED-ME/AgentTrace/agenttrace-go

Basic Usage (OpenAI / Groq)

import agenttrace.auto  # ← Add this one line
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response.choices[0].message.content)
# Dashboard opens automatically at http://localhost:8000 when your script finishes

LangChain (Zero-Config)

import agenttrace.auto  # ← Same one line
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}")
])

chain = prompt | llm
result = chain.invoke({"input": "Explain quantum computing"})
# All LLM calls automatically appear in the AgentTrace dashboard

Node.js & TypeScript SDK

AgentTrace now natively supports JavaScript/TypeScript AI agents via the OpenTelemetry standard!

1. Install the SDK:

npm install agenttrace-node

2. Initialize tracking at the top of your index file:

import { init, shutdown } from "agenttrace-node";
import { OpenAI } from "openai";

// 1. Initialize OTLP tracer
init({
  serviceName: "my-ai-agent"
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello!" }]
  });
  
  // 2. Gracefully flush traces before the Node event loop exits
  await shutdown(); 
}
main();

3. Vercel AI SDK Integration (Experimental): AgentTrace supports the Vercel AI SDK out of the box by leveraging its experimental_telemetry flag. Tool calls, streaming responses, and custom metadata are all captured automatically.

Note: Vercel's telemetry API is marked as experimental and may change between SDK versions. AgentTrace is tested against ai@6.0+.

import { init, shutdown } from "agenttrace-node";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// 1. Initialize OTLP tracer
init({ serviceName: "vercel-ai-agent" });

async function main() {
  const { text } = await generateText({
    model: openai("gpt-4o"),
    prompt: "Write a short poem about space.",
    experimental_telemetry: {
      isEnabled: true,
      functionId: "space-poet",
      metadata: { agent: "SpaceAgent" } // Appears as agent name in AgentTrace UI
    }
  });
  
  // 2. Flush traces
  await shutdown();
}
main();

Custom Tool Tracking (Python)

from agenttrace import track_tool, track_agent

@track_tool
def search_database(query: str) -> str:
    return db.search(query)

@track_agent
def my_agent(task: str) -> str:
    data = search_database(task)
    return llm.complete(f"Answer based on: {data}")
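A decorator like `@track_tool` can be approximated as follows. This is a sketch of the pattern only, not AgentTrace's internals; `trace_steps` stands in for its SQLite-backed store:

```python
import functools
import time

trace_steps = []  # stand-in for the persistent trace store

def track_tool(fn):
    """Record each call's name, duration, and outcome as a trace step."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            # Runs on both success and exception, so failed calls are traced too.
            trace_steps.append({
                "tool": fn.__name__,
                "status": status,
                "ms": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@track_tool
def add(a, b):
    return a + b

add(2, 3)  # returns 5 and appends one trace step
```

`@track_agent` would follow the same shape, grouping nested tool steps under one parent span.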

Custom Tool Tracking (Node.js)

import { trackAgent, trackTool } from "agenttrace-node";

const getWeather = trackTool("getWeather", async (location: string) => {
  return await fetchWeatherApi(location);
});

const myAgent = trackAgent("myAgent", async (query: string) => {
  const data = await getWeather("San Francisco");
  // ... call LLM with data
});

Go SDK

package main

import (
    "context"
    "log"
    "github.com/CURSED-ME/AgentTrace/agenttrace-go"
)

func main() {
    agenttrace.Init(agenttrace.WithServiceName("my-go-agent"))
    defer agenttrace.Shutdown(context.Background())

    agenttrace.TrackAgent(context.Background(), "research_agent", func(ctx context.Context) error {
        return agenttrace.TrackTool(ctx, "fetch_data", func(ctx context.Context) error {
            // your tool logic here
            return nil
        })
    })
}

For auto-instrumented OpenAI calls in Go, wrap your HTTP client with openai.RoundTripper — see examples/basic_openai.


๐Ÿ—๏ธ Architecture

Your Agent Script (Python or Node.js)
       │
       ▼
  import agenttrace.auto          // or: import { init } from "agenttrace-node"
       │                          // or: agenttrace.Init() (Go)
       │
       ├─── OpenTelemetry TracerProvider
       │         │
       │         ├── OpenAI / Groq Instrumentor
       │         ├── Anthropic / Mistral / Cohere Instrumentors
       │         ├── Google Gemini / Bedrock / Ollama Instrumentors
       │         ├── Vercel AI SDK (experimental_telemetry)
       │         ├── LangChain / CrewAI Callback Adapters
       │         └── ChromaDB / Pinecone Vector DB Instrumentors
       │         │
       │         ▼
       │    OTLP Adapter → SQLite (.agenttrace.db)
       │
       ├─── sys.excepthook → Crash capture (Python)
       │
       └─── atexit → FastAPI Server (localhost:8000)
                         │
                         ├── POST /v1/traces       (OTLP ingestion)
                         ├── GET  /api/traces
                         ├── GET  /api/trace/{id}
                         ├── GET  /api/sessions
                         ├── GET  /api/traces/compare
                         ├── GET  /api/datasets
                         ├── POST /api/datasets/{id}/batch
                         ├── GET  /api/datasets/{id}/export
                         └── React Dashboard (Vite + Tailwind)

Key Design Decisions

  • OpenTelemetry for instrumentation (industry standard, not fragile monkey-patching)
  • SQLite with WAL mode for zero-config persistence that survives crashes
  • contextvars for thread-safe multi-agent isolation
  • Pre-compiled React UI bundled inside the Python package
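The contextvars choice can be illustrated directly: each copied context keeps its own value of a `ContextVar`, so concurrent agents never clobber each other's trace state. A minimal demonstration (names are illustrative):

```python
import contextvars

# Each concurrent agent/task sees its own current trace -- no locks, no globals.
current_trace = contextvars.ContextVar("current_trace", default=None)

def run_agent(name):
    current_trace.set(name)   # visible only inside the running context
    return current_trace.get()

# Two isolated contexts, as two agents running side by side would get.
ctx_a = contextvars.copy_context()
ctx_b = contextvars.copy_context()
```

Running `run_agent` via `ctx_a.run(...)` and `ctx_b.run(...)` sets the variable independently in each context, while the outer context still sees the default.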

๐Ÿ“ Project Structure

agenttrace/                      # Python backend
├── auto.py                      # Zero-config entry point (import this)
├── exporter.py                  # OTel SpanExporter → SQLite
├── otlp_adapter.py              # OTLP span normalizer (Vercel, OpenAI, etc.)
├── judge.py                     # Smart Auto-Judge engine (5 eval types)
├── models.py                    # Pydantic data models
├── storage.py                   # SQLite with WAL mode
├── server.py                    # FastAPI dashboard server + OTLP ingestion
├── decorators.py                # @track_tool, @track_agent
├── utils.py                     # Payload truncation
├── integrations/
│   ├── langchain.py             # LangChain callback adapter
│   └── crewai.py                # CrewAI callback adapter
└── static/                      # Pre-compiled React dashboard

agenttrace-node/                 # Node.js / TypeScript SDK
├── src/index.ts                 # init(), shutdown(), trackTool(), trackAgent()
├── examples/                    # OpenAI, Vercel AI SDK examples
└── package.json

agenttrace-go/                   # Go SDK
├── agenttrace.go                # Init(), Shutdown(), TrackAgent(), TrackTool()
├── instrumentation/openai/      # http.RoundTripper auto-instrumentation
├── examples/                    # OpenAI, custom tools examples
└── go.mod

โš™๏ธ Configuration

| Environment Variable | Default | Description |
|---|---|---|
| GROQ_API_KEY | — | Required for LLM-powered judge evaluations |
| AGENTTRACE_DB_PATH | .agenttrace.db | Custom database file path |
| AGENTTRACE_FULL_PAYLOAD | 0 | Set to 1 to disable payload truncation |
| AGENTTRACE_MAX_CONTENT | 500 | Max characters before truncation |
| AGENTTRACE_SESSION_ID | — | Group traces under a session identifier |
| AGENTTRACE_TAGS | — | Comma-separated key=value pairs for trace tagging |
| AGENTTRACE_MAX_TRACES | 1000 | Maximum number of traces to retain in the database |
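Reading these settings with their documented defaults is a one-liner per variable; a sketch (the dict keys are my own names, not the library's):

```python
import os

def load_config():
    """Read the documented AGENTTRACE_* environment variables, falling
    back to their documented defaults."""
    return {
        "db_path": os.environ.get("AGENTTRACE_DB_PATH", ".agenttrace.db"),
        "full_payload": os.environ.get("AGENTTRACE_FULL_PAYLOAD", "0") == "1",
        "max_content": int(os.environ.get("AGENTTRACE_MAX_CONTENT", "500")),
        "max_traces": int(os.environ.get("AGENTTRACE_MAX_TRACES", "1000")),
        "session_id": os.environ.get("AGENTTRACE_SESSION_ID"),
        "tags": os.environ.get("AGENTTRACE_TAGS", ""),
    }
```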


๐Ÿค Contributing

We welcome contributions! Here's how to set up the dev environment:

git clone https://github.com/CURSED-ME/AgentTrace.git
cd AgentTrace
pip install -e ".[all]"

# Frontend development
cd ui
npm install
npm run dev    # Dev server with hot reload
npm run build  # Compile to agenttrace/static/

See .env.example for required environment variables.


📄 License

MIT License — see LICENSE for details.


<div align="center">

Built with ❤️ for the agent builder community.

If AgentTrace helped you debug an agent, give us a ⭐ on GitHub!

</div>

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB REPOS

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLAW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_REPOS",
      "generatedAt": "2026-04-17T00:52:38.145Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
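The retry policy above is meant to be applied by the caller; here is a hedged client-side sketch using the documented backoff schedule and retryable conditions (`Retryable` and `call_with_retry` are illustrative names, not part of any published SDK):

```python
import time

BACKOFF_MS = [500, 1500, 3500]                        # from the invocation guide
RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}

class Retryable(Exception):
    """A failure carrying one of the guide's retryable condition codes."""
    def __init__(self, condition):
        super().__init__(condition)
        self.condition = condition

def call_with_retry(fn, max_attempts=3, sleep=time.sleep):
    """Run `fn`, retrying retryable failures with the fixed backoff schedule.

    Non-retryable conditions, and the final failed attempt, re-raise.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Retryable as err:
            if err.condition not in RETRYABLE or attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000)

# A flaky callable that succeeds on its third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise Retryable("HTTP_429")
    return "ok"
```

Passing `sleep=lambda s: None` makes the schedule testable without real delays.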

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLAW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "crewai",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "multi-agent",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLAW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Cursed Me",
    "href": "https://github.com/CURSED-ME/agent_trace",
    "sourceUrl": "https://github.com/CURSED-ME/agent_trace",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:34.090Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:34.090Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "4 GitHub stars",
    "href": "https://github.com/CURSED-ME/agent_trace",
    "sourceUrl": "https://github.com/CURSED-ME/agent_trace",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:34.090Z",
    "isPublic": true
  },
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub ยท GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
