Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Zero-config visual debugging and auto-evaluation for LLM agents. Local-first tracing with a beautiful dashboard for OpenAI, LangChain, CrewAI, and more. AgentTrace: one import, zero config, an instant visual timeline of every LLM call, tool execution, and crash your agent makes. The problem it targets: you build an AI agent that calls an LLM, uses tools, and chains prompts together; then it hallucinates, loops infinitely, or silently drops context, and you have no idea where it went wrong. Capability contract not published. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
agent_trace is best for CrewAI and multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Deterministic execution workflows, since contract metadata is missing or unavailable.
Evidence Sources Checked
Editorial content, GitHub repos, runtime metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Cursed Me
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated 4/15/2026.
Setup snapshot
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Cursed Me
Protocol compatibility
OpenClaw
Adoption signal
4 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
python
python
import agenttrace.auto  # ← That's it. One line.
# ... your existing agent code runs normally ...
# When it finishes, a local dashboard opens automatically at localhost:8000
python
import os
os.environ["AGENTTRACE_SESSION_ID"] = "user-123-conversation"
os.environ["AGENTTRACE_TAGS"] = "env=prod,agent=support"
bash
# Python — Core (works with LangChain out of the box)
pip install agenttrace-ai

# Python — With OpenAI/Groq support
pip install "agenttrace-ai[openai]"

# Python — With everything (OpenAI + Auto-Judge + LangChain)
pip install "agenttrace-ai[all]"

# Node.js / TypeScript
npm install agenttrace-node

# Go
go get github.com/CURSED-ME/AgentTrace/agenttrace-go
python
import agenttrace.auto  # ← Add this one line
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response.choices[0].message.content)
# Dashboard opens automatically at http://localhost:8000 when your script finishes
python
import agenttrace.auto  # ← Same one line
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant."),
("human", "{input}")
])
chain = prompt | llm
result = chain.invoke({"input": "Explain quantum computing"})
# All LLM calls automatically appear in the AgentTrace dashboard
bash
npm install agenttrace-node
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB REPOS
Editorial quality
ready
Zero-config visual debugging and auto-evaluation for LLM agents.
One import. Zero config. Instant visual timeline of every LLM call, tool execution, and crash your agent makes.
You build an AI agent. It calls an LLM, uses tools, chains prompts together. Then it hallucinates, loops infinitely, or silently drops context — and you have no idea where it went wrong.
Every other observability tool requires accounts, API keys, cloud dashboards, and framework-specific setup. You just want to see what happened.
import agenttrace.auto  # ← That's it. One line.
# ... your existing agent code runs normally ...
# When it finishes, a local dashboard opens automatically at localhost:8000
AgentTrace intercepts every LLM call, tool execution, and unhandled crash — then serves a beautiful local timeline you can replay step-by-step.
Add import agenttrace.auto to the top of your script. No API keys, no accounts, no cloud. Works with OpenAI, Groq, Anthropic, Mistral, Google Gemini, LangChain, CrewAI, Vercel AI SDK, and 15+ more out of the box.
AgentTrace doesn't just show you what happened — it tells you what went wrong:
| Evaluation | How It Works | Cost |
|---|---|---|
| Loop Detection | Flags 3+ identical consecutive tool calls | Free (pure Python) |
| Cost Anomaly | Flags steps using >2x average tokens | Free (pure Python) |
| Latency Regression | Flags steps >3x slower than average | Free (pure Python) |
| Tool Misuse | Detects wrong arguments or failed tool calls | LLM-powered (optional) |
| Instruction Drift | Detects when LLM ignores the system prompt | LLM-powered (optional) |
LLM-powered checks require a free Groq API key. Install with `pip install "agenttrace-ai[judge]"`.
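For intuition, the two simplest pure-Python heuristics from the table can be sketched in a few lines. This is an illustrative reimplementation of the stated rules, not the shipped judge code:

```python
def detect_loops(steps, threshold=3):
    """Flag a tool call once it repeats `threshold` times in a row."""
    flagged, run = [], 1
    for i in range(1, len(steps)):
        run = run + 1 if steps[i] == steps[i - 1] else 1
        if run == threshold:
            flagged.append(steps[i])
    return flagged

def detect_cost_anomalies(token_counts, factor=2.0):
    """Flag indices of steps using more than `factor` x the average tokens."""
    if not token_counts:
        return []
    avg = sum(token_counts) / len(token_counts)
    return [i for i, t in enumerate(token_counts) if t > factor * avg]

calls = [("search", "q1"), ("search", "q1"), ("search", "q1"), ("answer", "")]
print(detect_loops(calls))                          # → [('search', 'q1')]
print(detect_cost_anomalies([100, 120, 90, 600]))   # → [3]
```

Both checks run over a finished trace, which is why they cost nothing at agent runtime.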
Press Play and watch your agent's execution animate step-by-step — like a video recording of its thought process. Drag the scrubber to jump to any moment. Flagged steps pulse red.
If your agent throws an unhandled exception, AgentTrace catches it and logs the full traceback as a trace step — so you never lose debugging data.
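The standard mechanism for this in Python is `sys.excepthook`. A minimal sketch of the general pattern (not AgentTrace's exact implementation), using a plain list as a stand-in trace store:

```python
import sys
import traceback

captured = []  # stand-in for the trace store

_previous_hook = sys.excepthook

def tracing_excepthook(exc_type, exc_value, exc_tb):
    # Record the full formatted traceback, then defer to the previous hook
    # so default stderr reporting still happens.
    captured.append("".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
    _previous_hook(exc_type, exc_value, exc_tb)

sys.excepthook = tracing_excepthook

# A real crash would trigger the hook automatically; here we invoke it
# directly to show what gets stored.
try:
    raise RuntimeError("agent crashed mid-run")
except RuntimeError:
    exc_info = sys.exc_info()
tracing_excepthook(*exc_info)

print(len(captured))  # → 1
```

Chaining to the previous hook is the important detail: it keeps the crash visible on stderr while the trace copy is persisted.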
Group related traces into sessions for multi-turn agent workflows. Tag traces with custom key-value pairs for filtering and organization:
import os
os.environ["AGENTTRACE_SESSION_ID"] = "user-123-conversation"
os.environ["AGENTTRACE_TAGS"] = "env=prod,agent=support"
Select any two traces and diff them side-by-side. AgentTrace uses an LCS-based algorithm to classify each step as added, removed, changed, or unchanged — with a metrics delta bar showing differences in tokens, latency, and step count.
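Python's standard library already ships an LCS-style matcher, so the classification step can be sketched with `difflib.SequenceMatcher`. This is an illustrative approximation of the described behavior, not the dashboard's actual diff code:

```python
from difflib import SequenceMatcher

def diff_steps(trace_a, trace_b):
    """Classify each step as added / removed / changed / unchanged."""
    ops = []
    matcher = SequenceMatcher(a=trace_a, b=trace_b, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops += [("unchanged", s) for s in trace_a[i1:i2]]
        elif tag == "replace":
            ops += [("changed", s) for s in trace_b[j1:j2]]
        elif tag == "delete":
            ops += [("removed", s) for s in trace_a[i1:i2]]
        elif tag == "insert":
            ops += [("added", s) for s in trace_b[j1:j2]]
    return ops

a = ["llm:plan", "tool:search", "llm:answer"]
b = ["llm:plan", "tool:search", "tool:fetch", "llm:answer"]
print(diff_steps(a, b))
# → [('unchanged', 'llm:plan'), ('unchanged', 'tool:search'),
#    ('added', 'tool:fetch'), ('unchanged', 'llm:answer')]
```

The metrics delta bar is then just a per-column subtraction over the two traces' aggregate token, latency, and step counts.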
Build golden test datasets directly from your traces, exported as `.jsonl` for use in fine-tuning or CI evaluation pipelines.

| Provider | Status | Install |
|---|---|---|
| OpenAI | ✅ Native | `pip install "agenttrace-ai[openai]"` |
| Groq | ✅ Native | `pip install "agenttrace-ai[openai]"` |
| Anthropic (Claude) | ✅ Native | `pip install "agenttrace-ai[anthropic]"` |
| Mistral AI | ✅ Native | `pip install "agenttrace-ai[mistral]"` |
| Google Gemini | ✅ Native | `pip install "agenttrace-ai[google]"` |
| Cohere | ✅ Native | `pip install "agenttrace-ai[cohere]"` |
| AWS Bedrock | ✅ Native | `pip install "agenttrace-ai[bedrock]"` |
| Ollama | ✅ Native | `pip install "agenttrace-ai[ollama]"` |
| Replicate | ✅ Native | `pip install "agenttrace-ai[all]"` |
| Together AI | ✅ Native | `pip install "agenttrace-ai[all]"` |

| Framework | Status | Install |
|---|---|---|
| LangChain | ✅ Adapter | None (auto-detected) |
| CrewAI | ✅ Adapter | None (auto-detected) |
| Vercel AI SDK | ✅ Experimental | `npm install agenttrace-node ai` |
| LlamaIndex | ✅ Native | `pip install "agenttrace-ai[all]"` |
| Haystack | ✅ Native | `pip install "agenttrace-ai[all]"` |

| Database | Status | Install |
|---|---|---|
| ChromaDB | ✅ Native | `pip install "agenttrace-ai[vectordb]"` |
| Pinecone | ✅ Native | `pip install "agenttrace-ai[vectordb]"` |
# Python — Core (works with LangChain out of the box)
pip install agenttrace-ai
# Python — With OpenAI/Groq support
pip install "agenttrace-ai[openai]"
# Python — With everything (OpenAI + Auto-Judge + LangChain)
pip install "agenttrace-ai[all]"
# Node.js / TypeScript
npm install agenttrace-node
# Go
go get github.com/CURSED-ME/AgentTrace/agenttrace-go
import agenttrace.auto  # ← Add this one line
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response.choices[0].message.content)
# Dashboard opens automatically at http://localhost:8000 when your script finishes
import agenttrace.auto  # ← Same one line
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant."),
("human", "{input}")
])
chain = prompt | llm
result = chain.invoke({"input": "Explain quantum computing"})
# All LLM calls automatically appear in the AgentTrace dashboard
AgentTrace now natively supports JavaScript/TypeScript AI agents via the @opentelemetry standard!
1. Install the SDK:
npm install agenttrace-node
2. Initialize tracking at the top of your index file:
import { init, shutdown } from "agenttrace-node";
import { OpenAI } from "openai";
// 1. Initialize OTLP tracer
init({
serviceName: "my-ai-agent"
});
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
async function main() {
const response = await client.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Hello!" }]
});
// 2. Gracefully flush traces before the Node event loop exits
await shutdown();
}
main();
3. Vercel AI SDK Integration (Experimental):
AgentTrace supports the Vercel AI SDK out of the box by leveraging its experimental_telemetry flag. Tool calls, streaming responses, and custom metadata are all captured automatically.
Note: Vercel's telemetry API is marked as experimental and may change between SDK versions. AgentTrace is tested against `ai@6.0+`.
import { init, shutdown } from "agenttrace-node";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
// 1. Initialize OTLP tracer
init({ serviceName: "vercel-ai-agent" });
async function main() {
const { text } = await generateText({
model: openai("gpt-4o"),
prompt: "Write a short poem about space.",
experimental_telemetry: {
isEnabled: true,
functionId: "space-poet",
metadata: { agent: "SpaceAgent" } // Appears as agent name in AgentTrace UI
}
});
// 2. Flush traces
await shutdown();
}
main();
from agenttrace import track_tool, track_agent
@track_tool
def search_database(query: str) -> str:
return db.search(query)
@track_agent
def my_agent(task: str) -> str:
data = search_database(task)
return llm.complete(f"Answer based on: {data}")
import { trackAgent, trackTool } from "agenttrace-node";
const getWeather = trackTool("getWeather", async (location: string) => {
return await fetchWeatherApi(location);
});
const myAgent = trackAgent("myAgent", async (query: string) => {
const data = await getWeather("San Francisco");
// ... call LLM with data
});
package main
import (
"context"
"log"
"github.com/CURSED-ME/AgentTrace/agenttrace-go"
)
func main() {
agenttrace.Init(agenttrace.WithServiceName("my-go-agent"))
defer agenttrace.Shutdown(context.Background())
agenttrace.TrackAgent(context.Background(), "research_agent", func(ctx context.Context) error {
return agenttrace.TrackTool(ctx, "fetch_data", func(ctx context.Context) error {
// your tool logic here
return nil
})
})
}
For auto-instrumented OpenAI calls in Go, wrap your HTTP client with openai.RoundTripper — see examples/basic_openai.
Your Agent Script (Python or Node.js)
        │
        ▼
import agenttrace.auto   // or: import { init } from "agenttrace-node"
        │                // or: agenttrace.Init() (Go)
        │
        ├──── OpenTelemetry TracerProvider
        │        │
        │        ├── OpenAI / Groq Instrumentor
        │        ├── Anthropic / Mistral / Cohere Instrumentors
        │        ├── Google Gemini / Bedrock / Ollama Instrumentors
        │        ├── Vercel AI SDK (experimental_telemetry)
        │        ├── LangChain / CrewAI Callback Adapters
        │        └── ChromaDB / Pinecone Vector DB Instrumentors
        │                │
        │                ▼
        │        OTLP Adapter → SQLite (.agenttrace.db)
        │
        ├──── sys.excepthook → Crash capture (Python)
        │
        └──── atexit → FastAPI Server (localhost:8000)
                 │
                 ├── POST /v1/traces (OTLP ingestion)
                 ├── GET /api/traces
                 ├── GET /api/trace/{id}
                 ├── GET /api/sessions
                 ├── GET /api/traces/compare
                 ├── GET /api/datasets
                 ├── POST /api/datasets/{id}/batch
                 ├── GET /api/datasets/{id}/export
                 └── React Dashboard (Vite + Tailwind)
Uses contextvars for thread-safe multi-agent isolation.

agenttrace/                      # Python backend
├── auto.py                      # Zero-config entry point (import this)
├── exporter.py                  # OTel SpanExporter → SQLite
├── otlp_adapter.py              # OTLP span normalizer (Vercel, OpenAI, etc.)
├── judge.py                     # Smart Auto-Judge engine (5 eval types)
├── models.py                    # Pydantic data models
├── storage.py                   # SQLite with WAL mode
├── server.py                    # FastAPI dashboard server + OTLP ingestion
├── decorators.py                # @track_tool, @track_agent
├── utils.py                     # Payload truncation
├── integrations/
│   ├── langchain.py             # LangChain callback adapter
│   └── crewai.py                # CrewAI callback adapter
└── static/                      # Pre-compiled React dashboard

agenttrace-node/                 # Node.js / TypeScript SDK
├── src/index.ts                 # init(), shutdown(), trackTool(), trackAgent()
├── examples/                    # OpenAI, Vercel AI SDK examples
└── package.json

agenttrace-go/                   # Go SDK
├── agenttrace.go                # Init(), Shutdown(), TrackAgent(), TrackTool()
├── instrumentation/openai/      # http.RoundTripper auto-instrumentation
├── examples/                    # OpenAI, custom tools examples
└── go.mod
| Environment Variable | Default | Description |
|---|---|---|
| GROQ_API_KEY | — | Required for LLM-powered judge evaluations |
| AGENTTRACE_DB_PATH | .agenttrace.db | Custom database file path |
| AGENTTRACE_FULL_PAYLOAD | 0 | Set to 1 to disable payload truncation |
| AGENTTRACE_MAX_CONTENT | 500 | Max characters before truncation |
| AGENTTRACE_SESSION_ID | — | Group traces under a session identifier |
| AGENTTRACE_TAGS | — | Comma-separated key=value pairs for trace tagging |
| AGENTTRACE_MAX_TRACES | 1000 | Maximum number of traces to retain in the database |
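These variables are read when AgentTrace initializes, so they must be set before the `agenttrace.auto` import runs. A minimal sketch of doing that in code (the values shown are illustrative):

```python
import os

# Configure before the tracer initializes (illustrative values).
config = {
    "AGENTTRACE_DB_PATH": "/tmp/demo.agenttrace.db",
    "AGENTTRACE_FULL_PAYLOAD": "1",   # "1" disables payload truncation
    "AGENTTRACE_MAX_TRACES": "200",   # retain at most 200 traces
}
os.environ.update(config)

# import agenttrace.auto   # must come AFTER the variables are set

print(os.environ["AGENTTRACE_MAX_TRACES"])  # → 200
```

Setting the same variables in your shell or process manager works equally well; the only requirement is that they exist before import time.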
We welcome contributions! Here's how to set up the dev environment:
git clone https://github.com/CURSED-ME/AgentTrace.git
cd AgentTrace
pip install -e ".[all]"
# Frontend development
cd ui
npm install
npm run dev # Dev server with hot reload
npm run build # Compile to agenttrace/static/
See .env.example for required environment variables.
MIT License โ see LICENSE for details.
Built with ❤️ for the agent builder community.
If AgentTrace helped you debug an agent, give us a ⭐ on GitHub!
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust"
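The same endpoints can be queried from Python with a retry schedule matching the policy published in the invocation guide (backoffMs of 500/1500/3500, retrying on HTTP 429, HTTP 503, and network timeouts). The helper below is an illustrative sketch, not an official client:

```python
import time
import urllib.error
import urllib.request

BASE = "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace"

def fetch(path, backoffs=(0.5, 1.5, 3.5)):
    """GET an agent endpoint, retrying on 429/503 and timeouts."""
    url = f"{BASE}/{path}"
    for delay in (*backoffs, None):  # None marks the final attempt
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as err:
            if err.code not in (429, 503) or delay is None:
                raise  # non-retryable status, or retries exhausted
        except (urllib.error.URLError, TimeoutError):
            if delay is None:
                raise
        time.sleep(delay)

# snapshot = fetch("snapshot")  # live network call; uncomment to run
```

Non-retryable statuses (e.g. 404) are raised immediately, mirroring the policy's retryableConditions list.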
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_REPOS",
"generatedAt": "2026-04-17T00:52:38.145Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Cursed Me",
"href": "https://github.com/CURSED-ME/agent_trace",
"sourceUrl": "https://github.com/CURSED-ME/agent_trace",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:34.090Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:34.090Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "4 GitHub stars",
"href": "https://github.com/CURSED-ME/agent_trace",
"sourceUrl": "https://github.com/CURSED-ME/agent_trace",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:34.090Z",
"isPublic": true
},
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-cursed-me-agent-trace/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub ยท GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]