Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
CrewAI + Agent Output Guard: validated task handoffs with zero LLM cost. Schema validation, hallucination detection, self-healing crews with retry. **Problem:** CrewAI crews pass task outputs between agents automatically. When Agent A produces bad data — wrong format, hallucinated numbers, stale information — Agent B builds on it without question. By the time you notice, the whole crew's output is wrong. **Solution:** Add validation between crew task handoffs using Agent Output Guard. Zero LLM cost — pure computation that catches problems before they cascade through your crew. Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
crewai-output-guard-example is best for CrewAI multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Workloads that require deterministic execution guarantees, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial content, GitHub repos, runtime metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Agenson Tools
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Setup snapshot
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
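The egress trace suggested above can be sketched with a small stdlib-only audit shim. This is a minimal sketch, not part of the package: it wraps `socket.socket.connect` so every outbound destination is logged before the connection proceeds, letting you inspect where the agent actually talks during a sandboxed mock run.

```python
import socket

# Keep a reference to the real connect so the shim can delegate to it.
_real_connect = socket.socket.connect
egress_log = []  # every (host, port) the process attempts to reach

def _audited_connect(self, address):
    # Record the destination before the connection is attempted.
    egress_log.append(address)
    return _real_connect(self, address)

socket.socket.connect = _audited_connect

# Run the agent against a mock request payload here, then inspect
# egress_log: any destination that is not an expected API endpoint
# is a red flag before granting access to real customer data.
```

This only sees plain sockets opened in-process; subprocesses or raw syscall traffic would need an OS-level tracer instead.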
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Agenson Tools
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
5
Snippets
0
Languages
python
bash
pip install crewai crewai-tools
npm install -g @agenson-horrowitz/agent-output-guard-mcp
export OPENAI_API_KEY=your-key
python examples/01_validated_crew.py
python examples/02_task_output_guard.py
python examples/03_crew_with_retry.py
text
Researcher Agent → "Revenue is $50B" (hallucinated)
↓ (no validation)
Analyst Agent → builds financial model on $50B
↓ (no validation)
Writer Agent → publishes report with wrong numbers
↓
You → discover the error after it's been shared
text
Researcher Agent → "Revenue is $50B"
↓
🛡️ Guard → flags uncertainty markers, checks schema
↓ (validated or rejected)
Analyst Agent → works with verified data only
↓
🛡️ Guard → validates model consistency
↓
Writer Agent → publishes reliable report
python
from crewai import Task
from guard_client import AgentOutputGuard

guard = AgentOutputGuard()

def validate_research(output):
    """Callback that runs after the research task completes."""
    result = guard.verify_json_schema(
        data=output.raw,
        schema=RESEARCH_SCHEMA,
        source_agent="researcher",
    )
    if not result.get("valid"):
        raise ValueError(f"Research output invalid: {result['errors']}")
    hallucination = guard.detect_hallucination_markers(
        text=output.raw,
        content_type="analysis",
        sensitivity="high",
    )
    if hallucination.get("confidence_score", 1.0) < 0.6:
        raise ValueError("Research output has high hallucination risk")

research_task = Task(
    description="Research competitor pricing...",
    agent=researcher,
    callback=validate_research,  # ← validation happens here
    expected_output="JSON with competitor data",
)
text
┌─────────────────────────────────────────────────┐
│ CrewAI Crew │
│ │
│ ┌──────────┐ callback ┌──────────┐ callback │
│ │ Task A │───────────▶│ Task B │──────────▶│
│ │Researcher│ 🛡️ Guard │ Analyst │ 🛡️ Guard │
│ └──────────┘ └──────────┘ │
│ │
│ Validation checks (zero LLM cost): │
│ • JSON schema compliance │
│ • Hallucination marker detection │
│ • Data freshness verification │
│ • Cross-reference between task outputs │
└─────────────────────────────────────────────────┘
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB REPOS
Editorial quality
ready
CrewAI + Agent Output Guard: validated task handoffs with zero LLM cost. Schema validation, hallucination detection, self-healing crews with retry.
Problem: CrewAI crews pass task outputs between agents automatically. When Agent A produces bad data — wrong format, hallucinated numbers, stale information — Agent B builds on it without question. By the time you notice, the whole crew's output is wrong.
Solution: Add validation between crew task handoffs using Agent Output Guard. Zero LLM cost — pure computation that catches problems before they cascade through your crew.
pip install crewai crewai-tools
npm install -g @agenson-horrowitz/agent-output-guard-mcp
export OPENAI_API_KEY=your-key
python examples/01_validated_crew.py
python examples/02_task_output_guard.py
python examples/03_crew_with_retry.py
Researcher Agent → "Revenue is $50B" (hallucinated)
↓ (no validation)
Analyst Agent → builds financial model on $50B
↓ (no validation)
Writer Agent → publishes report with wrong numbers
↓
You → discover the error after it's been shared
Researcher Agent → "Revenue is $50B"
↓
🛡️ Guard → flags uncertainty markers, checks schema
↓ (validated or rejected)
Analyst Agent → works with verified data only
↓
🛡️ Guard → validates model consistency
↓
Writer Agent → publishes reliable report
examples/01_validated_crew.py — A three-agent crew (Researcher → Analyst → Writer) with validation callbacks between each task handoff. The simplest way to add guards to an existing crew.
examples/02_task_output_guard.py — A reusable TaskOutputGuard class that wraps CrewAI's task output handling. Define schemas per task and the guard validates automatically. Drop-in addition to any existing crew.
examples/03_crew_with_retry.py — When validation fails, the crew retries the failed task with feedback about what went wrong. Shows how to build self-healing crews that fix their own output quality issues.
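The retry-with-feedback loop described for the third example can be sketched in a few lines. The names `run_task` and `validate` below are placeholders for illustration, not the package's actual API: the idea is simply that a validation failure becomes feedback for the next attempt.

```python
def run_with_retry(run_task, validate, max_attempts=3):
    """Re-run a task until its output passes validation.

    run_task(feedback) -> output; validate(output) raises ValueError
    with a reason when a check fails. That reason is fed back into
    the next attempt so the agent can correct its own output.
    """
    feedback = None
    for attempt in range(1, max_attempts + 1):
        output = run_task(feedback)
        try:
            validate(output)
            return output  # validated handoff
        except ValueError as err:
            feedback = f"Previous attempt failed validation: {err}"
    raise RuntimeError(f"Output still invalid after {max_attempts} attempts")
```

In a CrewAI setting, `run_task` would kick off the task and `validate` would be the guard callback; the sketch just isolates the control flow.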
CrewAI's task.callback is the natural integration point:
from crewai import Task
from guard_client import AgentOutputGuard

guard = AgentOutputGuard()

def validate_research(output):
    """Callback that runs after the research task completes."""
    result = guard.verify_json_schema(
        data=output.raw,
        schema=RESEARCH_SCHEMA,
        source_agent="researcher",
    )
    if not result.get("valid"):
        raise ValueError(f"Research output invalid: {result['errors']}")
    hallucination = guard.detect_hallucination_markers(
        text=output.raw,
        content_type="analysis",
        sensitivity="high",
    )
    if hallucination.get("confidence_score", 1.0) < 0.6:
        raise ValueError("Research output has high hallucination risk")

research_task = Task(
    description="Research competitor pricing...",
    agent=researcher,
    callback=validate_research,  # ← validation happens here
    expected_output="JSON with competitor data",
)
┌─────────────────────────────────────────────────┐
│ CrewAI Crew │
│ │
│ ┌──────────┐ callback ┌──────────┐ callback │
│ │ Task A │───────────▶│ Task B │──────────▶│
│ │Researcher│ 🛡️ Guard │ Analyst │ 🛡️ Guard │
│ └──────────┘ └──────────┘ │
│ │
│ Validation checks (zero LLM cost): │
│ • JSON schema compliance │
│ • Hallucination marker detection │
│ • Data freshness verification │
│ • Cross-reference between task outputs │
└─────────────────────────────────────────────────┘
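The "zero LLM cost" claim in the diagram rests on checks being pure computation. As an illustration of one such check, here is a crude uncertainty-marker scorer; the marker list and the 0.2-per-hit scoring are invented for this sketch and are not the package's actual detector.

```python
import re

# Phrases that commonly signal model uncertainty or fabrication.
# Illustrative only — not the package's real marker set.
UNCERTAINTY_MARKERS = [
    r"\bas an ai\b",
    r"\bi (?:don't|do not) have access\b",
    r"\b(?:approximately|roughly|around)\b",
    r"\bmight be\b",
    r"\bi think\b",
]

def marker_confidence(text):
    """Return a crude 0..1 confidence score; 1.0 means no markers found.

    Pure regex matching, so the check costs no LLM tokens.
    """
    hits = sum(bool(re.search(p, text, re.IGNORECASE))
               for p in UNCERTAINTY_MARKERS)
    return max(0.0, 1.0 - 0.2 * hits)
```

A callback could reject any output scoring below a threshold such as 0.6, mirroring the confidence check in the README example above.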
MIT
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/trust"
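The invocation guide below publishes a retry policy (3 attempts, 500/1500/3500 ms backoff, retrying on HTTP 429, HTTP 503, and network timeouts). A stdlib-only sketch of a client honoring that policy, assuming plain GET semantics for these endpoints:

```python
import time
import urllib.request
import urllib.error

BACKOFF_MS = [500, 1500, 3500]  # from the published retryPolicy

def fetch_with_retry(url, max_attempts=3, timeout=5):
    """GET url, retrying only on HTTP 429/503 and network timeouts."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            # Re-raise immediately on non-retryable codes or last attempt.
            if err.code not in (429, 503) or attempt == max_attempts - 1:
                raise
        except (urllib.error.URLError, TimeoutError):
            if attempt == max_attempts - 1:
                raise
        time.sleep(BACKOFF_MS[attempt] / 1000)
```

Usage would be, for example, `fetch_with_retry("https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/snapshot")`.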
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_REPOS",
"generatedAt": "2026-04-16T23:46:08.290Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Agenson Tools",
"href": "https://github.com/agenson-tools/crewai-output-guard-example",
"sourceUrl": "https://github.com/agenson-tools/crewai-output-guard-example",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:17.736Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:17.736Z",
"isPublic": true
},
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-agenson-tools-crewai-output-guard-example/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]
Sponsored
Ads related to crewai-output-guard-example and adjacent AI workflows.