Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Open-source security firewall for AI agents — validates tool calls, strips ghost arguments, enforces type safety, PII masking, RBAC, cost tracking & sandbox isolation. Works with LangChain, OpenAI Agents SDK, PydanticAI & CrewAI. Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
agent-airlock is best for CrewAI and multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Workloads that require deterministic execution, since the capability contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Open-source security firewall for AI agents — validates tool calls, strips ghost arguments, enforces type safety, PII masking, RBAC, cost tracking & sandbox isolation. Works with LangChain, OpenAI Agents SDK, PydanticAI & CrewAI.
Public facts
5
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Sattyamjjain
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/15/2026.
Setup snapshot
git clone https://github.com/sattyamjjain/agent-airlock.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
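The final-validation step above can be sketched with the standard library alone: run the handler on a mock payload while every outbound socket connection is recorded and refused. `EgressTrace` and `handle` are hypothetical names for this sketch, not part of agent-airlock.

```python
import socket

class EgressTrace:
    """Context manager that blocks and records outbound socket connections."""
    def __init__(self):
        self.attempts = []

    def __enter__(self):
        self._orig_connect = socket.socket.connect
        trace = self

        def traced_connect(sock, address):
            # Record the destination, then refuse the connection.
            trace.attempts.append(address)
            raise ConnectionRefusedError(f"egress blocked: {address}")

        socket.socket.connect = traced_connect
        return self

    def __exit__(self, *exc):
        socket.socket.connect = self._orig_connect
        return False

def handle(payload: dict) -> dict:
    # A well-behaved tool: pure transformation, no network calls.
    return {"echo": payload["query"].upper()}

with EgressTrace() as trace:
    result = handle({"query": "ping"})

print(result, trace.attempts)  # an empty attempts list means no egress observed
```

A tool that opens a socket inside the trace would instead raise `ConnectionRefusedError` and leave the destination in `trace.attempts` for review.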
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Sattyamjjain
Protocol compatibility
OpenClaw
Adoption signal
5 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
python
text
┌────────────────────────────────────────────────────────────────┐
│ 🤖 AI Agent: "Let me help clean up disk space..."              │
│        ↓                                                       │
│ rm -rf / --no-preserve-root                                    │
│        ↓                                                       │
│ ┌──────────────────────────────────────────────────────────┐   │
│ │ 🛡️ AIRLOCK: BLOCKED                                      │   │
│ │                                                          │   │
│ │ Reason: Matches denied pattern 'rm_*'                    │   │
│ │ Policy: STRICT_POLICY                                    │   │
│ │ Fix: Use approved cleanup tools only                     │   │
│ └──────────────────────────────────────────────────────────┘   │
└────────────────────────────────────────────────────────────────┘
bash
pip install agent-airlock
python
from agent_airlock import Airlock
@Airlock()
def transfer_funds(account: str, amount: int) -> dict:
return {"status": "transferred", "amount": amount}
# LLM sends amount="500" (string) → BLOCKED with fix_hint
# LLM sends force=True (invented arg) → STRIPPED silently
# LLM sends amount=500 (correct) → EXECUTED safely
python
from agent_airlock import Airlock, STRICT_POLICY
@Airlock(sandbox=True, sandbox_required=True, policy=STRICT_POLICY)
def execute_code(code: str) -> str:
"""Runs in an E2B Firecracker MicroVM. Not on your machine."""
exec(code)
return "executed"
python
from agent_airlock import (
PERMISSIVE_POLICY, # Dev - no restrictions
STRICT_POLICY, # Prod - rate limited, agent ID required
READ_ONLY_POLICY, # Analytics - query only
BUSINESS_HOURS_POLICY, # Dangerous ops 9-5 only
)
# Or build your own:
from agent_airlock import SecurityPolicy
MY_POLICY = SecurityPolicy(
allowed_tools=["read_*", "query_*"],
denied_tools=["delete_*", "drop_*", "rm_*"],
rate_limits={"*": "1000/hour", "write_*": "100/hour"},
time_restrictions={"deploy_*": "09:00-17:00"},
)
python
from agent_airlock import Airlock, AirlockConfig
config = AirlockConfig(
max_output_chars=5000, # Truncate before token explosion
max_output_tokens=2000, # Hard limit on response size
)
@Airlock(config=config)
def query_logs(query: str) -> str:
return massive_log_query(query)  # 10MB → 5KB
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
Open-source security firewall for AI agents — validates tool calls, strips ghost arguments, enforces type safety, PII masking, RBAC, cost tracking & sandbox isolation. Works with LangChain, OpenAI Agents SDK, PydanticAI & CrewAI.
One decorator. Zero trust. Full control.
Get Started in 30 Seconds · Why Airlock? · All Frameworks · Docs
┌────────────────────────────────────────────────────────────────┐
│ 🤖 AI Agent: "Let me help clean up disk space..." │
│ ↓ │
│ rm -rf / --no-preserve-root │
│ ↓ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ 🛡️ AIRLOCK: BLOCKED │ │
│ │ │ │
│ │ Reason: Matches denied pattern 'rm_*' │ │
│ │ Policy: STRICT_POLICY │ │
│ │ Fix: Use approved cleanup tools only │ │
│ └──────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────┘
pip install agent-airlock
from agent_airlock import Airlock
@Airlock()
def transfer_funds(account: str, amount: int) -> dict:
return {"status": "transferred", "amount": amount}
# LLM sends amount="500" (string) → BLOCKED with fix_hint
# LLM sends force=True (invented arg) → STRIPPED silently
# LLM sends amount=500 (correct) → EXECUTED safely
That's it. Your function now has ghost argument stripping, strict type validation, and self-healing errors.
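The general technique behind ghost-argument stripping and type validation can be sketched in a few lines of stdlib Python. This is purely illustrative of the pattern — it is not agent-airlock's implementation, and `airlock_like` is a hypothetical name.

```python
import inspect
from functools import wraps

def airlock_like(func):
    """Illustrative only: drop kwargs the signature doesn't declare ("ghost
    arguments") and reject values whose type doesn't match the annotation."""
    sig = inspect.signature(func)
    hints = {p.name: p.annotation for p in sig.parameters.values()}

    @wraps(func)
    def wrapper(**kwargs):
        # Ghost arguments (names not in the signature) are stripped silently.
        known = {k: v for k, v in kwargs.items() if k in hints}
        for name, value in known.items():
            want = hints[name]
            if want is not inspect.Parameter.empty and not isinstance(value, want):
                raise TypeError(
                    f"{name}: expected {want.__name__}, got {type(value).__name__}"
                )
        return func(**known)

    return wrapper

@airlock_like
def transfer_funds(account: str, amount: int) -> dict:
    return {"status": "transferred", "amount": amount}

print(transfer_funds(account="a1", amount=500, force=True))  # force=True is stripped
```

Passing `amount="500"` raises `TypeError` instead of silently coercing, which is the behavior the comments above describe.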
"MCP has 16,000+ servers on GitHub!" "OpenAI adopted it!" "Linux Foundation hosts it!"
LLMs hallucinate tool calls. Every. Single. Day.
"100" when you need 100.
Enterprise solutions exist: Prompt Security ($50K/year), Pangea (proxy your data), Cisco ("coming soon").
We built the open-source alternative. One decorator. No vendor lock-in. Your data never leaves your infrastructure.
from agent_airlock import Airlock, STRICT_POLICY
@Airlock(sandbox=True, sandbox_required=True, policy=STRICT_POLICY)
def execute_code(code: str) -> str:
"""Runs in an E2B Firecracker MicroVM. Not on your machine."""
exec(code)
return "executed"
| Feature | Value |
|---------|-------|
| Boot time | ~125ms cold, <200ms warm |
| Isolation | Firecracker MicroVM |
| Fallback | sandbox_required=True blocks local execution |
from agent_airlock import (
PERMISSIVE_POLICY, # Dev - no restrictions
STRICT_POLICY, # Prod - rate limited, agent ID required
READ_ONLY_POLICY, # Analytics - query only
BUSINESS_HOURS_POLICY, # Dangerous ops 9-5 only
)
# Or build your own:
from agent_airlock import SecurityPolicy
MY_POLICY = SecurityPolicy(
allowed_tools=["read_*", "query_*"],
denied_tools=["delete_*", "drop_*", "rm_*"],
rate_limits={"*": "1000/hour", "write_*": "100/hour"},
time_restrictions={"deploy_*": "09:00-17:00"},
)
A runaway agent can burn $500 in API costs before you notice.
from agent_airlock import Airlock, AirlockConfig
config = AirlockConfig(
max_output_chars=5000, # Truncate before token explosion
max_output_tokens=2000, # Hard limit on response size
)
@Airlock(config=config)
def query_logs(query: str) -> str:
return massive_log_query(query) # 10MB → 5KB
ROI: 10MB logs = ~2.5M tokens = $25/response. Truncated = ~1.25K tokens = $0.01. 99.96% savings.
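The ROI arithmetic can be sanity-checked under two stated assumptions (≈4 characters per token and ≈$10 per million input tokens — neither figure is from the README): it lands within rounding of the quoted numbers.

```python
# Back-of-envelope check of the truncation savings claim.
# Assumptions (not from the README): ~4 chars/token, ~$10 per million tokens.
CHARS_PER_TOKEN = 4
USD_PER_MILLION_TOKENS = 10.0

def cost_usd(chars: int) -> float:
    tokens = chars / CHARS_PER_TOKEN
    return tokens / 1_000_000 * USD_PER_MILLION_TOKENS

full = cost_usd(10_000_000)  # 10 MB of logs → ~2.5M tokens
capped = cost_usd(5_000)     # truncated at max_output_chars=5000 → ~1.25K tokens
savings = 1 - capped / full
print(full, capped, savings)
```

Under these assumptions the full response costs $25.00, the truncated one about $0.0125, a saving of roughly 99.95% — consistent with the claim above after rounding.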
config = AirlockConfig(
mask_pii=True, # SSN, credit cards, phones, emails
mask_secrets=True, # API keys, passwords, JWTs
)
@Airlock(config=config)
def get_user(user_id: str) -> dict:
return db.users.find_one({"id": user_id})
# LLM sees: {"name": "John", "ssn": "[REDACTED]", "api_key": "sk-...XXXX"}
12 PII types detected · 4 masking strategies · Zero data leakage
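A masking pass like the one shown in the comment above can be approximated with two regular expressions. This is an illustrative sketch only — not agent-airlock's actual detectors, which cover 12 PII types — mirroring the README's example output: SSNs fully redacted, API keys reduced to a prefix plus their last four characters.

```python
import re

# Illustrative patterns only: US-style SSNs and "sk-" prefixed API keys.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
KEY_RE = re.compile(r"\bsk-[A-Za-z0-9]+\b")

def mask(text: str) -> str:
    text = SSN_RE.sub("[REDACTED]", text)                     # redact entirely
    return KEY_RE.sub(lambda m: "sk-..." + m.group(0)[-4:], text)  # keep last 4

print(mask('{"ssn": "123-45-6789", "api_key": "sk-abcdef123456"}'))
```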
Block data exfiltration during tool execution:
from agent_airlock import network_airgap, NO_NETWORK_POLICY
# Block ALL network access
with network_airgap(NO_NETWORK_POLICY):
result = untrusted_tool() # Any socket call → NetworkBlockedError
# Or allow specific hosts only
from agent_airlock import NetworkPolicy
INTERNAL_ONLY = NetworkPolicy(
allow_egress=True,
allowed_hosts=["api.internal.com", "*.company.local"],
allowed_ports=[443],
)
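The allow-list check such a policy implies can be sketched with `fnmatch`-style host patterns; `egress_allowed` is a hypothetical helper, not the library's code. A destination passes only if it matches an allowed host pattern and an allowed port.

```python
from fnmatch import fnmatch

# Mirrors the INTERNAL_ONLY policy above (illustrative sketch only).
ALLOWED_HOSTS = ["api.internal.com", "*.company.local"]
ALLOWED_PORTS = [443]

def egress_allowed(host: str, port: int) -> bool:
    # Both conditions must hold: port allowed AND host matches some pattern.
    return port in ALLOWED_PORTS and any(fnmatch(host, pat) for pat in ALLOWED_HOSTS)

print(egress_allowed("db.company.local", 443))  # matches "*.company.local" on 443
print(egress_allowed("api.internal.com", 80))   # port 80 not in the allow-list
print(egress_allowed("evil.example.com", 443))  # host matches no pattern
```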
Secure existing code without changing a single line:
from agent_airlock import vaccinate, STRICT_POLICY
# Before: Your existing LangChain tools are unprotected
vaccinate("langchain", policy=STRICT_POLICY)
# After: ALL @tool decorators now include Airlock security
# No code changes required!
Supported: LangChain, OpenAI Agents SDK, PydanticAI, CrewAI
Prevent cascading failures with fault tolerance:
from agent_airlock import CircuitBreaker, AGGRESSIVE_BREAKER
breaker = CircuitBreaker("external_api", config=AGGRESSIVE_BREAKER)
@breaker
def call_external_api(query: str) -> dict:
return external_service.query(query)
# After 5 failures → circuit OPENS → fast-fails for 30s
# Then HALF_OPEN → allows 1 test request → recovers or reopens
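The OPEN → HALF_OPEN cycle described in those comments can be illustrated with a toy breaker. `SimpleBreaker` is a simplified sketch of the general pattern, not the `CircuitBreaker` shipped by agent-airlock.

```python
import time

class SimpleBreaker:
    """Toy circuit breaker: trips OPEN after max_failures consecutive
    errors, fast-fails until reset_after elapses, then allows one probe."""

    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit OPEN: fast-fail")
            self.opened_at = None  # HALF_OPEN: let one probe through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
                self.failures = 0
            raise
        self.failures = 0  # success closes the circuit
        return result
```

A real breaker reopens immediately if the half-open probe fails; this sketch keeps only the core counting and cool-down logic.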
Enterprise-grade monitoring:
from agent_airlock import configure_observability, observe
configure_observability(
service_name="my-agent",
otlp_endpoint="http://otel-collector:4317",
)
@observe(name="critical_operation")
def process_data(data: dict) -> dict:
# Automatic span creation, metrics, and audit logging
return transform(data)
The Golden Rule:
@Airlock must be closest to the function definition.
@framework_decorator # ← Framework sees secured function
@Airlock() # ← Security layer (innermost)
def my_function(): # ← Your code
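The golden rule can be demonstrated with two plain decorators: the innermost one wraps the real function first, so the outer (framework) wrapper always receives the already-guarded callable. `guard` and `framework` are hypothetical stand-ins for this sketch.

```python
from functools import wraps

calls = []

def guard(fn):
    # Stand-in for @Airlock(): runs its check right before the real function.
    @wraps(fn)
    def inner(*a, **kw):
        calls.append("guard")
        return fn(*a, **kw)
    return inner

def framework(fn):
    # Stand-in for a framework decorator such as @tool.
    @wraps(fn)
    def inner(*a, **kw):
        calls.append("framework")
        return fn(*a, **kw)
    return inner

@framework   # outer: sees the secured function
@guard       # inner: closest to the code
def my_tool():
    calls.append("tool")

my_tool()
print(calls)  # ['framework', 'guard', 'tool']
```

If the order were flipped, the framework wrapper would run inside the guard, and the guard could no longer intercept arguments before the framework sees them.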
LangChain:
from langchain_core.tools import tool
from agent_airlock import Airlock
@tool
@Airlock()
def search(query: str) -> str:
"""Search for information."""
return f"Results for: {query}"
OpenAI Agents SDK:
from agents import function_tool
from agent_airlock import Airlock
@function_tool
@Airlock()
def get_weather(city: str) -> str:
"""Get weather for a city."""
return f"Weather in {city}: 22°C"
PydanticAI:
from pydantic_ai import Agent
from agent_airlock import Airlock
@Airlock()
def get_stock(symbol: str) -> str:
return f"Stock {symbol}: $150"
agent = Agent("openai:gpt-4o", tools=[get_stock])
CrewAI:
from crewai.tools import tool
from agent_airlock import Airlock
@tool
@Airlock()
def search_docs(query: str) -> str:
"""Search internal docs."""
return f"Found 5 docs for: {query}"
More frameworks: LlamaIndex, AutoGen, smolagents, Anthropic
from llama_index.core.tools import FunctionTool
from agent_airlock import Airlock
@Airlock()
def calculate(expression: str) -> int:
return eval(expression, {"__builtins__": {}})
calc_tool = FunctionTool.from_defaults(fn=calculate)
from autogen import ConversableAgent
from agent_airlock import Airlock
@Airlock()
def analyze_data(dataset: str) -> str:
return f"Analysis of {dataset}: mean=42.5"
assistant = ConversableAgent(name="analyst", llm_config={"model": "gpt-4o"})
assistant.register_for_llm()(analyze_data)
from smolagents import tool
from agent_airlock import Airlock
@tool
@Airlock(sandbox=True)
def run_code(code: str) -> str:
"""Execute in E2B sandbox."""
exec(code)
return "Executed"
from agent_airlock import Airlock
@Airlock()
def get_weather(city: str) -> str:
return f"Weather in {city}: 22°C"
# Use in tool handler
def handle_tool_call(name, inputs):
if name == "get_weather":
return get_weather(**inputs) # Airlock validates
| Framework | Example | Key Features |
|-----------|---------|--------------|
| LangChain | langchain_integration.py | @tool, AgentExecutor |
| LangGraph | langgraph_integration.py | StateGraph, ToolNode |
| OpenAI Agents | openai_agents_sdk_integration.py | Handoffs, manager pattern |
| PydanticAI | pydanticai_integration.py | Dependencies, structured output |
| LlamaIndex | llamaindex_integration.py | ReActAgent |
| CrewAI | crewai_integration.py | Crews, roles |
| AutoGen | autogen_integration.py | ConversableAgent |
| smolagents | smolagents_integration.py | CodeAgent, E2B |
| Anthropic | anthropic_integration.py | Direct API |
from fastmcp import FastMCP
from agent_airlock.mcp import secure_tool, STRICT_POLICY
mcp = FastMCP("production-server")
@secure_tool(mcp, policy=STRICT_POLICY)
def delete_user(user_id: str) -> dict:
"""One decorator: MCP registration + Airlock protection."""
return db.users.delete(user_id)
| | Prompt Security | Pangea | Agent-Airlock |
|---|:---:|:---:|:---:|
| Pricing | $50K+/year | Enterprise | Free forever |
| Integration | Proxy gateway | Proxy gateway | One decorator |
| Self-Healing | ❌ | ❌ | ✅ |
| E2B Sandboxing | ❌ | ❌ | ✅ Native |
| Your Data | Their servers | Their servers | Never leaves you |
| Source Code | Closed | Closed | MIT Licensed |
We're not anti-enterprise. We're anti-gatekeeping. Security for AI agents shouldn't require a procurement process.
# Core (validation + policies + sanitization)
pip install agent-airlock
# With E2B sandbox support
pip install agent-airlock[sandbox]
# With FastMCP integration
pip install agent-airlock[mcp]
# Everything
pip install agent-airlock[all]
# E2B key for sandbox execution
export E2B_API_KEY="your-key-here"
Agent-Airlock mitigates the OWASP Top 10 for LLMs (2025):
| OWASP Risk | Mitigation |
|------------|------------|
| LLM01: Prompt Injection | Strict type validation blocks injected payloads |
| LLM02: Sensitive Data Disclosure | Network airgap prevents data exfiltration |
| LLM05: Improper Output Handling | PII/secret masking sanitizes outputs |
| LLM06: Excessive Agency | Rate limits + RBAC + capability gating prevent runaway agents |
| LLM07: System Prompt Leakage | Honeypot returns fake data instead of errors |
| LLM09: Misinformation | Ghost argument rejection blocks hallucinated params |
Agent-Airlock secures AI agent systems in production:
| Project | Use Case |
|---------|----------|
| Attri.ai | Multi-agent orchestration platform — governance & security layer |
| FerrumDeck | AgentOps control plane — deny-by-default tool execution |
| Mnemo | MCP-native memory database — secure tool call validation |
Using Agent-Airlock in production? Open a PR to add your project!
| Metric | Value |
|--------|-------|
| Tests | 1,157 passing |
| Coverage | 79%+ (enforced in CI) |
| Lines of Code | ~25,900 |
| Validation overhead | <50ms |
| Sandbox cold start | ~125ms |
| Sandbox warm pool | <200ms |
| Framework integrations | 9 |
| Core dependencies | 0 (Pydantic only) |
| Resource | Description |
|----------|-------------|
| Examples | 9 framework integrations with copy-paste code |
| Security Guide | Production deployment checklist |
| API Reference | Every function, every parameter |
Built by Sattyam Jain — AI infrastructure engineer.
This started as an internal tool after watching an agent hallucinate its way through a production database. Now it's yours.
We review every PR within 48 hours.
git clone https://github.com/sattyamjjain/agent-airlock
cd agent-airlock
pip install -e ".[dev]"
pytest tests/ -v
If Agent-Airlock saved your production database:
Built with 🛡️ by Sattyam Jain
<sub>Making AI agents safe, one decorator at a time.</sub>
Sources: This README follows best practices from awesome-readme, Best-README-Template, and the GitHub Blog.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/trust"
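The three curl commands above can also be scripted with the standard library alone, applying the retry policy this page publishes in its invocation guide (3 attempts, 500/1500/3500 ms backoff, retry on HTTP 429/503 and network timeouts). `fetch_snapshot` is a hypothetical helper name for this sketch.

```python
import json
import time
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

SNAPSHOT_URL = "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/snapshot"
BACKOFF_MS = [500, 1500, 3500]  # mirrors the retryPolicy advertised on this page

def fetch_snapshot(url=SNAPSHOT_URL, opener=urlopen, max_attempts=3, backoff_ms=BACKOFF_MS):
    """Fetch a JSON endpoint, retrying on HTTP 429/503 and network errors."""
    for attempt in range(max_attempts):
        try:
            with opener(url, timeout=5) as resp:
                return json.load(resp)
        except HTTPError as err:
            # Only the retryable status codes get another attempt.
            if err.code not in (429, 503) or attempt == max_attempts - 1:
                raise
        except URLError:
            if attempt == max_attempts - 1:
                raise
        time.sleep(backoff_ms[attempt] / 1000)
```

The same helper works for the `/contract` and `/trust` URLs by passing a different `url`.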
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T02:26:46.487Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Sattyamjjain",
"href": "https://github.com/sattyamjjain/agent-airlock",
"sourceUrl": "https://github.com/sattyamjjain/agent-airlock",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:34.103Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:34.103Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "5 GitHub stars",
"href": "https://github.com/sattyamjjain/agent-airlock",
"sourceUrl": "https://github.com/sattyamjjain/agent-airlock",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:34.103Z",
"isPublic": true
},
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-sattyamjjain-agent-airlock/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]