Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Xpersona Agent
Complete startup metrics command center — from raw data to investor-ready dashboards. Covers every stage (pre-seed to Series B+), every model (SaaS, marketplace, consumer, hardware), with diagnostic frameworks, benchmark databases, and board-ready reporting.

---
name: afrexai-startup-metrics-engine
model: default
version: 1.0.0
description: >
  Complete startup metrics command center — from raw data to investor-ready dashboards. Covers every stage (pre-seed to Series B+), every model (SaaS, marketplace, consumer, hardware), with diagnostic frameworks, benchmark databases, and board-ready reporting.
tags: [startup, metrics, saas, kpis, unit-economics, growth, fundraising, i
clawhub skill install skills:1kalin:afrexai-startup-metrics-engine
Overall rank
#62
Adoption
No public adoption signal
Trust
Unknown
Freshness
Last checked Feb 25, 2026
Best For
afrexai-startup-metrics-engine is best for metrics workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require deterministic execution, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, CLAWHUB, runtime-metrics, public facts pack
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Overview
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Openclaw
Artifacts
0
Benchmarks
0
Last release
Unpublished
Install & run
clawhub skill install skills:1kalin:afrexai-startup-metrics-engine
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Public facts
Vendor
Openclaw
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Captured outputs
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
yaml
model_type:
saas:
sub_type: # self-serve | sales-led | PLG | hybrid
pricing: # per-seat | usage-based | flat | tiered
contract: # monthly | annual | multi-year
marketplace:
type: # managed | unmanaged | SaaS-enabled
unit: # GMV | take-rate | transaction
consumer:
type: # subscription | ad-supported | freemium | transactional
engagement_model: # DAU/MAU | session-based | content
hardware_plus_software:
type: # device + subscription | IoT | embedded
Editorial read
Docs source
CLAWHUB
Editorial quality
ready
Your complete system for tracking, diagnosing, and communicating startup health — not just formulas, but the thinking behind what to measure, when, and what to do when numbers go wrong.
Before tracking anything, classify yourself:
Business Model:
model_type:
saas:
sub_type: # self-serve | sales-led | PLG | hybrid
pricing: # per-seat | usage-based | flat | tiered
contract: # monthly | annual | multi-year
marketplace:
type: # managed | unmanaged | SaaS-enabled
unit: # GMV | take-rate | transaction
consumer:
type: # subscription | ad-supported | freemium | transactional
engagement_model: # DAU/MAU | session-based | content
hardware_plus_software:
type: # device + subscription | IoT | embedded
Stage (determines what matters):
| Stage | ARR Range | North Star Focus | Board Cares About |
|-------|-----------|-------------------|-------------------|
| Pre-seed | $0-$50K | Engagement + retention signal | Problem-solution fit evidence |
| Seed | $50K-$500K | Cohort retention + early revenue | Product-market fit signals |
| Series A | $500K-$3M | Growth efficiency + unit economics | LTV:CAC, NDR, growth rate |
| Series B | $3M-$15M | Scalability + operating leverage | Rule of 40, magic number, burn multiple |
| Growth | $15M+ | Capital efficiency + market share | Net margins, NRR, competitive moat |
Layer 1: Health Vitals (track daily)
- Revenue: MRR, ARR, net new MRR
- Growth: MoM growth rate, WoW for early stage
- Retention: Logo churn rate, revenue churn rate
- Cash: Monthly burn, runway in months
Layer 2: Efficiency (track weekly)
- Unit economics: CAC, LTV, LTV:CAC ratio, payback months
- Sales: Pipeline coverage, win rate, sales cycle length
- Product: Activation rate, feature adoption, NPS/CSAT
- Team: Revenue per employee, quota attainment
Layer 3: Strategic (track monthly)
- NDR (Net Dollar Retention)
- Burn multiple
- Rule of 40 score
- Magic number
- Cohort analysis curves
MRR = Σ(active_subscriptions × monthly_price)
ARR = MRR × 12
Net New MRR = New MRR + Expansion MRR - Churned MRR - Contraction MRR
MRR Components:
new_mrr: First-time customer revenue this month
expansion_mrr: Upsell + cross-sell from existing customers
churned_mrr: Revenue lost from customers who left
contraction_mrr: Revenue lost from downgrades (customer stayed)
reactivation_mrr: Revenue from returning churned customers
MoM Growth = (MRR_current - MRR_previous) / MRR_previous
CMGR (Compound Monthly Growth Rate) = (MRR_end / MRR_start)^(1/months) - 1
Why CMGR > MoM: Monthly growth is noisy. CMGR smooths 6-12 month periods for real trend.
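The revenue formulas above can be sketched in Python; the function names and sample figures are illustrative, not part of the skill:

```python
def net_new_mrr(new, expansion, churned, contraction):
    """Net New MRR = New MRR + Expansion MRR - Churned MRR - Contraction MRR."""
    return new + expansion - churned - contraction

def mom_growth(mrr_current, mrr_previous):
    """Month-over-month growth rate."""
    return (mrr_current - mrr_previous) / mrr_previous

def cmgr(mrr_start, mrr_end, months):
    """Compound monthly growth rate over a multi-month window."""
    return (mrr_end / mrr_start) ** (1 / months) - 1

# Example: $10K new, $3K expansion, $2K churned, $1K contraction -> $10K net new MRR
net = net_new_mrr(10_000, 3_000, 2_000, 1_000)
```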
CAC = Total_Sales_Marketing_Spend / New_Customers_Acquired
- Include: salaries, commissions, tools, ads, events, content costs
- Exclude: product/engineering, CS (post-sale)
- Time-lag adjustment: match spend to cohort it generated (typically 1-3 month lag)
Blended CAC vs Channel CAC:
blended_cac = total_spend / total_new_customers
channel_cac = channel_spend / channel_new_customers
# Always track both — blended hides channel problems
LTV = ARPU × Gross_Margin% × Average_Customer_Lifetime
# Or: LTV = ARPU × Gross_Margin% × (1 / Monthly_Churn_Rate)
# Cap at 5 years for conservative estimates
LTV:CAC Ratio — THE ratio:
> 5.0 → Under-investing in growth (spend more!)
3.0-5.0 → Excellent efficiency
1.5-3.0 → Healthy but watch payback period
1.0-1.5 → Marginal — fix churn or reduce CAC
< 1.0 → Burning cash per customer — STOP and fix
CAC Payback = CAC / (Monthly_ARPU × Gross_Margin%)
< 6 months → Elite (PLG companies)
6-12 months → Great
12-18 months → Acceptable for enterprise
> 18 months → Danger zone (unless >130% NDR)
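A minimal sketch of the unit-economics formulas above, with illustrative names; the 5-year lifetime cap follows the conservative-estimate note:

```python
def cac(sales_marketing_spend, new_customers):
    """Fully-loaded sales & marketing spend per new customer."""
    return sales_marketing_spend / new_customers

def ltv(monthly_arpu, gross_margin, monthly_churn_rate, cap_months=60):
    """LTV = ARPU x margin x lifetime; lifetime = 1/churn, capped at 5 years."""
    lifetime_months = min(1 / monthly_churn_rate, cap_months)
    return monthly_arpu * gross_margin * lifetime_months

def cac_payback_months(cac_value, monthly_arpu, gross_margin):
    """Months of gross-margin-adjusted revenue needed to recover CAC."""
    return cac_value / (monthly_arpu * gross_margin)
```

For example, $90K of spend for 30 new customers gives a $3,000 CAC; at $100 ARPU, 80% margin, and 2% monthly churn, LTV is $4,000, an LTV:CAC of about 1.3x (marginal by the scale above).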
Logo Churn Rate = Customers_Lost / Customers_Start_of_Period
Revenue Churn Rate = MRR_Lost / MRR_Start_of_Period
# Revenue churn > logo churn = losing big customers (very bad)
# Revenue churn < logo churn = losing small customers (less bad)
Net Dollar Retention (NDR) = (Starting_MRR + Expansion - Contraction - Churn) / Starting_MRR
> 130% → World-class (Snowflake, Twilio territory)
110-130% → Excellent
100-110% → Good
90-100% → Acceptable but concerning
< 90% → Leaky bucket — growth can't outrun churn
Gross Dollar Retention (GDR) = (Starting_MRR - Contraction - Churn) / Starting_MRR
# NDR without expansion — shows your floor
> 90% → Sticky product
80-90% → Normal for SMB
< 80% → Product or market problem
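NDR and GDR as code, a sketch with illustrative names:

```python
def ndr(starting_mrr, expansion, contraction, churn):
    """Net Dollar Retention: revenue retained from an existing cohort, including expansion."""
    return (starting_mrr + expansion - contraction - churn) / starting_mrr

def gdr(starting_mrr, contraction, churn):
    """Gross Dollar Retention: NDR without expansion -- the retention floor."""
    return (starting_mrr - contraction - churn) / starting_mrr
```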
Burn Multiple = Net_Burn / Net_New_ARR
< 1.0 → Amazing (rare at early stage)
1.0-1.5 → Great
1.5-2.0 → Good
2.0-3.0 → Mediocre
> 3.0 → Bad — inefficient growth
Rule of 40 = Revenue_Growth_Rate% + Profit_Margin%
> 40 → Healthy SaaS (IPO-ready)
# Example: 60% growth + -20% margin = 40 ✓
# Example: 20% growth + 20% margin = 40 ✓
Magic Number = Net_New_ARR_This_Quarter / Sales_Marketing_Spend_Last_Quarter
> 1.0 → Efficient, invest more in S&M
0.5-1.0 → OK, optimize before scaling
< 0.5 → Inefficient — fix before spending more
Hype Ratio = Valuation / ARR
# Reality check on fundraising expectations
# Median SaaS multiples: 6-12x ARR (varies by growth + retention)
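A sketch of the efficiency ratios above (illustrative names; note the Rule of 40 accepts a negative margin, as in the 60% growth / -20% margin example):

```python
def burn_multiple(net_burn, net_new_arr):
    """Cash burned per dollar of net new ARR."""
    return net_burn / net_new_arr

def rule_of_40(revenue_growth_pct, profit_margin_pct):
    """Growth rate plus profit margin, both in percentage points."""
    return revenue_growth_pct + profit_margin_pct

def magic_number(net_new_arr_this_quarter, sm_spend_last_quarter):
    """Sales efficiency: new ARR generated per dollar of prior-quarter S&M spend."""
    return net_new_arr_this_quarter / sm_spend_last_quarter
```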
Monthly Burn = Total_Monthly_Expenses - Total_Monthly_Revenue
Gross Burn = Total_Monthly_Expenses (ignoring revenue)
Net Burn = Gross_Burn - Revenue
Runway = Cash_Balance / Monthly_Net_Burn
> 18 months → Comfortable
12-18 months → Start planning next raise
6-12 months → Urgently fundraising
< 6 months → Default alive or dead calculation needed
Default Alive? = Can_Current_Growth_Rate_Make_Revenue > Expenses_Before_Cash_Runs_Out
# Paul Graham's test — if growing, project the intersection
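The runway and default-alive checks can be sketched as below. The default-alive projection is a deliberate simplification of Paul Graham's test: it assumes flat expenses and a constant MRR growth rate, which a real model should refine:

```python
def runway_months(cash_balance, monthly_net_burn):
    """Months until cash runs out at the current net burn."""
    return cash_balance / monthly_net_burn

def default_alive(mrr, monthly_expenses, mrr_growth_rate, cash):
    """Project MRR forward at a constant growth rate with flat expenses;
    return True if revenue crosses expenses before cash runs out."""
    months = 0
    while cash > 0:
        if mrr >= monthly_expenses:
            return True
        cash -= monthly_expenses - mrr  # burn the monthly shortfall
        mrr *= 1 + mrr_growth_rate
        months += 1
        if months > 600:  # safety bound for non-growing inputs
            return False
    return mrr >= monthly_expenses
```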
Sales Cycle Length = Avg_Days(First_Touch → Closed_Won)
Pipeline Coverage = Total_Pipeline_Value / Revenue_Target
# Need 3-4x for predictable revenue
Win Rate = Deals_Won / Total_Deals_in_Stage
By stage: SQL→Opp (30-40%), Opp→Proposal (50-60%), Proposal→Close (60-70%)
ACV (Annual Contract Value) = Total_Contract_Value / Contract_Years
ASP (Average Selling Price) = Total_Revenue / Deals_Closed
Quota Attainment = Actual_Bookings / Quota_Target
# Healthy org: 60-70% of reps hitting quota
Sales Efficiency = Net_New_ARR / Fully_Loaded_Sales_Cost
> 1.0 → Scalable
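The sales-pipeline formulas above, sketched with illustrative names:

```python
def pipeline_coverage(total_pipeline_value, revenue_target):
    """Multiple of target held in pipeline; 3-4x is the predictability threshold."""
    return total_pipeline_value / revenue_target

def win_rate(deals_won, total_deals_in_stage):
    """Share of deals in a stage that close as won."""
    return deals_won / total_deals_in_stage

def quota_attainment(actual_bookings, quota_target):
    """Bookings delivered as a share of quota."""
    return actual_bookings / quota_target
```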
When a metric is off, don't just report it — diagnose it.
Questions:
- Is this a trend (3+ months) or a blip (1 month)?
- Is it seasonal or structural?
- Did it change gradually or suddenly?
- Which cohorts/segments are affected?
Every metric has upstream drivers. Trace back:
Revenue declining? →
├── New MRR down? → Lead volume? → Conversion rate? → Channel performance?
├── Expansion down? → Upsell attempts? → Product adoption? → CSM activity?
└── Churn up? → Which segment? → Voluntary vs involuntary? → Reasons?
CAC increasing? →
├── Spend up? → Which channels? → CPM/CPC changes?
├── Volume same but cost up? → Market saturation? → Competition?
└── Conversion down? → Funnel stage? → Lead quality? → Sales process?
Find the highest-impact intervention:
- Which single metric, if improved 10%, would cascade the most?
- What's the cheapest/fastest fix vs highest-impact fix?
- Score: Impact (1-5) × Feasibility (1-5) × Speed (1-5)
Convert metric into business language:
- "Churn increased 2%" → "We'll lose $X00K ARR this year at this rate"
- "CAC payback is 18 months" → "Each new customer is cash-negative for 1.5 years"
- "NDR is 95%" → "Even with zero new sales, we shrink 5% annually"
diagnostic_experiment:
hypothesis: "[Metric] is declining because [upstream cause]"
test: "[Specific action] for [time period]"
success_metric: "[Metric] improves by [X%] within [timeframe]"
sample: "[Segment/cohort to test on]"
kill_criteria: "Stop if [negative signal] within [days]"
Aggregate metrics lie. Cohorts tell the truth.
Track each monthly cohort's MRR over time:
| Cohort | Month 0 | Month 1 | Month 3 | Month 6 | Month 12 |
|--------|---------|---------|---------|---------|----------|
| Jan '25 | $50K | $48K | $45K | $42K | $38K |
| Feb '25 | $55K | $53K | $50K | $48K | — |
| Mar '25 | $60K | $58K | $57K | $56K | — |
| Apr '25 | $45K | $44K | $43K | — | — |
Reading this:
- Jan cohort retained 76% at month 12 → mediocre
- Mar cohort retained 95% at month 3 → improving! What changed?
- Apr cohort started smaller but retention looks good
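Reading retention straight from a cohort MRR table can be sketched as follows; the figures are copied from the table above, the structure is illustrative:

```python
# Month-indexed MRR per monthly cohort, as in the table above
cohort_mrr = {
    "Jan '25": {0: 50_000, 1: 48_000, 3: 45_000, 6: 42_000, 12: 38_000},
    "Mar '25": {0: 60_000, 1: 58_000, 3: 57_000, 6: 56_000},
}

def retention(cohort, month):
    """MRR retained at `month`, as a fraction of the cohort's month-0 MRR."""
    return cohort[month] / cohort[0]
```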
cohort_engagement:
week_1_activation: # % completing key action within 7 days
week_4_habit: # % using product 3+ days in week 4
month_3_retention: # % still active at 90 days
# Leading indicators of revenue retention
# If engagement drops, revenue follows 1-3 months later
🚩 Each new cohort retains worse → product-market fit eroding
🚩 Large cohorts churn more → scaling quality issues
🚩 Specific channel cohorts churn fast → bad-fit leads
🚩 Expansion only in old cohorts → pricing/packaging problem
investor_update:
subject: "[Company] — [Month] Update: [One-line headline]"
# 1. TL;DR (3 bullets max)
highlights:
- "ARR: $X (+Y% MoM) — [context]"
- "Key win: [biggest achievement]"
- "Challenge: [biggest problem + what you're doing]"
# 2. Key Metrics Table
metrics:
arr: {current: "", prior_month: "", delta: ""}
mrr: {current: "", growth_mom: ""}
customers: {total: "", new: "", churned: ""}
ndr: ""
burn_rate: ""
runway_months: ""
cash_balance: ""
# 3. What Happened (5-7 bullets)
wins: []
challenges: []
# 4. What's Next (3-5 bullets)
next_month_priorities: []
# 5. Asks (be specific!)
asks:
- intro: "Looking for intro to [person/company] for [reason]"
- advice: "Would love 15 min on [specific topic]"
- hiring: "Seeking [role] — know anyone?"
Slide 1: Business Health Dashboard
ARR: $___ MoM: ___% NDR: ___%
Customers: ___ New: ___ Churned: ___
Runway: ___ months Burn Multiple: ___
Traffic light: 🟢 On track | 🟡 Watch | 🔴 Action needed
Slide 2: Revenue Waterfall
Starting MRR: $___
+ New: $___
+ Expansion: $___
- Contraction: $___
- Churn: $___
= Ending MRR: $___
Slide 3: Unit Economics
CAC: $___ → LTV: $___ → LTV:CAC: ___x
Payback: ___ months
Blended vs top channel efficiency
Quick Ratio = (New MRR + Expansion MRR) / (Churned MRR + Contraction MRR)
> 4.0 → Very healthy growth
2.0-4.0 → Good
1.0-2.0 → Sustainable but slow
< 1.0 → Shrinking
Logo-to-Revenue Retention Gap:
If logo retention 85% but revenue retention 95% → upsell compensates
If logo retention 85% and revenue retention 85% → no expansion = problem
Expansion Revenue % = Expansion MRR / Total New MRR
> 30% → Healthy at scale
# Best SaaS: expansion > new revenue (Twilio was 170% NDR)
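Quick ratio and expansion share, sketched with illustrative names:

```python
def quick_ratio(new_mrr, expansion_mrr, churned_mrr, contraction_mrr):
    """Gross MRR gained per dollar of MRR lost in the same period."""
    return (new_mrr + expansion_mrr) / (churned_mrr + contraction_mrr)

def expansion_revenue_pct(expansion_mrr, total_new_mrr):
    """Share of new MRR coming from existing customers."""
    return expansion_mrr / total_new_mrr
```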
GMV (Gross Merchandise Value) = Total value of transactions on platform
Take Rate = Platform Revenue / GMV
5-15% → Typical for most marketplaces
15-30% → Managed/full-service marketplaces
Supply-side metrics:
supply_liquidity = listings_with_transaction / total_listings
time_to_first_match = avg_days_from_listing_to_sale
Demand-side metrics:
search_to_fill = completed_transactions / searches
repeat_purchase_rate = returning_buyers / total_buyers
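The marketplace ratios above as a sketch (illustrative names):

```python
def take_rate(platform_revenue, gmv):
    """Platform revenue as a share of gross merchandise value."""
    return platform_revenue / gmv

def supply_liquidity(listings_with_transaction, total_listings):
    """Share of listings that actually transact."""
    return listings_with_transaction / total_listings
```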
DAU/MAU Ratio:
> 50% → Exceptional (messaging apps)
25-50% → Strong habit (social, productivity)
10-25% → Good (media, entertainment)
< 10% → Weak engagement
Viral Coefficient (K-factor) = Invites_per_User × Conversion_Rate
> 1.0 → Viral growth (each user brings >1 new user)
0.5-1.0 → Amplified growth
< 0.5 → Not viral — need paid acquisition
Free-to-Paid Conversion:
PLG benchmark: 2-5% of free users convert
Freemium benchmark: 1-3%
Enterprise self-serve: 5-15%
Time to Value = Time from signup to "aha moment"
# Reduce this aggressively — strongest lever for activation
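The consumer engagement formulas, sketched with illustrative names and figures:

```python
def dau_mau_ratio(dau, mau):
    """Stickiness: share of monthly actives who show up on a given day."""
    return dau / mau

def viral_coefficient(invites_per_user, invite_conversion_rate):
    """K-factor: new users each existing user brings in; > 1.0 is viral."""
    return invites_per_user * invite_conversion_rate
```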
| Vanity (Avoid) | Real (Track) |
|----------------|--------------|
| Total signups | Activated users (completed key action) |
| Page views | Engaged sessions (>2 min or action taken) |
| "Pipeline" | Qualified pipeline (met ICP criteria) |
| Gross revenue | Net revenue (after refunds + credits) |
| Total customers | Active customers (logged in last 30d) |
| Downloads | WAU/MAU |
| "Partnerships" | Revenue from partnerships |
🚩 Counting annual contracts as MRR at signing (vs. monthly recognition)
🚩 Excluding "one-time" churns from churn rate
🚩 Using gross revenue instead of net
🚩 Measuring CAC without fully-loaded costs
🚩 Cherry-picking best cohort as "representative"
🚩 Counting reactivations as new customers
🚩 Using "committed ARR" (signed but not live)
🚩 Trailing-12-month NDR when recent cohorts are worse
1. Audit channel efficiency — kill bottom 20% channels
2. Improve activation rate (reduces wasted spend)
3. Increase conversion at each funnel stage (+10% each = compound effect)
4. Shift mix: more organic/PLG, less paid
5. Reduce sales cycle length (lower cost per deal)
6. Tighten ICP — stop selling to bad-fit customers
1. Segment: which customers churn? (Size, channel, use case)
2. Time: when do they churn? (Month 1-3 = onboarding, 6-12 = value, 12+ = competition)
3. Reason: exit survey + CS interviews (top 3 reasons)
4. Fix activation if month 1-3 churn
5. Fix value delivery if month 6-12 churn
6. Fix switching cost / competitive moat if 12+ churn
1. Check: is TAM exhausted in current segment? → Expand to adjacent
2. Check: conversion rates declining? → Product or message fatigue
3. Check: CAC rising with flat volume? → Channel saturation
4. Check: expansion revenue flat? → Packaging/pricing problem
5. Check: sales cycle lengthening? → Market conditions or competition
Metrics investors care about BY STAGE:
Pre-seed: Engagement, retention curves, market size
Seed: MoM growth (15%+), retention cohorts, early unit economics
Series A: $1M+ ARR, 3x+ YoY growth, LTV:CAC > 3, NDR > 100%
Series B: $5M+ ARR, path to Rule of 40, burn multiple < 2, sales efficiency
Track metrics per product line AND blended. Watch for cross-subsidization where one product's margins mask another's losses.
MRR is estimated, not contracted. Track committed vs consumed. Expansion is automatic (usage growth), so NDR is naturally higher — compare to usage-based peers, not seat-based.
If NDR > 100% only because of price increases (not organic expansion), this is fragile. Separate price-driven vs usage-driven expansion.
Track leading indicators: activation rate, engagement frequency, NPS, waitlist growth, organic traffic, time-to-value. Revenue metrics come later — don't force them.
Use YoY comparisons, not MoM. Adjust cohort analysis for seasonal patterns. Build seasonal forecast models.
Built by AfrexAI — turning data into revenue.
Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/trust"
Operational fit
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLAW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "CLAWHUB",
"generatedAt": "2026-04-17T00:11:49.241Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLAW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "metrics",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLAW|unknown|profile capability:metrics|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Openclaw",
"href": "https://github.com/openclaw/skills/tree/main/skills/1kalin/afrexai-startup-metrics-engine",
"sourceUrl": "https://github.com/openclaw/skills/tree/main/skills/1kalin/afrexai-startup-metrics-engine",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-startup-metrics-engine/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]