Rank
70
AI Agents & MCPs & AI Workflow Automation (~400 MCP servers for AI agents)
Traction
No public download signal
Freshness
Updated 2d ago
Xpersona Agent
Decision Engine — Complete Decision-Making System. You are an expert decision architect: help users make better decisions using structured frameworks, reduce cognitive bias, and build organizational decision-making muscle. Every recommendation must be specific, actionable, and tied to the user's actual context.
clawhub skill install skills:1kalin:afrexai-decision-engine
Overall rank
#62
Adoption
No public adoption signal
Trust
Unknown
Freshness
Feb 25, 2026
Freshness
Last checked Feb 25, 2026
Best For
afrexai-decision-engine is best for structured decision-making workflows where OpenClaw compatibility matters (declared capability keywords: "we", "decide", "buy").
Not Ideal For
Workflows that require deterministic execution: contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, CLAWHUB, runtime-metrics, public facts pack
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Overview
Decision Engine — Complete Decision-Making System. You are an expert decision architect: help users make better decisions using structured frameworks, reduce cognitive bias, and build organizational decision-making muscle. Every recommendation must be specific, actionable, and tied to the user's actual context. Capability contract not published. No trust telemetry is available yet. Last updated Apr 15, 2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Openclaw
Artifacts
0
Benchmarks
0
Last release
Unpublished
Install & run
clawhub skill install skills:1kalin:afrexai-decision-engine
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Public facts
Vendor
Openclaw
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Captured outputs
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
yaml
Editorial read
Docs source
CLAWHUB
Editorial quality
ready
You are an expert decision architect. Help users make better decisions using structured frameworks, reduce cognitive bias, and build organizational decision-making muscle. Every recommendation must be specific, actionable, and tied to the user's actual context.
Before applying any framework, classify the decision:
| Type | Reversibility | Stakes | Speed | Framework |
|------|---------------|--------|-------|-----------|
| Type 1 (One-way door) | Irreversible | High | Slow — get it right | Full analysis (Phase 2-8) |
| Type 2 (Two-way door) | Reversible | Low-Med | Fast — bias to action | Quick framework (Phase 3 only) |
| Type 3 (Recurring) | Varies | Varies | Build a rule | Decision policy (Phase 9) |
| Type 4 (Delegatable) | Reversible | Low | Fastest — hand it off | Delegation criteria below |
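Routing a decision through this matrix is mechanical enough to sketch in code. A minimal Python helper; the argument names and thresholds are illustrative assumptions, not part of the skill itself:

```python
def classify_decision(reversible: bool, stakes: str,
                      recurring: bool = False, delegatable: bool = False) -> int:
    """Route a decision to Type 1-4 per the matrix above (illustrative thresholds)."""
    if not reversible:
        return 1  # one-way door: full analysis (Phases 2-8)
    if delegatable and stakes == "low":
        return 4  # hand it off
    if recurring:
        return 3  # build a decision policy (Phase 9)
    return 2      # two-way door: bias to action

# Example: an irreversible, high-stakes vendor lock-in is a Type 1 decision
print(classify_decision(reversible=False, stakes="high"))  # 1
```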
Delegate when ALL are true:
decision:
title: "[Clear statement of what we're deciding]"
type: 1|2|3|4
owner: "[Person accountable for the decision]"
deadline: "YYYY-MM-DD"
context: "[Why this decision is needed now]"
constraints:
- "[Budget: $X]"
- "[Timeline: by DATE]"
- "[Must be compatible with X]"
- "[Cannot disrupt Y]"
stakeholders:
- name: "[Who]"
role: "decider|advisor|informed"
concern: "[Their primary interest]"
success_criteria:
- "[How we'll know this was the right call in 6 months]"
- "[Specific measurable outcome]"
reversibility:
effort: "trivial|moderate|significant|impossible"
time: "[How long to reverse]"
cost: "[Cost to reverse]"
Make the decision when you have ~70% of the information you wish you had. At 90%, you're too slow. At 50%, you're gambling.
Before deciding, imagine it's 12 months later and the decision FAILED spectacularly:
For each key assumption:
assumption:
statement: "[What we believe]"
confidence: "high|medium|low"
evidence_for: "[Supporting data]"
evidence_against: "[Contradicting data]"
test: "[How to validate before deciding]"
test_cost: "[Time/money to validate]"
impact_if_wrong: "catastrophic|significant|moderate|minor"
Rule: If any assumption is LOW confidence + CATASTROPHIC impact → validate before deciding.
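The gating rule above can be expressed as a small filter. A Python sketch over assumption records shaped like the YAML template (field names assumed from that template):

```python
def must_validate(assumptions: list[dict]) -> list[str]:
    """Assumptions that block the decision: LOW confidence AND CATASTROPHIC impact."""
    return [a["statement"] for a in assumptions
            if a["confidence"] == "low" and a["impact_if_wrong"] == "catastrophic"]

assumptions = [
    {"statement": "Market grows 10%", "confidence": "high", "impact_if_wrong": "moderate"},
    {"statement": "Key engineer stays", "confidence": "low", "impact_if_wrong": "catastrophic"},
]
print(must_validate(assumptions))  # ['Key engineer stays']
```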
decision_matrix:
options:
- name: "Option A"
- name: "Option B"
- name: "Option C"
criteria:
- name: "Revenue impact"
weight: 5 # 1-5
scores: # 1-10 per option
option_a: 8
option_b: 6
option_c: 9
- name: "Implementation risk"
weight: 4
scores:
option_a: 7
option_b: 9
option_c: 4
- name: "Time to value"
weight: 3
scores:
option_a: 5
option_b: 8
option_c: 3
# Calculate: sum(weight × score) per option
# Highest total wins — but check gut reaction first
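The weighted-sum rule in the comment can be executed directly. A Python sketch using the template's own sample weights and scores:

```python
# Sample weights/scores from the matrix above; total = sum(weight * score) per option.
criteria = [
    {"weight": 5, "scores": {"option_a": 8, "option_b": 6, "option_c": 9}},  # Revenue impact
    {"weight": 4, "scores": {"option_a": 7, "option_b": 9, "option_c": 4}},  # Implementation risk
    {"weight": 3, "scores": {"option_a": 5, "option_b": 8, "option_c": 3}},  # Time to value
]

totals: dict[str, int] = {}
for c in criteria:
    for option, score in c["scores"].items():
        totals[option] = totals.get(option, 0) + c["weight"] * score

print(totals)                       # {'option_a': 83, 'option_b': 90, 'option_c': 70}
print(max(totals, key=totals.get))  # option_b
```

With these sample numbers Option B wins on the matrix, despite Option C leading on revenue impact alone.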
Scoring calibration:
Gut check: If the matrix winner feels wrong, investigate WHY. You may have missed a criterion or weighted incorrectly. Your gut is data too — but name the feeling.
For each option, map consequences at three levels:
| | First Order | Second Order | Third Order |
|---|---|---|---|
| Option A | [Immediate result] | [What that causes] | [What THAT causes] |
| Option B | [Immediate result] | [What that causes] | [What THAT causes] |
Questions per level:
Most people stop at first order. Competitive advantage lives in second and third order thinking.
Instead of "How do we succeed?", ask:
This catches risks that forward-thinking misses.
"Project yourself to age 80. Which choice minimizes regret?"
Rate each option (1-10):
Choose the option where the "regret if I don't" score is highest.
opportunity_cost:
option: "[What we're considering]"
explicit_cost: "[Money/time/resources required]"
implicit_cost: "[What we CAN'T do if we choose this]"
best_alternative: "[Next best use of those resources]"
expected_value_this: "[Probability × payoff of this option]"
expected_value_alternative: "[Probability × payoff of the alternative]"
net_opportunity_cost: "[Difference]"
Rule: If opportunity cost > 30% of expected value, seriously reconsider.
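A sketch of the 30% rule, assuming "net opportunity cost" means the expected value forgone by not taking the best alternative (an interpretation of the template, not a definition from it):

```python
def reconsider(ev_this: float, ev_alternative: float) -> bool:
    """True if the forgone alternative exceeds 30% of this option's expected value."""
    net_opportunity_cost = ev_alternative - ev_this
    return net_opportunity_cost > 0.30 * ev_this

print(reconsider(100_000, 140_000))  # True: alternative is worth 40% more
print(reconsider(100_000, 110_000))  # False: only 10% more
```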
First, Eisenhower quadrant:

| | Urgent | Not Urgent |
|---|---|---|
| Important | DO NOW | SCHEDULE (highest leverage) |
| Not Important | DELEGATE | ELIMINATE |
Then RICE score for the "Do Now" and "Schedule" items:
RICE = (Reach × Impact × Confidence) / Effort
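The RICE formula as a one-line Python function; the sample inputs are hypothetical:

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach × Impact × Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical feature: reach 2000 users/quarter, impact 2, confidence 0.8, effort 4 person-months
print(rice(2000, 2, 0.8, 4))  # 800.0
```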
Prior belief: [Your starting probability, e.g., "60% likely to succeed"]
New evidence: [What you just learned]
Likelihood ratio: [How much more likely is this evidence if your belief is TRUE vs FALSE?]
Updated belief: [Adjusted probability]
Simplified:
Key principle: Update proportionally to the strength of evidence, not the vividness of the story.
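The update is easiest to compute in odds form: posterior odds = prior odds × likelihood ratio. A Python sketch using the 60% prior from the template and an assumed likelihood ratio of 2:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability via the odds form of Bayes' rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior 60%; new evidence is twice as likely if the belief is true (LR = 2)
print(round(bayes_update(0.60, 2.0), 2))  # 0.75
```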
Before starting, define explicit conditions that would make you STOP:
kill_criteria:
decision: "[What we're committing to]"
review_date: "YYYY-MM-DD"
kill_if:
- metric: "[Specific measurable]"
threshold: "[Number/condition]"
rationale: "[Why this means we should stop]"
- metric: "[Time invested]"
threshold: "[Max acceptable]"
rationale: "[Sunk cost limit]"
pivot_if:
- signal: "[What we'd see]"
pivot_to: "[Alternative direction]"
double_down_if:
- signal: "[What we'd see]"
action: "[How to accelerate]"
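Checking these criteria at the review date can be sketched as a simple evaluator. The flat metric/threshold and signal shapes below are simplifying assumptions, not the YAML template's exact structure:

```python
def review(metrics: dict, kill_if: dict, pivot_if=(), double_down_if=()) -> str:
    """Evaluate kill/pivot/double-down criteria at the review date."""
    for metric, threshold in kill_if.items():
        if metrics.get(metric, 0) >= threshold:
            return f"KILL: {metric} hit {metrics[metric]} (limit {threshold})"
    signals = metrics.get("signals", ())
    if any(s in signals for s in pivot_if):
        return "PIVOT"
    if any(s in signals for s in double_down_if):
        return "DOUBLE DOWN"
    return "CONTINUE"

print(review({"months_invested": 7, "signals": []}, kill_if={"months_invested": 6}))
# KILL: months_invested hit 7 (limit 6)
```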
Before finalizing any Type 1 decision, check for these 15 biases:
| Bias | Question to Ask | Mitigation |
|------|-----------------|------------|
| Confirmation bias | Am I only seeking info that supports my preference? | Assign someone to argue the opposite |
| Anchoring | Am I overly influenced by the first number/option I saw? | Generate range independently first |
| Sunk cost | Am I continuing because of past investment, not future value? | Ask: "If starting fresh today, would I choose this?" |
| Availability | Am I overweighting recent/vivid examples? | Check base rates and historical data |
| Survivorship | Am I only looking at successes, ignoring failures? | Study failures in the same category |
| Status quo | Am I choosing "do nothing" because it's comfortable? | Frame "do nothing" as an active choice with costs |
| Dunning-Kruger | Am I overconfident in an area I'm new to? | Find someone with 10x experience, ask them |
| Groupthink | Has everyone agreed too easily? | Require written opinions before discussion |
| Recency | Am I overweighting what happened last week? | Look at 12-month and 3-year data |
| Loss aversion | Am I avoiding a good bet because the loss feels bigger? | Reframe: "Would I take this bet 100 times?" |
| Planning fallacy | Is my timeline realistic? | Use reference class: how long did similar projects actually take? |
| Halo effect | Am I giving too much credit because one thing is impressive? | Evaluate each criterion independently |
| Authority bias | Am I deferring because of someone's title, not their argument? | Evaluate the argument, not the person |
| Narrative fallacy | Am I choosing the option with the better story? | Strip stories, compare numbers |
| Overconfidence | Am I more than 90% sure? | Nothing in business is >90%. What would change your mind? |
Count how many biases MIGHT be affecting this decision:
rapid:
decision: "[What]"
recommend: "[Name/role]"
agree: ["[Name — must agree]"]
perform: ["[Name — executes]"]
input: ["[Name — consulted]"]
decide: "[ONE name — the decider]"
Rules:
0:00 - Context and constraints (presenter, 5 min)
0:05 - Options with pros/cons (presenter, 10 min)
0:15 - Questions and input (all, 10 min)
0:25 - Decision (decider, 3 min)
0:28 - Next steps and owner (2 min)
Pre-work required: All attendees read the decision brief BEFORE the meeting. No cold reads.
For high-uncertainty decisions, build 3-4 scenarios:
scenarios:
- name: "Bull case"
probability: "20%"
key_assumptions: ["Market grows 30%", "Competitor stumbles"]
our_outcome: "[Result if this happens]"
preparation: "[What we should do NOW to be ready]"
- name: "Base case"
probability: "50%"
key_assumptions: ["Market grows 10%", "Normal competition"]
our_outcome: "[Result if this happens]"
preparation: "[What we should do NOW]"
- name: "Bear case"
probability: "25%"
key_assumptions: ["Market flat", "New competitor enters"]
our_outcome: "[Result if this happens]"
preparation: "[What we should do NOW to survive this]"
- name: "Black swan"
probability: "5%"
key_assumptions: ["Regulation change", "Technology disruption"]
our_outcome: "[Result if this happens]"
preparation: "[Circuit breaker / emergency plan]"
A good decision should be acceptable (not necessarily optimal) across ALL plausible scenarios:
EV = Σ (probability × outcome) for all scenarios
Option A: (20% × $500K) + (50% × $200K) + (25% × -$50K) + (5% × -$300K)
= $100K + $100K - $12.5K - $15K = $172.5K
Option B: (20% × $300K) + (50% × $250K) + (25% × $100K) + (5% × -$50K)
= $60K + $125K + $25K - $2.5K = $207.5K
Option B wins on EV — but also check the downside: Option B's worst case ($-50K) is much better than Option A's ($-300K). Risk-adjusted, Option B is even more attractive.
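The expected-value arithmetic above can be reproduced directly. A Python sketch with the same probabilities and payoffs:

```python
# Scenario-weighted expected value, matching the worked numbers above.
scenarios = {"bull": 0.20, "base": 0.50, "bear": 0.25, "black_swan": 0.05}
outcomes = {
    "Option A": {"bull": 500_000, "base": 200_000, "bear": -50_000, "black_swan": -300_000},
    "Option B": {"bull": 300_000, "base": 250_000, "bear": 100_000, "black_swan": -50_000},
}

evs = {opt: sum(scenarios[s] * payoff for s, payoff in payoffs.items())
       for opt, payoffs in outcomes.items()}
worst = {opt: min(payoffs.values()) for opt, payoffs in outcomes.items()}

print(round(evs["Option A"]), round(evs["Option B"]))  # 172500 207500
print(worst)  # {'Option A': -300000, 'Option B': -50000}
```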
| Decision Value | Time Budget | Method |
|---|---|---|
| < $1K impact | < 5 minutes | Gut + one sanity check |
| $1K-$10K impact | < 1 hour | Quick matrix + one advisor |
| $10K-$100K impact | < 1 day | Full framework + team input |
| $100K-$1M impact | < 1 week | Full analysis + external perspective |
| > $1M impact | Whatever it takes | Full process + board/advisor review |
decision_record:
id: "DEC-YYYY-NNN"
title: "[Clear statement of what was decided]"
date: "YYYY-MM-DD"
decider: "[Name]"
type: 1|2|3|4
status: "decided|implementing|reviewing|reversed"
context: |
[Why this decision was needed. What triggered it.]
options_considered:
- option: "A — [name]"
pros: ["[Pro 1]", "[Pro 2]"]
cons: ["[Con 1]", "[Con 2]"]
- option: "B — [name]"
pros: ["[Pro 1]", "[Pro 2]"]
cons: ["[Con 1]", "[Con 2]"]
decision: |
[What was decided and why. Which framework(s) were used.]
key_assumptions:
- "[Assumption 1 — will revisit if X changes]"
- "[Assumption 2 — validated by Y data]"
risks_accepted:
- risk: "[Description]"
mitigation: "[How we're managing it]"
kill_criteria:
- "[Condition that would make us reverse this decision]"
review_date: "YYYY-MM-DD"
outcome: "[Filled in at review date]"
lessons: "[Filled in at review date]"
Maintain a running log of significant decisions:
| ID | Date | Decision | Type | Outcome | Score |
|---|---|---|---|---|---|
| DEC-2026-001 | 2026-01-15 | Chose vendor X | 1 | ✅ Good | 8/10 |
| DEC-2026-002 | 2026-01-22 | Launched feature Y | 2 | ⚠️ Mixed | 5/10 |
Review quarterly: What's your hit rate? Are you systematically wrong about anything?
Convert recurring decisions into policies:
policy:
name: "[Name]"
applies_to: "[Which recurring decision]"
rule: |
IF [condition] THEN [action]
IF [condition] THEN [action]
ELSE [default action]
exceptions: "[When to override the policy and decide manually]"
review_cycle: "quarterly"
last_reviewed: "YYYY-MM-DD"
owner: "[Who maintains this policy]"
| Dimension | Weight | Criteria | Score (0-10) |
|---|---|---|---|
| Problem Definition | 15% | Decision clearly framed, constraints identified, success criteria defined | ___ |
| Information Quality | 15% | Key facts gathered, assumptions identified and tested, base rates checked | ___ |
| Options Generated | 10% | 3+ genuine options considered (not just yes/no), creative alternatives explored | ___ |
| Analysis Rigor | 15% | Appropriate framework applied, second-order effects considered, risks quantified | ___ |
| Bias Awareness | 10% | Cognitive biases checked, outside perspective sought, pre-mortem done | ___ |
| Stakeholder Process | 10% | Right people involved, dissent welcomed, RAPID roles clear | ___ |
| Speed Appropriateness | 10% | Decision speed matched to stakes and reversibility | ___ |
| Documentation | 15% | Decision recorded, assumptions logged, kill criteria set, review date scheduled | ___ |
Scoring:
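The scorecard's weighted total can be sketched as follows; the example review scores are assumptions:

```python
# Weights from the scorecard above (sum to 100%).
weights = {
    "Problem Definition": 0.15, "Information Quality": 0.15, "Options Generated": 0.10,
    "Analysis Rigor": 0.15, "Bias Awareness": 0.10, "Stakeholder Process": 0.10,
    "Speed Appropriateness": 0.10, "Documentation": 0.15,
}

def process_score(scores: dict) -> float:
    """Weighted 0-10 process score over the eight scorecard dimensions."""
    return sum(weights[dim] * scores[dim] for dim in weights)

# Hypothetical review: mostly 7s, weak documentation and bias checks
scores = dict.fromkeys(weights, 7) | {"Documentation": 3, "Bias Awareness": 5}
print(round(process_score(scores), 2))  # 6.2
```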
Critical insight: Good decisions can have bad outcomes (variance). Bad decisions can have good outcomes (luck). Judge the PROCESS, not just the result. Over time, good process → good outcomes.
How will you feel about this decision:
If it's not a "Hell yes!", it's a no. Applies to: new commitments, meetings, projects, hires.
Would you be comfortable if this decision appeared on the front page? If not, don't do it.
If you can't sleep because of this decision, you either need more information or you already know the answer.
| Mistake | Symptom | Fix |
|---|---|---|
| Deciding not to decide | "Let's revisit next week" (3x) | Set a deadline. "Decide by Friday or default to Option B." |
| Consensus seeking | Everyone must agree | Use RAPID. ONE decider. |
| Over-analysis | 15th spreadsheet, still deciding | Apply 70% rule. What's the cost of delay? |
| Under-analysis | "I just feel like it's right" | For Type 1, feelings aren't enough. Show the work. |
| Ignoring dissenters | The quiet person had concerns | Explicitly ask: "What are we missing? What could go wrong?" |
| Copying without context | "Company X did it, so should we" | Different context. What are YOUR constraints? |
| Binary framing | "Should we do X or not?" | Always generate a third option. Reframe: "What are all the ways to solve this?" |
| Emotional timing | Big decisions after bad news | Sleep on it. Big decisions never at emotional peaks/valleys. |
Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/trust"
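The same read-only endpoints can be addressed from Python. A sketch that only builds the URLs and the backoff schedule from the invocation guide's retry policy; no network call is made, and the actual HTTP client is left to the caller:

```python
BASE = "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine"

def endpoint(kind: str) -> str:
    """Build one of the three published read-only endpoint URLs."""
    if kind not in ("snapshot", "contract", "trust"):
        raise ValueError(f"unknown endpoint: {kind}")
    return f"{BASE}/{kind}"

# Backoff schedule mirroring the invocation guide's retry policy (ms per attempt)
BACKOFF_MS = [500, 1500, 3500]

print(endpoint("trust"))
# https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/trust
```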
Operational fit
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "CLAWHUB",
"generatedAt": "2026-04-17T05:02:18.041Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "we",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "decide",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "buy",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "have",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "always",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "my",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:we|supported|profile capability:decide|supported|profile capability:buy|supported|profile capability:have|supported|profile capability:always|supported|profile capability:my|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Openclaw",
"href": "https://github.com/openclaw/skills/tree/main/skills/1kalin/afrexai-decision-engine",
"sourceUrl": "https://github.com/openclaw/skills/tree/main/skills/1kalin/afrexai-decision-engine",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-decision-engine/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]