Crawler Summary

model-hierarchy answer-first brief

Cost-optimize AI agent operations by routing tasks to appropriate models based on complexity. Use this skill when: (1) deciding which model to use for a task, (2) spawning sub-agents, (3) considering cost efficiency, (4) the current model feels like overkill for the task. Triggers: "model routing", "cost optimization", "which model", "too expensive", "spawn agent".

Capability contract not published. No trust telemetry is available yet. 321 GitHub stars reported by the source. Last updated 2/25/2026.

Freshness

Last checked 2/25/2026

Best For

model-hierarchy is best for workflows that use its declared "handle" and "figure" capabilities, where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLAW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 100/100

model-hierarchy

Cost-optimize AI agent operations by routing tasks to appropriate models based on complexity. Use this skill when: (1) deciding which model to use for a task, (2) spawning sub-agents, (3) considering cost efficiency, (4) the current model feels like overkill for the task. Triggers: "model routing", "cost optimization", "which model", "too expensive", "spawn agent".

OpenClaw · self-declared

Public facts

5

Change events

1

Artifacts

0

Freshness

Feb 25, 2026

Verified: editorial-content · No verified compatibility signals · 321 GitHub stars

Capability contract not published. No trust telemetry is available yet. 321 GitHub stars reported by the source. Last updated 2/25/2026.

321 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Feb 25, 2026

Vendor

Zscole

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified: editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 321 GitHub stars reported by the source. Last updated 2/25/2026.

Setup snapshot

git clone https://github.com/zscole/model-hierarchy-skill.git

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified: editorial-content
Vendor (1)

Vendor

Zscole

profile · medium confidence
Observed Feb 25, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Feb 25, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

321 GitHub stars

profile · medium confidence
Observed Feb 25, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLAW

Extracted files

0

Examples

5

Snippets

0

Languages

typescript

Parameters

Executable Examples

text

function selectModel(task):
    # Rule 1: Vision override (Tier 1/2 includes text-only models)
    if task.requiresImageInput or task.requiresVision:
        return VISION_CAPABLE_MODEL  # e.g. Kimi K2.5, GPT-4o, Gemini, Claude; do not use GLM 5 or other text-only
    
    # Rule 2: Escalation override
    if task.previousAttemptFailed:
        return nextTierUp(task.previousModel)
    
    # Rule 3: Explicit complexity signals
    if task.hasSignal("debug", "architect", "design", "security"):
        return TIER_3
    
    if task.hasSignal("write", "code", "summarize", "analyze"):
        return TIER_2
    
    # Rule 4: Default classification
    complexity = classifyTask(task)
    
    if complexity == ROUTINE:
        return TIER_1
    elif complexity == MODERATE:
        return TIER_2
    else:
        return TIER_3

yaml

# config.yml - set default model
model: anthropic/claude-sonnet-4

# In session, switch models
/model opus  # upgrade for complex task
/model deepseek  # downgrade for routine

# Spawn sub-agent on cheap model
sessions_spawn:
  task: "Fetch and parse these 50 URLs"
  model: deepseek

yaml

# Tier 1 with vision — Kimi K2.5 (multimodal)
model: openrouter/moonshotai/kimi-k2.5
# Heartbeats, cron, image-involving tasks: K2.5 handles text and vision.

# Tier 1 text-only — GLM 5 (no vision)
# model: openrouter/z-ai/glm-5  # exact ID TBD on OpenRouter Z.AI
# Routine text-only only; for image tasks use Kimi K2.5 or another vision-capable model.

text

# In CLAUDE.md or project instructions
When spawning background agents, use claude-3-haiku for:
- File operations
- Simple searches  
- Status checks

Reserve claude-sonnet-4 for:
- Code generation
- Analysis tasks

python

def get_model_for_task(task_description: str) -> str:
    routine_signals = ['read', 'fetch', 'check', 'list', 'format', 'status']
    complex_signals = ['debug', 'architect', 'design', 'security', 'why']
    
    desc_lower = task_description.lower()
    
    if any(signal in desc_lower for signal in complex_signals):
        return "claude-opus-4"
    elif any(signal in desc_lower for signal in routine_signals):
        return "deepseek-v3"
    else:
        return "claude-sonnet-4"

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLAW

Docs source

GITHUB OPENCLAW

Editorial quality

ready

Cost-optimize AI agent operations by routing tasks to appropriate models based on complexity. Use this skill when: (1) deciding which model to use for a task, (2) spawning sub-agents, (3) considering cost efficiency, (4) the current model feels like overkill for the task. Triggers: "model routing", "cost optimization", "which model", "too expensive", "spawn agent".

Full README

name: model-hierarchy
description: >
  Cost-optimize AI agent operations by routing tasks to appropriate models
  based on complexity. Use this skill when: (1) deciding which model to use
  for a task, (2) spawning sub-agents, (3) considering cost efficiency,
  (4) the current model feels like overkill for the task. Triggers: "model
  routing", "cost optimization", "which model", "too expensive", "spawn agent".


Model Hierarchy

Route tasks to the cheapest model that can handle them. Most agent work is routine.

Core Principle

80% of agent tasks are janitorial. File reads, status checks, formatting, simple Q&A. These don't need expensive models. Reserve premium models for problems that actually require deep reasoning.

Model Tiers

Tier 1: Cheap ($0.10-0.50/M tokens)

| Model | Input | Output | Best For |
|-------|-------|--------|----------|
| DeepSeek V3 | $0.14 | $0.28 | General routine work |
| GPT-4o-mini | $0.15 | $0.60 | Quick responses |
| Claude Haiku | $0.25 | $1.25 | Fast tool use |
| Gemini Flash | $0.075 | $0.30 | High volume |
| GLM 5 (Zhipu) | (OpenRouter Z.AI) | (OpenRouter Z.AI) | Routine + moderate text; 200K context; text-only, do not use for image/vision |
| Kimi K2.5 (Moonshot) | $0.45 | $2.25 | Routine + moderate; 262K context; multimodal (text + image + video) |

Text-only models (e.g. GLM 5): Do not use for any task that requires image input or vision — no photo analysis, screenshots, image-generation tools, or document/chart vision. Route to a vision-capable model (e.g. Kimi K2.5, GPT-4o, Gemini, Claude with vision, GLM-4.5V/4.6V).

Vision-capable Tier 1/2 (e.g. Kimi K2.5): Use for routine or moderate tasks that may involve images — screenshots, photo analysis, docs, image-generation orchestration — without moving to premium vision models.

Tier 2: Mid ($1-5/M tokens)

| Model | Input | Output | Best For |
|-------|-------|--------|----------|
| Claude Sonnet | $3.00 | $15.00 | Balanced performance |
| GPT-4o | $2.50 | $10.00 | Multimodal tasks |
| Gemini Pro | $1.25 | $5.00 | Long context |

Tier 3: Premium ($10-75/M tokens)

| Model | Input | Output | Best For |
|-------|-------|--------|----------|
| Claude Opus | $15.00 | $75.00 | Complex reasoning |
| GPT-4.5 | $75.00 | $150.00 | Frontier tasks |
| o1 | $15.00 | $60.00 | Multi-step reasoning |
| o3-mini | $1.10 | $4.40 | Reasoning on budget |

Prices as of Feb 2026. Check provider docs for current rates.

Task Classification

Before executing any task, classify it:

ROUTINE → Use Tier 1

Requires image/vision → Do not assign to text-only models (GLM 5, etc.). Use a vision-capable model from Tier 1/2 or 3 (e.g. Kimi K2.5, GPT-4o, Gemini, Claude, GLM-4.5V).

Characteristics:

  • Single-step operations
  • Clear, unambiguous instructions
  • No judgment required
  • Deterministic output expected

Examples:

  • File read/write operations
  • Status checks and health monitoring
  • Simple lookups (time, weather, definitions)
  • Formatting and restructuring text
  • List operations (filter, sort, transform)
  • API calls with known parameters
  • Heartbeat and cron tasks
  • URL fetching and basic parsing

MODERATE → Use Tier 2

Characteristics:

  • Multi-step but well-defined
  • Some synthesis required
  • Standard patterns apply
  • Quality matters but isn't critical

Examples:

  • Code generation (standard patterns)
  • Summarization and synthesis
  • Draft writing (emails, docs, messages)
  • Data analysis and transformation
  • Multi-file operations
  • Tool orchestration
  • Code review (non-security)
  • Search and research tasks

COMPLEX → Use Tier 3

Characteristics:

  • Novel problem solving required
  • Multiple valid approaches
  • Nuanced judgment calls
  • High stakes or irreversible
  • Previous attempts failed

Examples:

  • Multi-step debugging
  • Architecture and design decisions
  • Security-sensitive code review
  • Tasks where cheaper model already failed
  • Ambiguous requirements needing interpretation
  • Long-context reasoning (>50K tokens)
  • Creative work requiring originality
  • Adversarial or edge-case handling

Decision Algorithm

function selectModel(task):
    # Rule 1: Vision override (Tier 1/2 includes text-only models)
    if task.requiresImageInput or task.requiresVision:
        return VISION_CAPABLE_MODEL  # e.g. Kimi K2.5, GPT-4o, Gemini, Claude; do not use GLM 5 or other text-only
    
    # Rule 2: Escalation override
    if task.previousAttemptFailed:
        return nextTierUp(task.previousModel)
    
    # Rule 3: Explicit complexity signals
    if task.hasSignal("debug", "architect", "design", "security"):
        return TIER_3
    
    if task.hasSignal("write", "code", "summarize", "analyze"):
        return TIER_2
    
    # Rule 4: Default classification
    complexity = classifyTask(task)
    
    if complexity == ROUTINE:
        return TIER_1
    elif complexity == MODERATE:
        return TIER_2
    else:
        return TIER_3
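A runnable Python rendering of the pseudocode above, for readers who want to adapt it. The `Task` fields, the tier labels, and the `classify_task` heuristic are illustrative stand-ins, not part of the published skill:

```python
from dataclasses import dataclass

TIER_1, TIER_2, TIER_3 = "tier-1", "tier-2", "tier-3"
VISION_CAPABLE_MODEL = "kimi-k2.5"  # any vision-capable model works here
NEXT_TIER = {TIER_1: TIER_2, TIER_2: TIER_3, TIER_3: TIER_3}

@dataclass
class Task:
    description: str
    requires_vision: bool = False
    previous_attempt_failed: bool = False
    previous_tier: str = TIER_1

def has_signal(task: Task, *signals: str) -> bool:
    text = task.description.lower()
    return any(s in text for s in signals)

def classify_task(task: Task) -> str:
    # Placeholder heuristic: short, imperative descriptions read as routine.
    return "ROUTINE" if len(task.description.split()) <= 6 else "MODERATE"

def select_model(task: Task) -> str:
    # Rule 1: vision override (text-only tiers must never see image tasks)
    if task.requires_vision:
        return VISION_CAPABLE_MODEL
    # Rule 2: escalation override
    if task.previous_attempt_failed:
        return NEXT_TIER[task.previous_tier]
    # Rule 3: explicit complexity signals
    if has_signal(task, "debug", "architect", "design", "security"):
        return TIER_3
    if has_signal(task, "write", "code", "summarize", "analyze"):
        return TIER_2
    # Rule 4: default classification
    complexity = classify_task(task)
    if complexity == "ROUTINE":
        return TIER_1
    if complexity == "MODERATE":
        return TIER_2
    return TIER_3
```

In practice you would replace `classify_task` with whatever complexity classifier your agent framework exposes; the rule ordering is what matters.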

Behavioral Rules

For Main Session

  1. Default to Tier 2 for interactive work
  2. Suggest downgrade when doing routine work: "This is routine - I can handle this on a cheaper model or spawn a sub-agent."
  3. Request upgrade when stuck: "This needs more reasoning power. Switching to [premium model]."

For Sub-Agents

  1. Default to Tier 1 unless task is clearly moderate+
  2. Batch similar tasks to amortize overhead
  3. Report failures back to parent for escalation

For Automated Tasks

  1. Heartbeats/monitoring → Always Tier 1
  2. Scheduled reports → Tier 1 or 2 based on complexity
  3. Alert responses → Start Tier 2, escalate if needed
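The sub-agent and automated-task rules above (start cheap, report failures upward, escalate if needed) can be sketched as a simple escalation loop. `run_with_escalation` and the `run` hook are illustrative names, not part of the skill:

```python
TIERS = ["tier-1", "tier-2", "tier-3"]

def run_with_escalation(task, run, start_tier="tier-1"):
    """Try the task on start_tier; if the attempt fails (returns None),
    escalate one tier and retry, per the behavioral rules above."""
    for tier in TIERS[TIERS.index(start_tier):]:
        result = run(task, tier)  # run() is a hypothetical model-invocation hook
        if result is not None:
            return tier, result
    raise RuntimeError(f"task failed on every tier: {task!r}")
```

For alert responses you would call this with `start_tier="tier-2"`, matching rule 3.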

Communication Patterns

When suggesting model changes, use clear language:

Downgrade suggestion:

"This looks like routine file work. Want me to spawn a sub-agent on DeepSeek for this? Same result, fraction of the cost."

Upgrade request:

"I'm hitting the limits of what I can figure out here. This needs Opus-level reasoning. Switching up."

Explaining hierarchy:

"I'm running the heavy analysis on Sonnet while sub-agents fetch the data on DeepSeek. Keeps costs down without sacrificing quality where it matters."

Cost Impact

Assuming 100K tokens/day average usage:

| Strategy | Monthly Cost | Notes |
|----------|--------------|-------|
| Pure Opus | ~$225 | Maximum capability, maximum spend |
| Pure Sonnet | ~$45 | Good default for most work |
| Pure DeepSeek | ~$8 | Cheap but limited on hard problems |
| Hierarchy (80/15/5) | ~$19 | Best of all worlds |

The 80/15/5 split:

  • 80% routine tasks on Tier 1 (~$6)
  • 15% moderate tasks on Tier 2 (~$7)
  • 5% complex tasks on Tier 3 (~$6)

Result: 10x cost reduction vs pure premium, with equivalent quality on complex tasks.
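At the stated ~100K tokens/day (about 3M tokens/month), the pure-strategy rows imply blended per-million-token rates that make the arithmetic easy to check. `monthly_cost` and `RATES` below are assumptions derived from the table above, not published prices:

```python
def monthly_cost(tokens_m, split, rates):
    """Monthly spend for a tier split, given blended $/M-token rates."""
    return sum(tokens_m * share * rates[tier] for tier, share in split.items())

# Blended rates implied by the pure-strategy rows: $225, $45, $8 per 3M tokens.
RATES = {"tier1": 8 / 3, "tier2": 45 / 3, "tier3": 225 / 3}

print(round(monthly_cost(3, {"tier3": 1.0}, RATES)))  # pure Opus   -> 225
print(round(monthly_cost(3, {"tier2": 1.0}, RATES)))  # pure Sonnet -> 45
print(round(monthly_cost(3, {"tier1": 1.0}, RATES)))  # pure DeepSeek -> 8
```

Plugging your own observed rates and split into `monthly_cost` gives a quick estimate for any routing mix.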

Integration Examples

OpenClaw

# config.yml - set default model
model: anthropic/claude-sonnet-4

# In session, switch models
/model opus  # upgrade for complex task
/model deepseek  # downgrade for routine

# Spawn sub-agent on cheap model
sessions_spawn:
  task: "Fetch and parse these 50 URLs"
  model: deepseek

OpenRouter (Tier 1 with vision or text-only):

# Tier 1 with vision — Kimi K2.5 (multimodal)
model: openrouter/moonshotai/kimi-k2.5
# Heartbeats, cron, image-involving tasks: K2.5 handles text and vision.

# Tier 1 text-only — GLM 5 (no vision)
# model: openrouter/z-ai/glm-5  # exact ID TBD on OpenRouter Z.AI
# Routine text-only only; for image tasks use Kimi K2.5 or another vision-capable model.

Claude Code

# In CLAUDE.md or project instructions
When spawning background agents, use claude-3-haiku for:
- File operations
- Simple searches  
- Status checks

Reserve claude-sonnet-4 for:
- Code generation
- Analysis tasks

General Agent Systems

def get_model_for_task(task_description: str) -> str:
    routine_signals = ['read', 'fetch', 'check', 'list', 'format', 'status']
    complex_signals = ['debug', 'architect', 'design', 'security', 'why']
    
    desc_lower = task_description.lower()
    
    if any(signal in desc_lower for signal in complex_signals):
        return "claude-opus-4"
    elif any(signal in desc_lower for signal in routine_signals):
        return "deepseek-v3"
    else:
        return "claude-sonnet-4"
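A quick sanity check of the routing heuristic; the function is repeated here so the snippet runs standalone:

```python
def get_model_for_task(task_description: str) -> str:
    routine_signals = ['read', 'fetch', 'check', 'list', 'format', 'status']
    complex_signals = ['debug', 'architect', 'design', 'security', 'why']
    desc_lower = task_description.lower()
    # Complex signals win over routine ones, mirroring the tier precedence.
    if any(signal in desc_lower for signal in complex_signals):
        return "claude-opus-4"
    elif any(signal in desc_lower for signal in routine_signals):
        return "deepseek-v3"
    return "claude-sonnet-4"

print(get_model_for_task("debug the failing CI job"))  # claude-opus-4
print(get_model_for_task("fetch the latest status"))   # deepseek-v3
print(get_model_for_task("draft a launch email"))      # claude-sonnet-4 (no signal)
```

Note the substring matching is crude: "why" inside an unrelated word would trigger Tier 3, which is a reasonable bias toward over-provisioning.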

Anti-Patterns

DON'T:

  • Run heartbeats on Opus
  • Use premium models for file I/O
  • Keep expensive model when task is clearly routine
  • Spawn sub-agents on premium models by default
  • Use GLM 5 (or any text-only Tier 1/2 model) for image/vision tasks — e.g. photo analysis, screenshot understanding, image-generation skills, or any tool that takes image input

DO:

  • Start mid-tier, adjust based on task
  • Spawn helpers on cheapest viable model
  • Escalate explicitly when stuck
  • Track cost per task type to optimize further

Extending This Skill

To customize for your use case:

  1. Adjust tier definitions based on your provider/budget
  2. Add domain-specific signals to classification rules
  3. Track actual complexity vs predicted to improve heuristics
  4. Set budget alerts to catch runaway premium usage
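Item 4 can be sketched as a small tracker. `BudgetTracker` and its `premium_share_limit` threshold are hypothetical; tune both to your own budget:

```python
class BudgetTracker:
    """Track per-tier spend and flag runaway premium usage."""

    def __init__(self, monthly_budget: float, premium_share_limit: float = 0.4):
        self.budget = monthly_budget
        self.limit = premium_share_limit
        self.spend = {"tier-1": 0.0, "tier-2": 0.0, "tier-3": 0.0}

    def record(self, tier: str, cost: float) -> None:
        self.spend[tier] += cost

    def alerts(self) -> list:
        total = sum(self.spend.values())
        out = []
        if total > self.budget:
            out.append("over budget")
        if total and self.spend["tier-3"] / total > self.limit:
            out.append("premium share exceeds limit")
        return out
```

Call `record` after each completed task and poll `alerts` from your monitoring loop; a rising premium share is usually the first sign of misrouted routine work.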

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLAW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T01:03:49.084Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
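The `retryPolicy` above maps to a small wrapper. `call_with_retry` and `RetryableError` are illustrative names; the caller decides which failures (HTTP 429/503, network timeouts, per `retryableConditions`) count as retryable:

```python
import time

BACKOFF_MS = [500, 1500, 3500]  # mirrors backoffMs in the retryPolicy above

class RetryableError(Exception):
    """Stands in for HTTP_429, HTTP_503, or NETWORK_TIMEOUT."""

def call_with_retry(fetch, max_attempts=3, sleep=time.sleep):
    """Run fetch(); on a retryable failure, back off per BACKOFF_MS and retry."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except RetryableError:
            if attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000)
```

In use, `fetch` would wrap an HTTP GET of the `snapshotUrl` above and raise `RetryableError` for the listed status codes and timeouts.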

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "handle",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "figure",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:handle|supported|profile capability:figure|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Zscole",
    "href": "https://github.com/zscole/model-hierarchy-skill",
    "sourceUrl": "https://github.com/zscole/model-hierarchy-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:06:32.391Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:06:32.391Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "321 GitHub stars",
    "href": "https://github.com/zscole/model-hierarchy-skill",
    "sourceUrl": "https://github.com/zscole/model-hierarchy-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:06:32.391Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/zscole-model-hierarchy-skill/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
