Crawler Summary

content-claw: answer-first brief

Analyze content (YouTube videos, articles, tweet threads, podcasts) and deliver a watch/read/skip verdict with extracted insights. Trigger when user shares a URL with keywords like "review", "analyze", "worth my time", "should I watch", "should I read", or "review this". Also triggers on "content review", "content claw", or just a URL followed by a question mark. Do NOT use for simple link sharing without review intent, bookmarking URLs for later, or when user just wants to open/visit a link.

Capability contract not published. No trust telemetry is available yet. 17 GitHub stars reported by the source. Last updated 4/15/2026.

Freshness

Last checked 4/15/2026

Best For

content-claw is best for workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLAW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 100/100

content-claw

Analyze content (YouTube videos, articles, tweet threads, podcasts) and deliver a watch/read/skip verdict with extracted insights. Trigger when user shares a URL with keywords like "review", "analyze", "worth my time", "should I watch", "should I read", or "review this". Also triggers on "content review", "content claw", or just a URL followed by a question mark. Do NOT use for simple link sharing without review intent, bookmarking URLs for later, or when user just wants to open/visit a link.

OpenClaw (self-declared)

Public facts

5

Change events

1

Artifacts

0

Freshness

Apr 15, 2026

Verified: editorial-content · No verified compatibility signals · 17 GitHub stars

Capability contract not published. No trust telemetry is available yet. 17 GitHub stars reported by the source. Last updated 4/15/2026.

17 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Sene1337

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified: editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 17 GitHub stars reported by the source. Last updated 4/15/2026.

Setup snapshot

git clone https://github.com/sene1337/content-claw.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified: editorial-content
Vendor (1)

Vendor

Sene1337

Source type: profile · Confidence: medium
Observed Apr 15, 2026
Compatibility (1)

Protocol compatibility

OpenClaw

Source type: contract · Confidence: medium
Observed Apr 15, 2026
Adoption (1)

Adoption signal

17 GitHub stars

Source type: profile · Confidence: medium
Observed Apr 15, 2026
Security (1)

Handshake status

UNKNOWN

Source type: trust · Confidence: medium
Observed: unknown
Integration (1)

Crawlable docs

6 indexed pages on the official domain

Source type: search_document · Confidence: medium
Observed Apr 15, 2026

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared: agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared: GITHUB OPENCLAW

Extracted files

0

Examples

1

Snippets

0

Languages

typescript

Parameters

Executable Examples

markdown

## Source
- **Title:** [Content title]
- **Author:** [Creator name] (@handle if applicable)
- **URL:** [Clean URL - tracking params stripped]
- **Date:** [Publication date]
- **Length:** [Duration or word count]
- **Shared by:** [Who shared it] via [channel], [date]
- **Context:** [1-2 sentences: what was happening when the link was shared - what project, conversation, or train of thought prompted it. This is for future memory recall.]

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared: GITHUB OPENCLAW

Docs source

GITHUB OPENCLAW

Editorial quality

ready


Full README

name: content-claw
description: Analyze content (YouTube videos, articles, tweet threads, podcasts) and deliver a watch/read/skip verdict with extracted insights. Trigger when user shares a URL with keywords like "review", "analyze", "worth my time", "should I watch", "should I read", or "review this". Also triggers on "content review", "content claw", or just a URL followed by a question mark. Do NOT use for simple link sharing without review intent, bookmarking URLs for later, or when user just wants to open/visit a link.

Content Claw 🦞

Analyze external content against the user's goals, frameworks, and time value. Extract the insights so they don't have to consume the full content unless it's truly worth their time.

Folder structure:

  • docs/content-claw/caught/ - 📖 Read and 🎬 Watch verdicts. The keepers.
  • docs/content-claw/released/skim/ - 👀 Skim verdicts. Monthly batch review, then purged.
  • docs/content-claw/released/skip/ - ⏭️ Skip verdicts. Auto-purge after 14 days.

File naming scheme: {category}--{source}--{title-slug}.md

  • Category: ai-agents, bitcoin, health, media, business, tech, finance, culture, ops (extend as needed)
  • Source type: article, video, podcast, thread, paper
  • Title slug: Short descriptive slug using hyphens
  • Separator: Double-dash -- (doesn't conflict with hyphens in slugs)
  • Example: ai-agents--article--clawvault-memory-architecture.md

At a glance you can see category, format, and topic without opening the file.
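The naming scheme above is mechanical enough to express as a small helper. A sketch; `review_filename` is an illustrative name, not part of the skill:

```python
def review_filename(category: str, source_type: str, title_slug: str) -> str:
    """Compose {category}--{source}--{title-slug}.md; the double-dash separator
    keeps hyphens inside slugs unambiguous."""
    return f"{category}--{source_type}--{title_slug}.md"
```

Using the document's own example, `review_filename("ai-agents", "article", "clawvault-memory-architecture")` yields `ai-agents--article--clawvault-memory-architecture.md`.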

Workflow

Step -1: Duplicate Check ⭐⭐

Before ANY extraction work, check if this URL has already been reviewed:

  1. grep -rl "<cleaned-URL>" docs/content-claw/ (search all caught + released folders)
  2. Also check partial URL matches (e.g., tweet ID, YouTube video ID) in case the URL format differs
  3. If found: Tell the user it's already been reviewed, show the file path and verdict. Ask if they want a re-review or update.
  4. If not found: Proceed to Step 0.

This prevents duplicate files and wasted extraction work. No exceptions.
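For agents without shell access, the grep step can be sketched in pure Python. `find_existing_reviews` is a hypothetical helper equivalent to `grep -rl "<cleaned-URL>" docs/content-claw/`:

```python
from pathlib import Path

def find_existing_reviews(url: str, root: str = "docs/content-claw") -> list[str]:
    """Return every review file (caught + released) that mentions the
    cleaned URL, so a re-review can be offered instead of duplicated work."""
    hits = [str(p) for p in Path(root).rglob("*.md")
            if url in p.read_text(errors="ignore")]
    return sorted(hits)
```

Partial-match checks (tweet ID, YouTube video ID) would call the same helper with the ID substring instead of the full URL.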

Step 0: Pre-Flight Checks ⭐

Verify tools and set extraction strategy before starting. See references/environment-setup.md for full checklist.

Step 0.5: Sanitize URLs 🧹

Before fetching or searching ANY shared URL, strip tracking parameters:

  • Twitter/X: Remove ?s=, &s=, ?t=, &t=, ?ref_src=, &ref_src=
  • YouTube: Remove &si=, ?si=, &feature=, ?feature=, &pp=
  • Universal: Remove utm_source, utm_medium, utm_campaign, utm_term, utm_content, utm_id, fbclid, gclid, igshid, ref, mc_cid, mc_eid
  • Rule: Strip everything after ? or & that matches these patterns. Keep essential params (e.g., YouTube v=, t= for timestamps, list= for playlists).

Example: https://youtube.com/watch?v=abc123&si=tracking_garbage&t=120 → https://youtube.com/watch?v=abc123&t=120
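A minimal sketch of the sanitizer, assuming the universal and host-specific parameter lists above (the function and table names are illustrative). Note that `t` is tracking on Twitter/X but a timestamp on YouTube, so stripping has to be host-aware:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Universal tracking params from the rules above; anything not listed is kept.
UNIVERSAL = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content",
             "utm_id", "fbclid", "gclid", "igshid", "ref", "mc_cid", "mc_eid"}
# Host-specific tracking params.
PER_HOST = {
    "twitter.com": {"s", "t", "ref_src"},
    "x.com": {"s", "t", "ref_src"},
    "youtube.com": {"si", "feature", "pp"},
    "youtu.be": {"si", "feature", "pp"},
}

def sanitize_url(url: str) -> str:
    """Strip tracking params, keeping essential ones (YouTube v=, t=, list=)."""
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    drop = UNIVERSAL | PER_HOST.get(host, set())
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in drop]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```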

Step 1: Detect Content Type

From the URL, determine:

  • YouTube video โ†’ needs transcription
  • Article/blog โ†’ needs extraction
  • Tweet/thread โ†’ needs thread unwinding
  • Podcast โ†’ needs transcription (if audio available)

Duration check (YouTube/podcasts): Before spawning extraction, check video length via web_search or yt-dlp metadata (yt-dlp --print duration [URL]). If >60 minutes, warn the user:

"This is a [X]-minute video. Full transcription will take a while. Want me to: (a) do full extraction, (b) extract intro + key chapters only, or (c) search for existing summaries/highlights?"

For videos <60 minutes, proceed automatically.
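The 60-minute gate reduces to a pure function once the duration is known in seconds (e.g., from yt-dlp metadata). A sketch; the return labels are illustrative, not part of the skill:

```python
def transcription_plan(duration_seconds: int) -> str:
    """Gate from Step 1: videos over 60 minutes need user confirmation
    before full transcription; shorter ones proceed automatically."""
    if duration_seconds > 60 * 60:
        return f"ask-user:{round(duration_seconds / 60)}min"
    return "auto-extract"
```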

Step 2: Extract Content

โš ๏ธ Context Window Protection Rule: The purpose of this skill is to keep the main agent's context clean. Heavy extraction work MUST happen in a sub-agent, not in the main session.

  • Tier 1 (web-based, <3k tokens): OK to run in main agent - lightweight metadata and search results
  • Tier 2 (transcript/content extraction, 5k+ tokens): MUST spawn a sub-agent. No exceptions. The main agent's job is to orchestrate and review, not to do heavy extraction.
  • Model routing: Sub-agents should use a cheaper model (e.g., Sonnet) for extraction work. The main agent handles judgment (verdict, relevance, actions) on its primary model.

If Tier 1 produces enough content for a solid review → proceed to Step 3. If Tier 1 is insufficient → spawn a sub-agent for Tier 2. Do NOT do Tier 2 work yourself.


Use the fallback hierarchy; stop at the first tier that works:

Tier 1: Lightweight web extraction (~500-2.5k tokens) ⭐ Try first, main agent

  • YouTube: web_search "[video title] transcript" or web_search "[video-id] transcript"
  • YouTube: web_fetch on known transcript services (e.g., kome.ai/api/transcript?url=[URL])
  • YouTube: oEmbed for metadata (https://www.youtube.com/oembed?url=[URL]&format=json)
  • Articles: web_fetch with markdown extraction
  • Tweets: api.fxtwitter.com/[user]/status/[id]
  • If sufficient content extracted → skip to Step 3

Tier 2: Sub-agent extraction (~5k tokens) - ALWAYS a sub-agent, NEVER main agent

  • Spawn sub-agent (use cheaper model like Sonnet) with explicit instructions (see references/sub-agent-prompt.md)
  • Critical: After spawning, WAIT for the sub-agent to complete. Do NOT attempt parallel extraction.
  • Verify output file exists and has substantive content before proceeding
  • Sub-agent session is disposable - its context gets thrown away after extraction, keeping your main context clean

Tier 3: Ask user for help (~1k tokens) - Last resort

  • "I couldn't extract this content automatically. Could you paste the transcript/key points, or should I work from the title and description only?"

Retry Limits (hard caps):

  • web_search: max 2 attempts per URL
  • web_fetch: max 1 attempt per specific URL
  • yt-dlp: max 1 attempt (if blocked, it's blocked - don't retry)
  • Total retries across all methods: abort if >5
  • If a method returns an error, log it and move to next tier - don't repeat
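The hard caps can be tracked per URL with a small budget object. A sketch under the limits above; the class name and API are illustrative:

```python
class RetryBudget:
    """Per-URL retry caps from Step 2: web_search x2, web_fetch x1,
    yt-dlp x1, and at most 5 attempts total across all methods."""
    LIMITS = {"web_search": 2, "web_fetch": 1, "yt-dlp": 1}

    def __init__(self) -> None:
        self.used: dict[str, int] = {}
        self.total = 0

    def allow(self, method: str) -> bool:
        """Record an attempt if the caps permit it; False means move on."""
        if self.total >= 5 or self.used.get(method, 0) >= self.LIMITS.get(method, 1):
            return False
        self.used[method] = self.used.get(method, 0) + 1
        self.total += 1
        return True
```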

Output Verification (after any extraction):

  1. Check that the output file exists at the expected path
  2. Check that it has substantive content (not empty, not just a plan/outline)
  3. If verification fails after Tier 2, fall through to Tier 3 - don't retry the same approach
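Both verification checks fit in one predicate. A minimal sketch; the 500-character threshold for "substantive" is an assumption, not from the skill:

```python
from pathlib import Path

def verify_extraction(path: str, min_chars: int = 500) -> bool:
    """Output Verification: the file exists at the expected path and holds
    substantive content rather than an empty stub or bare outline."""
    p = Path(path)
    return p.is_file() and len(p.read_text(errors="ignore").strip()) >= min_chars
```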

Step 3: Review Against Frameworks

After verified extraction, review the content against:

  1. Relevance Filter: Does this relate to the user's active goals and priorities? Check USER.md if it exists.
  2. Novelty Check: Does the user (or their agent) already have this knowledge? Check existing docs for overlap.
  3. Action Density: How many actionable insights per minute of content? High action density = worth consuming. Low = extract and move on.
  4. Time Value Test: Is consuming this content the highest-value use of the user's time, or can the insight be captured faster from the extraction?

Step 4: Deliver Verdict

Format the response as:

Verdict emoji + label:

  • โญ๏ธ Skip โ€” Already known or not relevant. Here's the 1-2 things worth noting.
  • ๐Ÿ‘€ Skim โ€” Some useful bits but not worth full attention. Here are the highlights.
  • ๐Ÿ“– Read โ€” Worth reading the summary. Key frameworks extracted below.
  • ๐ŸŽฌ Watch โ€” Visual/demo content that loses value in text. Worth the time investment.

Then provide:

📌 Actions (things to do based on this content):

  • [Immediate actions, research tasks, things to add to systems]

💡 Insights (worth knowing, no action needed):

  • [Key frameworks, mental models, interesting data points]

Also note:

  • What's new vs what's already known
  • Source quality assessment (credible? experienced? selling something?)

Step 5: Save Review

Source block (required at top of every review file):

## Source
- **Title:** [Content title]
- **Author:** [Creator name] (@handle if applicable)
- **URL:** [Clean URL - tracking params stripped]
- **Date:** [Publication date]
- **Length:** [Duration or word count]
- **Shared by:** [Who shared it] via [channel], [date]
- **Context:** [1-2 sentences: what was happening when the link was shared - what project, conversation, or train of thought prompted it. This is for future memory recall.]

Filing rules by verdict:

| Verdict | Destination | Retention |
|---|---|---|
| 📖 Read | docs/content-claw/caught/{cat}--{src}--{slug}.md | Permanent |
| 🎬 Watch | docs/content-claw/caught/{cat}--{src}--{slug}.md | Permanent |
| 👀 Skim | docs/content-claw/released/skim/{cat}--{src}--{slug}.md | Monthly batch review |
| ⏭️ Skip | docs/content-claw/released/skip/{cat}--{src}--{slug}.md | Auto-purge after 14 days |

  • Naming: {category}--{source-type}--{title-slug}.md (see top of file for categories/sources)
  • Create directories as needed
  • Never overwrite existing files - append a number if slug already exists
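The never-overwrite rule can be sketched as a path helper (the name and the `-2`, `-3` numbering style are illustrative assumptions):

```python
from pathlib import Path

def unique_review_path(directory: str, filename: str) -> Path:
    """Return a path that will not clobber an existing review, appending
    -2, -3, ... to the slug until the name is free."""
    base = Path(directory) / filename
    candidate, n = base, 2
    while candidate.exists():
        candidate = base.with_name(f"{base.stem}-{n}{base.suffix}")
        n += 1
    return candidate
```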

Review file contents:

  • Source block (as above)
  • Verdict and reasoning
  • Full extracted content or detailed summary
  • Actions and insights (separated)

Step 6: Released Folder Maintenance 🧹

Auto-purge (agent-driven, no human input needed):

  • Files in released/skip/ older than 14 days → delete automatically during heartbeats or periodic maintenance
  • No confirmation needed. Skips were definitively not worth it.
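The 14-day auto-purge can be sketched with file modification times as the age signal (an assumption; the skill does not specify how age is tracked):

```python
import time
from pathlib import Path

def purge_skips(skip_dir: str = "docs/content-claw/released/skip",
                max_age_days: int = 14) -> list[str]:
    """Delete skip reviews older than max_age_days and report what was
    removed, for logging during heartbeats or periodic maintenance."""
    cutoff = time.time() - max_age_days * 86400
    removed = [p for p in Path(skip_dir).glob("*.md") if p.stat().st_mtime < cutoff]
    for p in removed:
        p.unlink()
    return sorted(p.name for p in removed)
```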

Monthly batch review (low-friction human decision):

  • Once per month, scan released/skim/ for accumulated reviews
  • Surface a numbered summary to the user:

🦞 Content Claw - Monthly Purge: [N] skims from [month]. Promote or release?

  1. "Article title" โ€” one-line context reminder
  2. "Article title" โ€” one-line context reminder ...

Reply with numbers to promote to caught/, or "clear all"

  • Promoted files move to docs/content-claw/caught/
  • Remaining files get deleted
  • Track last purge date to avoid double-prompting

Known Limitations

  • YouTube bot detection: YouTube blocks yt-dlp from datacenter/cloud IPs. If you're running on a VPS or in a sandbox, yt-dlp will likely fail with 403 errors. Use web-based transcript extraction (Tier 1) instead.
  • Rate limiting: web_search providers may rate-limit after repeated queries. Space out searches or reduce query count.
  • Long videos (>2hr): Whisper transcription is CPU-intensive. For very long content, prefer searching for existing transcripts.

Notes

  • Keep verdicts concise โ€” mobile-friendly formatting (no tables, use bullet lists)
  • The user's time is the scarcest resource. Default to extracting value, not recommending consumption.
  • When in doubt, extract the insights and skip. The bar for "watch the whole thing" should be high.
  • This skill works for any agent โ€” it adapts to whatever knowledge base and doc structure already exists.

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing: GITHUB OPENCLAW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/sene1337-content-claw/snapshot"
curl -s "https://xpersona.co/api/v1/agents/sene1337-content-claw/contract"
curl -s "https://xpersona.co/api/v1/agents/sene1337-content-claw/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing: runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing: no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared: protocol-neighbors
GITHUB_REPOS: activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS: cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS: AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS: CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/sene1337-content-claw/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/sene1337-content-claw/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/sene1337-content-claw/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/sene1337-content-claw/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/sene1337-content-claw/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/sene1337-content-claw/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T01:55:12.897Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
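The retryPolicy above can be applied client-side. A hedged sketch with an injected `fetch` callable (so no real network access is assumed; `call_with_retry` is an illustrative name, not a published API):

```python
import time

def call_with_retry(fetch, url, max_attempts=3, backoff_ms=(500, 1500, 3500),
                    retryable=("HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"),
                    sleep=time.sleep):
    """Retry only the listed conditions, backing off per the schedule.
    fetch(url) is expected to return a (status, body) pair."""
    for attempt in range(max_attempts):
        status, body = fetch(url)
        if status not in retryable:
            return status, body
        if attempt < max_attempts - 1:
            sleep(backoff_ms[attempt] / 1000)
    return status, body
```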

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "see",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "the",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:see|supported|profile capability:the|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Sene1337",
    "href": "https://github.com/sene1337/content-claw",
    "sourceUrl": "https://github.com/sene1337/content-claw",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T01:14:56.271Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/sene1337-content-claw/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/sene1337-content-claw/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T01:14:56.271Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "17 GitHub stars",
    "href": "https://github.com/sene1337/content-claw",
    "sourceUrl": "https://github.com/sene1337/content-claw",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T01:14:56.271Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/sene1337-content-claw/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/sene1337-content-claw/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub ยท GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
