Crawler Summary

video-helper-skill answer-first brief

Video Analyzer: Analyze videos to extract structured knowledge including mind maps, key highlights, and timestamps. Use when users want to analyze a video (YouTube, Bilibili, or local file), extract video content, generate video summaries, or understand video structure. Triggers: 'analyze video', 'summarize video', 'extract from video', 'video mind map', '视频分析' (analyze video), '总结视频' (summarize video). Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.

Freshness

Last checked 2/25/2026

Best For

video-helper-skill is best for video summarization and knowledge-extraction workflows (YouTube, Bilibili, or local files) where OpenClaw compatibility matters.

Not Ideal For

Workflows that require deterministic execution: the capability contract metadata is missing or unavailable, so behavior cannot be verified in advance.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 89/100

video-helper-skill

Video Analyzer: Analyze videos to extract structured knowledge including mind maps, key highlights, and timestamps (YouTube, Bilibili, or local files).

OpenClaw · self-declared

Public facts

4

Change events

1

Artifacts

0

Freshness

Feb 25, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.

Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Feb 25, 2026

Vendor

Ldj Creat

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.

Setup snapshot

git clone https://github.com/LDJ-creat/video-helper-skill.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Ldj Creat

profile · medium
Observed Feb 25, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Feb 25, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

# Analyze a YouTube video
python scripts/analyze_video.py "https://www.youtube.com/watch?v=VIDEO_ID"

# Analyze a Bilibili video (output in Chinese)
python scripts/analyze_video.py "https://www.bilibili.com/video/BV1..." --lang zh

# Analyze a local video file
python scripts/analyze_video.py "/path/to/video.mp4" --title "My Video"

bash

curl http://localhost:8000/api/v1/health

bash

curl http://localhost:8000/api/v1/health

bash

python scripts/analyze_video.py "VIDEO_URL_OR_PATH" [options]

json

{
  "resultId": "...",
  "projectId": "...",
  "contentBlocks": [
    {
      "blockId": "...",
      "title": "Chapter Title",
      "startMs": 0,
      "endMs": 60000,
      "highlights": [
        {
          "highlightId": "...",
          "text": "Key point extracted from video",
          "startMs": 12000,
          "endMs": 18000,
          "keyframes": [{"assetId": "...", "timeMs": 15000}]
        }
      ]
    }
  ],
  "mindmap": {
    "nodes": [...],
    "edges": [...]
  },
  "assetRefs": [...]
}

text

User: Analyze this video for me: https://www.youtube.com/watch?v=dQw4w9WgXcQ

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: video-analyzer
description: Analyze videos to extract structured knowledge including mind maps, key highlights, and timestamps. Use when users want to analyze a video (YouTube, Bilibili, or local file), extract video content, generate video summaries, or understand video structure. Triggers: 'analyze video', 'summarize video', 'extract from video', 'video mind map', '视频分析', '总结视频'.

Video Analyzer

Analyze videos using the video-helper backend service to generate structured knowledge artifacts: mind maps, content blocks, highlights with timestamps, and keyframes.

Quick Start

# Analyze a YouTube video
python scripts/analyze_video.py "https://www.youtube.com/watch?v=VIDEO_ID"

# Analyze a Bilibili video (output in Chinese)
python scripts/analyze_video.py "https://www.bilibili.com/video/BV1..." --lang zh

# Analyze a local video file
python scripts/analyze_video.py "/path/to/video.mp4" --title "My Video"

Prerequisites

  1. Backend Service: The skill uses the video-helper backend at http://localhost:8000
  • By default, scripts/analyze_video.py will auto-start the backend if it is not running (local dev only: localhost:8000).
  • For actual processing, ensure WORKER_ENABLE=1 is set in the backend environment.
  2. Frontend (Optional): For viewing results in a browser at http://localhost:3000

    • Start with: cd apps/web && pnpm dev
  3. Environment Variables:

    • VIDEO_HELPER_API_URL: Backend API URL (default: http://localhost:8000/api/v1)
    • VIDEO_HELPER_FRONTEND_URL: Frontend URL (default: http://localhost:3000)

Workflow

Step 1: Verify Backend Health

Before creating analysis jobs, ensure the backend is running:

curl http://localhost:8000/api/v1/health

Expected response: {"ok": true}
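The check above can be scripted rather than run by hand. A minimal sketch, assuming only the endpoint and the {"ok": true} response shape documented here; the helper names are illustrative, not part of the skill:

```python
import json
from urllib.request import urlopen

HEALTH_URL = "http://localhost:8000/api/v1/health"

def is_healthy(body: bytes) -> bool:
    # The backend counts as healthy only when the body is JSON with "ok": true.
    try:
        return json.loads(body).get("ok") is True
    except (ValueError, AttributeError):
        return False

def check_backend(url: str = HEALTH_URL, timeout: float = 5.0) -> bool:
    # Any connection or protocol error is treated as "backend not running".
    try:
        with urlopen(url, timeout=timeout) as resp:
            return is_healthy(resp.read())
    except OSError:
        return False
```

If `check_backend()` returns False, start the backend (or rely on the script's auto-start) before submitting jobs.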

Step 2: Submit Video for Analysis

Use the analyze_video.py script. The script handles:

  • URL validation and source type detection
  • Job creation via API
  • Progress polling until completion
  • Result URL generation

python scripts/analyze_video.py "VIDEO_URL_OR_PATH" [options]

Options:

  • --title, -t: Video title (auto-detected for URLs)
  • --lang, -l: Output language for analysis (e.g., zh, en)
  • --llm-mode: external (default) or backend
  • --no-auto-start-backend: Disable auto-start when backend is down
  • --timeout: Max wait time in seconds (default: 600)
  • --json: Output result as JSON

Step 3: Retrieve Results

After successful analysis, the script outputs:

  • Project ID: Unique identifier for the analyzed video
  • Result API: GET /api/v1/projects/{projectId}/results/latest
  • Frontend URL: Browser link to view interactive results

Result Structure

The analysis result (GET /api/v1/projects/{projectId}/results/latest) contains:

{
  "resultId": "...",
  "projectId": "...",
  "contentBlocks": [
    {
      "blockId": "...",
      "title": "Chapter Title",
      "startMs": 0,
      "endMs": 60000,
      "highlights": [
        {
          "highlightId": "...",
          "text": "Key point extracted from video",
          "startMs": 12000,
          "endMs": 18000,
          "keyframes": [{"assetId": "...", "timeMs": 15000}]
        }
      ]
    }
  ],
  "mindmap": {
    "nodes": [...],
    "edges": [...]
  },
  "assetRefs": [...]
}
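Given that structure, a consumer can flatten content blocks and their highlights into a timestamped outline. A hedged sketch using only the field names shown in the example above; `outline` is an illustrative helper:

```python
def outline(result: dict) -> list[str]:
    # Flatten contentBlocks/highlights into indented "mm:ss title" lines.
    def ts(ms: int) -> str:
        s = ms // 1000
        return f"{s // 60:02d}:{s % 60:02d}"

    lines = []
    for block in result.get("contentBlocks", []):
        lines.append(f"{ts(block['startMs'])} {block['title']}")
        for h in block.get("highlights", []):
            lines.append(f"  {ts(h['startMs'])} {h['text']}")
    return lines
```

Feeding the example payload above through `outline` yields one line per chapter and an indented line per highlight, each prefixed with its start time.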

Unified Storage

All analysis results are stored in the backend's DATA_DIR:

  • Database: DATA_DIR/core.sqlite3 (projects, jobs, results, assets)
  • Project Files: DATA_DIR/{project_id}/ (videos, audio, keyframes)

This ensures results from both the skill and the frontend are unified and accessible from either interface.

Error Handling

Common errors and solutions:

| Error | Cause | Solution |
| --- | --- | --- |
| Backend service unavailable | Backend not running | Start the backend service |
| Unsupported video URL | URL not supported by yt-dlp | Try a different video source |
| LLM credentials missing | No LLM API configured | Set LLM_API_BASE and LLM_API_KEY or configure via frontend |
| Job status = blocked | Using --llm-mode external and plan not submitted yet | Call GET /api/v1/jobs/{jobId}/plan-request, ask the editor AI to generate a plan JSON, then POST /api/v1/jobs/{jobId}/plan |
| Job polling timed out | Analysis took too long | Increase --timeout or check backend logs |
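The "Job polling timed out" row corresponds to a bounded polling loop. A sketch of such a loop; "blocked" comes from the table above, but the other terminal state names are assumptions, not confirmed backend values:

```python
import time

# "blocked" appears in the error table; "succeeded"/"failed" are assumed names.
TERMINAL = {"succeeded", "failed", "blocked"}

def poll_job(get_status, timeout_s: float = 600, interval_s: float = 5) -> str:
    # Mirror the script's --timeout behavior: poll until terminal or deadline.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval_s)
    raise TimeoutError("Job polling timed out; increase --timeout or check backend logs")
```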

Examples

Example 1: Analyze YouTube Video

User: Analyze this video for me: https://www.youtube.com/watch?v=dQw4w9WgXcQ

Action:

python scripts/analyze_video.py "https://www.youtube.com/watch?v=dQw4w9WgXcQ"

Output:

Analysis completed successfully!
Project ID: 2d2f...
Result API: http://localhost:8000/api/v1/projects/2d2f.../results/latest
View in browser: http://localhost:3000/project/2d2f...

Example 2: Analyze Bilibili Video in Chinese

User: 分析这个B站视频 ("Analyze this Bilibili video for me"): https://www.bilibili.com/video/BV1xx411c7mD

Action:

python scripts/analyze_video.py "https://www.bilibili.com/video/BV1xx411c7mD" --lang zh

Example 3: Analyze Local Video File

User: I have a video at /home/user/lecture.mp4, please analyze it

Action:

python scripts/analyze_video.py "/home/user/lecture.mp4" --title "Lecture Video"

Supported Video Sources

  • YouTube: youtube.com and youtu.be URLs
  • Bilibili: bilibili.com and b23.tv URLs
  • Generic URLs: Any URL supported by yt-dlp
  • Local Files: .mp4, .mkv, .webm, .mov
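The source-type detection mentioned in Step 2 can be sketched from this list. The host and extension sets below come straight from the bullets; the function name and exact matching rules are assumptions about, not a copy of, the script's logic:

```python
from pathlib import Path
from urllib.parse import urlparse

VIDEO_EXTS = {".mp4", ".mkv", ".webm", ".mov"}

def detect_source(value: str) -> str:
    # Classify input as youtube, bilibili, generic URL (yt-dlp), or local file.
    parsed = urlparse(value)
    if parsed.scheme in ("http", "https"):
        host = parsed.netloc.lower().removeprefix("www.")
        if host in ("youtube.com", "youtu.be"):
            return "youtube"
        if host in ("bilibili.com", "b23.tv"):
            return "bilibili"
        return "generic"  # anything else is handed to yt-dlp
    if Path(value).suffix.lower() in VIDEO_EXTS:
        return "local"
    raise ValueError(f"Unsupported video source: {value}")
```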

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples

curl -s "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/trust"
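The same endpoints can be called from Python instead of curl. A minimal sketch with the URLs copied from the examples above; `endpoint` and `fetch` are illustrative names:

```python
import json
from urllib.request import urlopen

BASE = "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill"
SECTIONS = ("snapshot", "contract", "trust")

def endpoint(section: str) -> str:
    # Only the three documented sections are valid.
    if section not in SECTIONS:
        raise ValueError(f"unknown section: {section!r}")
    return f"{BASE}/{section}"

def fetch(section: str) -> dict:
    # Equivalent of the curl examples above, returning parsed JSON.
    with urlopen(endpoint(section), timeout=10) as resp:
        return json.load(resp)
```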

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

You need deterministic execution; the contract metadata required to guarantee it is missing or unavailable.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOSAionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW

Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T00:55:31.394Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
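The retryPolicy block above can be implemented directly. A sketch that honors maxAttempts, backoffMs, and caller-classified retryable conditions; `call_with_retry` is an illustrative name, and error classification is left to the caller because the guide does not specify an exception model:

```python
import time

RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}
BACKOFF_MS = (500, 1500, 3500)

def call_with_retry(request, is_retryable, max_attempts: int = 3,
                    backoff_ms=BACKOFF_MS, sleep=time.sleep):
    # Retry only failures the caller classifies as retryable,
    # sleeping backoff_ms[attempt] between consecutive attempts.
    for attempt in range(max_attempts):
        try:
            return request()
        except Exception as exc:
            if not is_retryable(exc) or attempt == max_attempts - 1:
                raise
            sleep(backoff_ms[attempt] / 1000.0)
```

Injecting `sleep` makes the policy testable without real delays.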

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Ldj Creat",
    "href": "https://github.com/LDJ-creat/video-helper-skill",
    "sourceUrl": "https://github.com/LDJ-creat/video-helper-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:27:56.620Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:27:56.620Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/ldj-creat-video-helper-skill/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
