Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
AI video generation skill for OpenClaw using LTX-2 API by Lightricks. Generates cinematic videos from text prompts, images, or audio. Use when: (1) user says "make a video", "generate video", "animate this", "text to video", "image to video", "audio to video", "cineclaw", (2) user wants to create social media content, ads, music videos, or cinematic clips, (3) user provides an image and wants it animated, (4) user provides audio and wants synced video. Supports: text-to-video (T2V), image-to-video (I2V), audio-to-video (A2V), camera presets, AI audio sync, prompt enhancement, cost estimation. NOT for: video editing, video trimming, adding subtitles, or screen recording. Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
cineclaw is best for general automation workflows where OpenClaw compatibility matters.
Not Ideal For
Deterministic execution pipelines, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLAW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Babakarto
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/babakarto/cineclaw.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Babakarto
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
bash
python3 scripts/ltx_generate.py --mode t2v --prompt "your prompt" --duration 6 --model ltx-2-fast
bash
python3 scripts/ltx_generate.py --mode i2v --image /path/to/image.jpg --prompt "motion description" --duration 6
bash
python3 scripts/ltx_generate.py --mode a2v --audio /path/to/audio.mp3 --prompt "visual description" --model ltx-2-pro
text
[Scene/Environment]. [Subject description]. [Action/Motion]. [Camera movement]. [Style/aesthetic].
text
Intimate close-up portrait. Halation. 35mm film look. High-end fashion. Older woman with silver hair, wearing a dark velvet coat. She slowly turns her head toward the camera, soft smile forming. Shallow depth of field, warm studio lighting. Static camera, slight rack focus.
text
Gorilla gaming streamer. UGC style footage. Wearing headphones. Static camera wide-shot. Gorilla is gaming using mouse and keyboard.
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLAW
Editorial quality
ready
---
name: cineclaw
description: >
  AI video generation skill for OpenClaw using LTX-2 API by Lightricks. Generates cinematic videos from text prompts, images, or audio. Use when: (1) user says "make a video", "generate video", "animate this", "text to video", "image to video", "audio to video", "cineclaw", (2) user wants to create social media content, ads, music videos, or cinematic clips, (3) user provides an image and wants it animated, (4) user provides audio and wants synced video. Supports: text-to-video (T2V), image-to-video (I2V), audio-to-video (A2V), camera presets, AI audio sync, prompt enhancement, cost estimation. NOT for: video editing, video trimming, adding subtitles, or screen recording.
---
Generate cinematic AI videos from text, images, or audio via the LTX-2 API.
This skill uses Python for all API calls — works on Windows, Mac, and Linux.
For LTX-2 API details (endpoints, parameters, errors): read references/ltx-api.md.
For advanced prompting techniques and community tips: read references/prompting-guide.md.
For deep research on LTX-2 from X/Twitter community: read references/ltx2-prompt-guide-advanced.md.
LTX-2 API charges per second of generated video:
| Model | 1080p | 1440p | 4K |
|-------|-------|-------|-----|
| ltx-2-fast | ~$0.02/s | ~$0.04/s | ~$0.08/s |
| ltx-2-pro | ~$0.05/s | ~$0.10/s | ~$0.20/s |
Always estimate and state cost before generating:
Tell the user the estimated cost before running. Start with ltx-2-fast for drafts,
switch to ltx-2-pro only for final output or when user requests high quality.
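The pricing rules above can be sketched as a small estimator. This is an illustrative helper, not part of the skill itself; the rates come from the pricing table and the function name is hypothetical.

```python
# Approximate USD per second of generated video, from the pricing table above.
RATES = {
    "ltx-2-fast": {"1080p": 0.02, "1440p": 0.04, "4k": 0.08},
    "ltx-2-pro":  {"1080p": 0.05, "1440p": 0.10, "4k": 0.20},
}

def estimate_cost(model: str, resolution: str, duration_s: float) -> float:
    """Return the estimated generation cost in USD, rounded to cents."""
    return round(RATES[model][resolution] * duration_s, 2)

# A 6-second 1080p draft on ltx-2-fast vs. a 10-second 4K final on ltx-2-pro.
draft_cost = estimate_cost("ltx-2-fast", "1080p", 6)
final_cost = estimate_cost("ltx-2-pro", "4k", 10)
```

Stating `draft_cost` and `final_cost` to the user before running matches the "always estimate and state cost" rule.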
Before running any generation, ensure ltx_generate.py exists in the working directory.
If it doesn't exist, create it with the content from scripts/ltx_generate.py.
This script handles all API calls, file uploads, error handling, and output saving. It must be created ONCE and then reused for all generations.
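The create-once-then-reuse rule can be sketched as follows. This is a hedged illustration: the helper name is hypothetical, and it assumes the script body is available as a string when first needed.

```python
from pathlib import Path

SCRIPT = Path("scripts/ltx_generate.py")

def ensure_generator(script_body: str) -> Path:
    """Create scripts/ltx_generate.py once; later calls reuse the existing file."""
    if not SCRIPT.exists():
        SCRIPT.parent.mkdir(parents=True, exist_ok=True)
        SCRIPT.write_text(script_body)
    return SCRIPT
```

Subsequent generations invoke the returned path directly instead of rewriting the script.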
User describes a scene → model generates video.
python3 scripts/ltx_generate.py --mode t2v --prompt "your prompt" --duration 6 --model ltx-2-fast
User provides a still image → model animates it.
python3 scripts/ltx_generate.py --mode i2v --image /path/to/image.jpg --prompt "motion description" --duration 6
User provides audio → model generates synced video.
python3 scripts/ltx_generate.py --mode a2v --audio /path/to/audio.mp3 --prompt "visual description" --model ltx-2-pro
Note: A2V only works with the ltx-2-pro model.
The key to great LTX-2 output is prompt quality. Before sending ANY user prompt to the API, enhance it using these rules:
Every prompt should include:
[Scene/Environment]. [Subject description]. [Action/Motion]. [Camera movement]. [Style/aesthetic].
Critical rule: Start with the scene description FIRST. This prevents morphing and scene changes.
When enhancing a user's prompt:
- Film look: 35mm film, Kodak film grain, halation, shallow depth of field
- Lighting: golden hour, cinematic lighting, volumetric light
- Shot types: close-up, wide shot, aerial, tracking shot
- Era styles: 1940s film, 70s TV, 80s news, 90s sitcom, 2000s found footage

Cinematic portrait:
Intimate close-up portrait. Halation. 35mm film look. High-end fashion.
Older woman with silver hair, wearing a dark velvet coat. She slowly turns
her head toward the camera, soft smile forming. Shallow depth of field,
warm studio lighting. Static camera, slight rack focus.
UGC-style social video (20s):
Gorilla gaming streamer. UGC style footage. Wearing headphones.
Static camera wide-shot. Gorilla is gaming using mouse and keyboard.
Epic aerial:
Epic aerial shot through rocky canyon. Camera flies through misty mountains
at dawn. Volumetric fog, golden hour light streaming through peaks.
Cinematic wide angle lens. Smooth forward dolly movement.
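The five-part prompt structure can be assembled mechanically, keeping the scene first to avoid morphing. A minimal sketch, assuming each part is supplied as free text (the function name is hypothetical):

```python
def build_prompt(scene: str, subject: str, action: str, camera: str, style: str) -> str:
    """Join the five parts in the recommended order, scene first.

    Each part is normalized to end with exactly one period.
    """
    parts = (scene, subject, action, camera, style)
    return " ".join(p.strip().rstrip(".") + "." for p in parts)

prompt = build_prompt(
    scene="Epic aerial shot through rocky canyon",
    subject="Misty mountains at dawn",
    action="Camera flies forward through volumetric fog",
    camera="Smooth forward dolly movement",
    style="Cinematic wide angle lens, golden hour light",
)
```

Because the scene string is always first in the joined output, the critical "scene description FIRST" rule holds regardless of how the caller orders arguments.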
Key difference from T2V: I2V needs MORE specificity about motion.
Common problem: Unwanted camera movement. LTX-2 tends to dolly/zoom in I2V even when you don't want it.
Solution: Be VERY explicit about camera:
"Static camera. No camera movement. Locked-off shot.""Subtle breathing movement. Static locked camera."LTX-2's killer feature — native lip sync and audio-driven video.
Key rules:
"Character speaks" if issues"Animate this image so that in the first second..."LTX-2 maintains consistent characters in T2V when descriptive enough:
@character_name in LTX Studio| Model | Speed | Quality | Best For | |-------|-------|---------|----------| | ltx-2-fast | ~5-15s | Good | Drafts, iteration, previews, social content | | ltx-2-pro | ~30-90s | Cinematic | Final output, client work, A2V |
Decision rules:
- Drafts and iteration: ltx-2-fast (cheap, fast feedback)
- Final output: ltx-2-pro
- A2V: ltx-2-pro (required)
- Workflow: iterate with ltx-2-fast until happy, then render one ltx-2-pro final

Resolutions: 1920x1080, 2560x1440, 3840x2160
Duration: ltx-2-fast at 1080p/25fps: 6–20 seconds; ltx-2-pro: 6–10 seconds

Include in prompts for consistent camera work:
| Preset | Prompt Keywords |
|--------|----------------|
| Static | Static camera. Locked-off shot. No camera movement. |
| Dolly In | Slow dolly forward. Camera gradually pushes in. |
| Dolly Out | Camera slowly pulls back. Reverse dolly. |
| Pan Right | Camera pans slowly to the right. |
| Pan Left | Camera pans slowly to the left. |
| Crane Up | Camera cranes upward revealing the scene. |
| Handheld | Handheld camera. Slight shake. Documentary style. |
| Aerial | Aerial drone shot. Smooth forward movement. |
| Tracking | Camera tracks alongside the subject. |
| Orbit | Camera slowly orbits around the subject. |
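The preset table maps naturally onto a lookup that appends camera keywords to a prompt. A sketch under the assumption that presets are selected by a short key (the dict and function names are hypothetical; keyword strings are taken from the table):

```python
# Camera preset keywords, copied from the preset table above (subset shown).
CAMERA_PRESETS = {
    "static":    "Static camera. Locked-off shot. No camera movement.",
    "dolly_in":  "Slow dolly forward. Camera gradually pushes in.",
    "dolly_out": "Camera slowly pulls back. Reverse dolly.",
    "pan_right": "Camera pans slowly to the right.",
    "handheld":  "Handheld camera. Slight shake. Documentary style.",
    "orbit":     "Camera slowly orbits around the subject.",
}

def with_camera(prompt: str, preset: str) -> str:
    """Append the preset's keywords so camera intent is explicit in the prompt."""
    return f"{prompt} {CAMERA_PRESETS[preset]}"
```

Always appending an explicit preset also addresses the unwanted-camera-movement problem noted for I2V.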
@element_name

| Error | Cause | Action |
|-------|-------|--------|
| 401 Unauthorized | API key invalid | Check LTX_API_KEY env var |
| 402 Payment Required | Insufficient credits | Add credits at console.ltx.video |
| 413 Payload Too Large | Image/audio file too big | Compress or resize input |
| 429 Rate Limited | Too many requests | Wait 60 seconds and retry |
| 500 Server Error | LTX API issue | Wait 5 min and retry |
| Morphing/scene change | Scene not described first | Move scene description to start of prompt |
| Unwanted camera movement | I2V default behavior | Add explicit "Static camera. No camera movement." |
| Frozen first frame (A2V) | Audio sync issue | Add audio buffer, or prompt "Animate in the first second..." |
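The HTTP rows of the troubleshooting table can be sketched as a status dispatcher. This is an illustrative mapping, not the skill's actual error handler; wait times and messages come from the table, and the function name is hypothetical:

```python
# Retryable statuses and how long to wait, per the troubleshooting table.
RETRY_AFTER_S = {429: 60, 500: 300}

# Statuses that need user action rather than a retry.
FATAL_ACTIONS = {
    401: "Check the LTX_API_KEY environment variable.",
    402: "Add credits at console.ltx.video.",
    413: "Compress or resize the input image/audio.",
}

def handle_status(status: int) -> str:
    """Translate an HTTP status into the recommended next action."""
    if status in RETRY_AFTER_S:
        return f"retry in {RETRY_AFTER_S[status]}s"
    if status in FATAL_ACTIONS:
        return FATAL_ACTIONS[status]
    return "ok" if status == 200 else "unhandled"
```

The non-HTTP rows (morphing, unwanted camera movement, frozen first frame) are prompt-level fixes and stay out of this dispatcher by design.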
Save generated videos to:
~/Desktop/cineclaw/output-{mode}-{YYYY-MM-DD-HHmmss}.mp4
Include in the filename: mode (t2v/i2v/a2v), date, model used.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/babakarto-cineclaw/snapshot"
curl -s "https://xpersona.co/api/v1/agents/babakarto-cineclaw/contract"
curl -s "https://xpersona.co/api/v1/agents/babakarto-cineclaw/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/babakarto-cineclaw/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/babakarto-cineclaw/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/babakarto-cineclaw/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/babakarto-cineclaw/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/babakarto-cineclaw/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/babakarto-cineclaw/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-16T23:42:30.768Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Babakarto",
"href": "https://github.com/babakarto/cineclaw",
"sourceUrl": "https://github.com/babakarto/cineclaw",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T02:24:04.993Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/babakarto-cineclaw/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/babakarto-cineclaw/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T02:24:04.993Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/babakarto-cineclaw/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/babakarto-cineclaw/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]