Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Deep research workflow that produces comprehensive multi-page reports. Trigger on "deep research", "deep dive", "research report", "trawl", or "DeepTrawler". Combines Perplexity deep research API with source extraction and synthesis into structured research documents. Do NOT use for quick web searches, simple fact-checking, or questions answerable by regular web_search. This is for comprehensive research only (~$0.50-1.00 per query). Capability contract not published. No trust telemetry is available yet. 2 GitHub stars reported by the source. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
deep-trawler is best for deep-research and report-synthesis workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require a published capability contract: contract metadata is missing or unavailable, so deterministic execution cannot be guaranteed.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Capability contract not published. No trust telemetry is available yet. 2 GitHub stars reported by the source. Last updated 2/25/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Sene1337
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 2 GitHub stars reported by the source. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/sene1337/deep-trawler.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Sene1337
Protocol compatibility
OpenClaw
Adoption signal
2 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
5
Snippets
0
Languages
typescript
Parameters
bash
# Run on Mac mini via nodes.run
bash scripts/trawl.sh "your query here" slug-name
text
docs/research/<slug>.md
text
docs/projects/<name>-manifest.md
text
docs/projects/<name>-results.json (or .md, .csv)
bash
bash scripts/checkpoint.sh "scraped batch N of M — <count> URLs done"
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
Deep research workflow that produces comprehensive multi-page reports. Trigger on "deep research", "deep dive", "research report", "trawl", or "DeepTrawler". Combines Perplexity deep research API with source extraction and synthesis into structured research documents. Do NOT use for quick web searches, simple fact-checking, or questions answerable by regular web_search. This is for comprehensive research only (~$0.50-1.00 per query).
Multi-stage deep research workflow. Produces comprehensive reports, not summaries.
Define the research question. Break broad topics into 2-4 focused sub-queries. Each sub-query becomes a separate Perplexity deep research call.
For each sub-query, run via Mac mini (API key only readable there — NEVER from sandbox):
# Run on Mac mini via nodes.run
bash scripts/trawl.sh "your query here" slug-name
Saves raw JSON to logs/trawl-<slug>-raw.json automatically. Do not truncate responses.
Parallel mode (3+ sub-queries): When there are 3+ sub-queries, spawn a coordinator sub-agent per query using sessions_spawn. Each sub-agent runs trawl.sh + source extraction independently. The parent waits for all results, then synthesizes. This cuts wall-clock time by 2-3x.
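sessions_spawn is an OpenClaw tool; as a plain-shell stand-in for the same fan-out/fan-in shape, each sub-query can run as a background job with wait as the synchronization point. A sketch — `trawl` here is a placeholder for `bash scripts/trawl.sh`, and the queries are invented examples:

```shell
# Placeholder for `bash scripts/trawl.sh "<query>" <slug>` so the
# sketch is self-contained; substitute the real script.
trawl() { echo "trawl: $1 -> logs/trawl-$2-raw.json"; }

# Fan-out: one background worker per sub-query.
for q in "history of topic" "economics of topic" "criticism of topic"; do
  trawl "$q" "$(printf '%s' "$q" | tr ' ' '-')" &
done
wait   # fan-in: the parent blocks here, then synthesizes all results
```

The parent only proceeds once every child has exited, which is what lets synthesis see all results at once.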
Extract citations from each response. For the top 5-8 most relevant sources:
web_fetch each URL and save the extracts to logs/trawl-<slug>-sources.md. Combine all trawl results + source content into a structured report:
docs/research/<slug>.md
Report structure:
Target: 2000-5000 words. Dense, not padded.
Each deep research call: ~$0.50-1.00. A 3-query trawl = ~$1.50-3.00. Always state expected cost before running. Get confirmation for >$5 total.
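The cost arithmetic above is simple enough to automate before kicking off a trawl. A sketch using the doc's stated per-call range and the $5 confirmation threshold (function names are invented):

```shell
# estimate_cost N: prints the low/high USD estimate for N
# deep-research calls, using the $0.50-1.00 per-call figure.
estimate_cost() {
  awk -v n="$1" 'BEGIN { printf "%.2f %.2f\n", n * 0.50, n * 1.00 }'
}

# needs_confirmation N: succeeds (exit 0) when the high estimate
# exceeds the $5 threshold, i.e. the user must sign off first.
needs_confirmation() {
  awk -v n="$1" 'BEGIN { exit !(n * 1.00 > 5) }'
}
```

So a 3-query trawl estimates to "1.50 3.00" and runs without a gate, while a 6-query trawl trips the confirmation check.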
| Artifact | Location | In git? |
|----------|----------|---------|
| Raw JSON results | logs/trawl-<slug>-raw.json | ❌ No — gitignored, ephemeral |
| Source extracts | logs/trawl-<slug>-sources.md | ❌ No — gitignored, ephemeral |
| Synthesis report | docs/research/<slug>.md | ✅ Yes — this is the durable artifact |
Never commit raw JSON to git. The synthesis is what matters. If you need raw data again, re-run the trawl — that's what the skill is for. Raw output is a build artifact, not a source file.
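The hygiene rule above maps to two ignore patterns matching the doc's file layout. A sketch; adjust to the repo's actual .gitignore:

```text
# Raw trawl output is an ephemeral build artifact; never committed.
logs/trawl-*-raw.json
logs/trawl-*-sources.md
```

Only docs/research/<slug>.md, the synthesis, stays under version control.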
When scraping more than ~10 URLs in a single workflow (e.g., extracting video metadata from a course platform), standard trawl-and-hold won't work. The scraped HTML will overflow your context window.
Plan first. Write a manifest file listing all target URLs:
docs/projects/<name>-manifest.md
This is your progress tracker. Mark each URL as pending/done/failed.
Batch in groups of 5-10. Scrape one batch, extract the data you need, write results to a file immediately:
docs/projects/<name>-results.json (or .md, .csv)
Flush before next batch. Don't carry raw HTML between batches.
Checkpoint between batches. Use ClawBack Mode 2:
bash scripts/checkpoint.sh "scraped batch N of M — <count> URLs done"
Never hold raw HTML in context. Web pages are 50-400KB each. Extract what you need and discard immediately.
A single web_fetch result can be 100-400K characters. Usable context is ~120K tokens (~480K chars). Three large web pages can fill your entire context window.
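The batch discipline above can be sketched as a manifest-driven loop. File names follow the doc's conventions; `extract_one` is a hypothetical per-URL extractor you would replace with real fetch-and-parse logic:

```shell
# batch_count TOTAL SIZE: number of batches needed (ceiling
# division), e.g. 23 URLs in batches of 5 -> 5 batches.
batch_count() { echo $(( ($1 + $2 - 1) / $2 )); }

# Hypothetical extractor: fetch one URL, keep only the fields you
# need, and discard the raw HTML immediately (never hold it).
extract_one() { echo "{\"url\": \"$1\", \"title\": \"...\"}"; }

# Walk the manifest SIZE urls at a time, appending extracted
# results to the results file after every batch.
process_batches() {
  manifest="$1" results="$2" size="$3"
  total=$(wc -l < "$manifest")
  i=0
  while [ "$i" -lt "$(batch_count "$total" "$size")" ]; do
    sed -n "$(( i*size + 1 )),$(( (i+1)*size ))p" "$manifest" |
      while read -r url; do extract_one "$url" >> "$results"; done
    i=$(( i + 1 ))   # scripts/checkpoint.sh would run here
  done
}
```

Because results are flushed to disk after each batch, a crash mid-run loses at most one batch of work, and context never accumulates raw pages.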
| Artifact | Location | In git? |
|----------|----------|---------|
| URL manifest | docs/projects/<name>-manifest.md | ✅ Yes |
| Extracted results | docs/projects/<name>-results.json | ✅ Yes |
| Raw HTML | Nowhere — extract and discard | ❌ Never persist |
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/snapshot"
curl -s "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/contract"
curl -s "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/trust"
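A defensive caller can mirror the retry policy published in this listing's Invocation Guide JSON (3 attempts, 500/1500/3500 ms backoff, retrying on 429, 503, and timeouts). A shell sketch, assuming curl's -f flag is an acceptable way to surface HTTP errors as failures:

```shell
# fetch_with_retry URL: up to 3 attempts with the listed backoff.
# -s silences progress; -f turns HTTP 4xx/5xx (incl. 429/503) into
# a nonzero exit; --max-time bounds hangs, covering timeouts.
fetch_with_retry() {
  url="$1"
  for delay in 0.5 1.5 3.5; do   # seconds, per retryPolicy.backoffMs
    if curl -sf --max-time 10 "$url"; then
      return 0
    fi
    sleep "$delay"               # back off before the next attempt
  done
  return 1
}
# Usage:
# fetch_with_retry "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/trust"
```

The sketch sleeps once after the final failure as well; trimming that is an easy refinement if latency matters.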
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/sene1337-deep-trawler/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/sene1337-deep-trawler/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/sene1337-deep-trawler/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-16T23:35:55.907Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "be",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "fill",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:be|supported|profile capability:fill|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Sene1337",
"href": "https://github.com/sene1337/deep-trawler",
"sourceUrl": "https://github.com/sene1337/deep-trawler",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T01:47:49.305Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T01:47:49.305Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "2 GitHub stars",
"href": "https://github.com/sene1337/deep-trawler",
"sourceUrl": "https://github.com/sene1337/deep-trawler",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T01:47:49.305Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/sene1337-deep-trawler/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]