Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Daily Debrief - Autonomous Research Digest --- name: daily-debrief description: OpenClaw skill for scheduled research digests (papers, GitHub Trending, and industry news). Default domain: food safety; configurable for any research field. --- Daily Debrief - Autonomous Research Digest You are an autonomous research assistant. Your job is to wake up daily, find relevant new papers and GitHub repositories in the user's research domain, analyze them intelligently. Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
daily-debrief is best for customize and track workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require deterministic execution: contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 2/25/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Chenhaoq87
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/chenhaoq87/daily-debrief.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Chenhaoq87
Protocol compatibility
OpenClaw
Adoption signal
1 GitHub star
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
json
{
"domain": {
"name": "Food Safety Research",
"description": "AI/ML applications in food safety...",
"keywords": {
"technical": ["machine learning", "deep learning", ...],
"domain": ["food safety", "pathogen", "salmonella", ...]
},
"categories": ["Pathogen Detection", "Quality Assessment", ...]
},
"llm": {
"provider": "gemini",
"apiKey": "..."
},
"output": {
"telegram": { "enabled": true, "chatId": "..." }
}
}
bash
node scripts/fetch_openalex.js <date> <keyword1,keyword2,...> [perPage] # Returns: JSON array of papers
bash
node scripts/fetch_arxiv.js <date> <cs.LG,cs.CV,...> <keyword1,keyword2,...> # Returns: JSON array of papers
json
{
"source": "OpenAlex|arXiv",
"id": "...",
"doi": "...",
"title": "...",
"abstract": "...",
"authors": [{"name": "...", "id": "..."}],
"venue": "...",
"citationCount": 0,
"publicationDate": "2026-01-26",
"openAccess": true,
"url": "https://..."
}
bash
node scripts/fetch_github_trending.js [limit] [language] # Scrapes github.com/trending for today's trending repos # Returns: JSON array of repositories
json
{
"source": "GitHub",
"id": "...",
"name": "owner/repo",
"description": "...",
"url": "https://github.com/...",
"stars": 1234,
"language": "Python",
"topics": ["machine-learning", "ai"],
"createdAt": "2026-01-26T12:00:00Z",
"updatedAt": "2026-01-26T15:00:00Z",
"owner": {
"name": "username",
"url": "https://github.com/username"
}
}
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
Daily Debrief - Autonomous Research Digest

---
name: daily-debrief
description: OpenClaw skill for scheduled research digests (papers, GitHub Trending, and industry news). Default domain: food safety; configurable for any research field.
---

Daily Debrief - Autonomous Research Digest
You are an autonomous research assistant. Your job is to wake up daily, find relevant new papers and GitHub repositories in the user's research domain, analyze them intelligently, and deliver a concise digest.
When triggered (usually via cron), you:
Read config.json first (copy from config.example.json if it does not exist).
Key sections:
{
"domain": {
"name": "Food Safety Research",
"description": "AI/ML applications in food safety...",
"keywords": {
"technical": ["machine learning", "deep learning", ...],
"domain": ["food safety", "pathogen", "salmonella", ...]
},
"categories": ["Pathogen Detection", "Quality Assessment", ...]
},
"llm": {
"provider": "gemini",
"apiKey": "..."
},
"output": {
"telegram": { "enabled": true, "chatId": "..." }
}
}
Users can customize:
OpenAlex:
node scripts/fetch_openalex.js <date> <keyword1,keyword2,...> [perPage]
# Returns: JSON array of papers
arXiv:
node scripts/fetch_arxiv.js <date> <cs.LG,cs.CV,...> <keyword1,keyword2,...>
# Returns: JSON array of papers
Both return standardized paper objects:
{
"source": "OpenAlex|arXiv",
"id": "...",
"doi": "...",
"title": "...",
"abstract": "...",
"authors": [{"name": "...", "id": "..."}],
"venue": "...",
"citationCount": 0,
"publicationDate": "2026-01-26",
"openAccess": true,
"url": "https://..."
}
GitHub Trending (Scraped):
node scripts/fetch_github_trending.js [limit] [language]
# Scrapes github.com/trending for today's trending repos
# Returns: JSON array of repositories
Returns standardized repository objects:
{
"source": "GitHub",
"id": "...",
"name": "owner/repo",
"description": "...",
"url": "https://github.com/...",
"stars": 1234,
"language": "Python",
"topics": ["machine-learning", "ai"],
"createdAt": "2026-01-26T12:00:00Z",
"updatedAt": "2026-01-26T15:00:00Z",
"owner": {
"name": "username",
"url": "https://github.com/username"
}
}
Combined media fetcher:
node scripts/fetch_media_sources.js [--days N] [--since YYYY-MM-DD] [--sources all|fsn,fsm,fda,fsis,cdc]
# Fetches from all 5 media sources, deduplicates, and merges
# Returns: JSON array of standardized media items
Individual source scripts:
# Food Safety News (RSS)
node scripts/fetch_food_safety_news.js [--days N] [--since YYYY-MM-DD]
# Food Safety Magazine (RSS, multiple topics)
node scripts/fetch_food_safety_magazine.js [--days N] [--since YYYY-MM-DD] [--topics 305,306,309,311,312,313]
# FDA Food Recalls (openFDA API)
node scripts/fetch_fda_recalls.js [--days N] [--since YYYY-MM-DD] [--limit N]
# USDA FSIS Recalls (scrape + openFDA fallback for meat/poultry/eggs)
node scripts/fetch_fsis_recalls.js [--days N] [--since YYYY-MM-DD]
# CDC Outbreak Investigations (multi-strategy: API + scrape)
node scripts/fetch_cdc_outbreaks.js [--days N] [--since YYYY-MM-DD]
All return standardized media objects:
{
"source_type": "media",
"sources": ["Food Safety News", "FDA"],
"source_urls": ["https://...", "https://..."],
"title": "Firm Recalls Product (Class I)",
"summary": "Products may be contaminated with...",
"date": "2026-01-28",
"category": "Recall|Outbreak|Policy|Research|Alert",
"severity": "high|medium|low",
"pathogen": "Salmonella",
"product": "ground beef",
"states": ["CA", "NY"],
"recall_number": "H-0393-2026",
"tags": ["Microbiological"]
}
Source key: fsn=Food Safety News, fsm=Food Safety Magazine, fda=FDA, fsis=USDA FSIS, cdc=CDC
Deduplication: The combined script automatically merges duplicate items (same recall across multiple sources) by matching on recall number, title similarity, and pathogen+product combo. Merged items list all contributing sources.
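The merge rule above can be sketched in a few lines. This is an illustrative reconstruction, not the script's actual code: `sameItem` and `mergeMediaItems` are hypothetical names, and the "title similarity" here is simplified to a normalized exact match.

```javascript
// Treat two media items as duplicates when they share a recall number,
// the same pathogen+product combo, or a near-identical title.
function sameItem(a, b) {
  if (a.recall_number && a.recall_number === b.recall_number) return true;
  if (a.pathogen && a.product &&
      a.pathogen === b.pathogen && a.product === b.product) return true;
  const norm = (t) => (t || '').toLowerCase().replace(/[^a-z0-9 ]/g, '').trim();
  return norm(a.title) !== '' && norm(a.title) === norm(b.title);
}

function mergeMediaItems(items) {
  const merged = [];
  for (const item of items) {
    const dup = merged.find((m) => sameItem(m, item));
    if (dup) {
      // merged items list all contributing sources, as described above
      dup.sources = [...new Set([...dup.sources, ...item.sources])];
      dup.source_urls = [...new Set([...dup.source_urls, ...item.source_urls])];
    } else {
      merged.push({ ...item });
    }
  }
  return merged;
}
```

A real implementation would use fuzzier title matching (e.g. token overlap), but the merge semantics are the same.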
Load data/papers_history.jsonl to see what papers you've already reported.
Add new papers to avoid duplicates:
echo '{"id":"...","date":"2026-01-26"}' >> data/papers_history.jsonl
Load authors_watchlist.json:
{
"authors": [
{"name": "Jane Smith", "openalex_id": "A1234567890", "note": "..."}
]
}
Flag papers by these authors with a 👤 emoji.
Usually yesterday:
const date = new Date();
date.setDate(date.getDate() - 1);
const yesterday = date.toISOString().split('T')[0]; // "2026-01-26"
Use all three sources in parallel:
# OpenAlex
node scripts/fetch_openalex.js 2026-01-26 "food safety,pathogen,salmonella"
# arXiv
node scripts/fetch_arxiv.js 2026-01-26 "cs.LG,cs.CV" "food,pathogen,dairy"
# GitHub Trending (scrapes github.com/trending)
node scripts/fetch_github_trending.js 30
Combine papers into one array and keep repos separate. Repos are scraped from GitHub's official trending page - no filtering needed, just take top N.
Fetch industry news, recalls, and outbreak reports in parallel with papers:
# Fetch all media sources for the past 1 day (yesterday's news)
node scripts/fetch_media_sources.js --days 1
This fetches from 5 sources simultaneously:
The script automatically deduplicates items that appear in multiple sources (e.g., the same Salmonella recall in both FDA data and Food Safety News coverage), merging them into a single entry with all source citations.
Store media items separately from papers; they go in the "Industry News & Alerts" section of the digest.
No keyword pre-filtering! Pass ALL fetched papers directly to LLM for analysis.
Hard gate: If abstract is empty/missing (or only whitespace), reject immediately (set relevance=1) and exclude from the digest. Do not send these to the LLM.
For each paper, analyze deeply:
Prompt yourself:
Analyze this paper for ${config.domain.name} relevance:
Title: ${paper.title}
Abstract: ${paper.abstract.substring(0, 600)}
Rate 1-5 (scope: AI/ML applied to food systems + AI for scientific research automation):
- 5 = Core focus on AI/ML for food safety/quality OR AI/GenAI systems that automate scientific production/research (e.g., Paper2Agent, virtual lab, agentic discovery, automated experiment design)
- 4 = Strong AI/ML application to food systems (dairy, meat, produce, pathogens) OR concrete AI system improving scientific workflows
- 3 = Moderate relevance (AI/ML methods applied to food systems or food-adjacent agriculture). Must involve actual AI/ML techniques.
- 2 = Weak (AI or food safety mentioned but not central; no actual AI methodology)
- 1 = Not relevant (no AI/ML component, or unrelated domain)
Also categorize into ONE of: ${config.domain.categories.join(', ')}
Respond with JSON:
{"relevance": <1-5>, "category": "<category>", "reasoning": "<one sentence>"}
Parse your own response and extract the analysis.
Only keep papers scoring >= config.filters.minRelevanceScore.
Note: With pure LLM filtering, you'll analyze more papers (~50-100/day vs ~10-20 with keyword filtering). This increases API costs slightly (~$0.15-0.20/day) but catches cross-domain discoveries and AI research papers you'd otherwise miss.
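The gate-then-threshold flow can be sketched as follows. This is a minimal sketch under assumptions: each paper is assumed to carry the model's raw reply in a hypothetical `llmReply` field, and `parseAnalysis`/`filterPapers` are illustrative names, not the skill's actual API.

```javascript
// Parse the model's JSON reply ({"relevance", "category", "reasoning"});
// unparseable replies are scored 1 and therefore excluded.
function parseAnalysis(reply) {
  try {
    return JSON.parse(reply);
  } catch {
    return { relevance: 1, category: null, reasoning: 'unparseable reply' };
  }
}

function filterPapers(papers, config) {
  const min = config.filters.minRelevanceScore;
  return papers
    // hard gate: papers with empty/whitespace abstracts never make the digest
    .filter((p) => (p.abstract || '').trim() !== '')
    .map((p) => ({ ...p, analysis: parseAnalysis(p.llmReply) }))
    .filter((p) => p.analysis.relevance >= min);
}
```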
Take the top 5 repos by stars from the fetch results. No LLM filtering needed - just show what's genuinely trending across all of tech/GitHub for that day.
Load data/papers_history.jsonl and skip papers you've already seen (by DOI or ID).
For repos, you can track them similarly in data/repos_history.jsonl (create if needed).
For each paper, check if any author matches watchlist (by name or OpenAlex ID).
Flag with isWatchlistAuthor: true and include author name.
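The watchlist check above can be sketched like this. Assumptions: matching is by exact OpenAlex ID or case-insensitive name, and `flagWatchlist` is an illustrative name rather than the skill's actual helper.

```javascript
// Flag each paper whose author list intersects the watchlist, attaching
// isWatchlistAuthor and the matched author's name as described above.
function flagWatchlist(papers, watchlist) {
  const ids = new Set(watchlist.authors.map((a) => a.openalex_id));
  const names = new Set(watchlist.authors.map((a) => a.name.toLowerCase()));
  return papers.map((p) => {
    const hit = (p.authors || []).find(
      (a) => ids.has(a.id) || names.has((a.name || '').toLowerCase())
    );
    return {
      ...p,
      isWatchlistAuthor: Boolean(hit),
      watchlistAuthor: hit ? hit.name : null,
    };
  });
}
```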
Create Telegram-formatted message with both papers and repos:
*Daily Research Debrief (${date})*
Found ${papers.length} new AI/${domain} papers (X OpenAlex, Y arXiv)
(Category breakdown: X Pathogen Detection, Y Quality Assessment)
───────────────────────────
📄 *${title}*
⭐⭐⭐⭐ | 📦 ${category}
👤 *${watchlistAuthor}* | 🔓 | 📈 ${citations} citations | 📅
${date}
_${venue}_
${abstract.substring(0, 200)}...
[Read Full Paper](${url})
───────────────────────────
*🔥 Top 5 Trending Repos (Today)*
💻 *${repo.name}*
⭐ ${repo.stars} stars | ${repo.language}
${repo.description}
[View Repository](${repo.url})
*🚨 Industry News & Alerts (Past Day)*
(Group by category: Recalls first, then Outbreaks, then Policy/Research)
🔴 *${title}* (${severity})
📰 ${sources.join(' + ')} | 📅
${date}
🦠 ${pathogen} | 🥩 ${product} | 📍 ${states.join(', ')}
${summary.substring(0, 200)}...
[Read More](${source_urls[0]})
───────────────────────────
_(After listing all sections, add an LLM-generated summary)_
**Why these matter to you:**
${one_paragraph_summary_explaining_relevance_to_your_research}
(Analyzed ${totalCandidates} paper candidates, ${totalRepoCandidates} repos, ${mediaItems} media items)
Category emojis (papers):
Category emojis (media items):
Severity indicators (media items):
Limit to config.filters.maxPapersPerDigest top papers and top 5 trending repos.
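The final truncation step can be sketched as below. The sort key (relevance first, citations as tie-breaker) is an assumption; the doc only specifies the caps, and `capDigest` is an illustrative name.

```javascript
// Cap the digest at config.filters.maxPapersPerDigest papers and the
// top 5 repos by stars.
function capDigest(papers, repos, config) {
  const topPapers = [...papers].sort(
    (a, b) =>
      b.analysis.relevance - a.analysis.relevance ||
      b.citationCount - a.citationCount
  );
  const topRepos = [...repos].sort((a, b) => b.stars - a.stars);
  return {
    papers: topPapers.slice(0, config.filters.maxPapersPerDigest),
    repos: topRepos.slice(0, 5),
  };
}
```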
Telegram:
Use the message tool:
message({
action: 'send',
channel: 'telegram',
target: config.output.telegram.chatId,
message: digest
})
File:
Save to ${config.output.filePath}/digest_${date}.txt
Append each reported paper to data/papers_history.jsonl:
echo '{"id":"${paper.id}","doi":"${paper.doi}","date":"${date}","title":"${paper.title}"}' >> data/papers_history.jsonl
Append each reported repo to data/repos_history.jsonl:
echo '{"id":"${repo.id}","name":"${repo.name}","date":"${date}"}' >> data/repos_history.jsonl
After delivering the digest, save media items to history for deduplication and memory sync:
# For each media item included in the digest:
echo '{"source_type":"media","sources":["Food Safety News","FDA"],"source_urls":["https://..."],"title":"...","summary":"...","date":"2026-01-28","category":"Recall","severity":"high","pathogen":"Salmonella","product":"ground beef"}' >> data/media_history.jsonl
This tracks what media items have been reported to avoid duplicates in future digests.
After updating all history files, sync EVERYTHING to the user's research memory:
node scripts/sync_to_memory.js
This script syncs:
memory/research/all_papers.json + papers_index.md (full OpenAlex metadata, 🔬 tags)
memory/research/media_history.json + media_index.md (recalls, outbreaks, news, rolling 500-item archive)
memory/research/digest_log.jsonl (timestamp + counts per run)
Why this matters: The user's research memory (memory/research/) is their persistent knowledge base. Every debrief should leave a trace: papers, recalls, outbreaks, and news are all searchable and accessible for future reference.
Recommended: Use the setup script
cd skills/daily-debrief
./scripts/setup.sh
The interactive setup will:
Manual setup: If config.json doesn't exist:
Copy config.example.json to config.json and edit authors_watchlist.json.
Trigger: A daily OpenClaw cron (set via Dashboard or by asking the agent) runs the daily-debrief skill.
You wake up and:
User wakes up to digest in Telegram with papers, repos, AND industry news. Papers automatically added to research memory.
This skill works for ANY research field. Users just edit config.json:
Example: Materials Science
{
"domain": {
"name": "2D Materials Research",
"keywords": {
"technical": ["machine learning", "DFT", "molecular dynamics"],
"domain": ["graphene", "MoS2", "2D materials", "van der Waals"]
},
"categories": ["Synthesis", "Properties", "Applications", "Simulation"]
}
}
Example: Drug Discovery
{
"domain": {
"name": "AI Drug Discovery",
"keywords": {
"technical": ["deep learning", "transformer", "GNN"],
"domain": ["drug discovery", "ADMET", "binding affinity", "molecular"]
},
"categories": ["Target Identification", "Lead Optimization", "Toxicity", "Repurposing"]
}
}
The agent logic stays the same - only keywords and categories change!
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/snapshot"
curl -s "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract"
curl -s "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust"
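The same endpoints can be called programmatically. A hedged sketch, not an official client: it applies the retry policy published in this listing's invocation guide (3 attempts, 500/1500/3500 ms backoff, retrying on HTTP 429/503 and network errors) and assumes Node 18+ where fetch is global.

```javascript
// Backoff schedule from the published retryPolicy.
const BACKOFF_MS = [500, 1500, 3500];

async function fetchWithRetry(url, attempts = 3) {
  let lastErr;
  for (let i = 0; i < attempts; i += 1) {
    try {
      const res = await fetch(url);
      if (res.ok) return res.json();
      lastErr = new Error(`HTTP ${res.status}`);
      // only 429/503 are retryable per the published policy
      if (res.status !== 429 && res.status !== 503) break;
    } catch (err) {
      lastErr = err; // network timeout or DNS failure: retry
    }
    if (i < attempts - 1) {
      await new Promise((resolve) => setTimeout(resolve, BACKOFF_MS[i]));
    }
  }
  throw lastErr;
}

// Example:
// fetchWithRetry('https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/snapshot')
//   .then((snapshot) => console.log(snapshot));
```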
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-16T23:44:22.303Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "customize",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "track",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "this",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:customize|supported|profile capability:track|supported|profile capability:this|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Chenhaoq87",
"href": "https://github.com/chenhaoq87/daily-debrief",
"sourceUrl": "https://github.com/chenhaoq87/daily-debrief",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T02:24:44.629Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T02:24:44.629Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "1 GitHub stars",
"href": "https://github.com/chenhaoq87/daily-debrief",
"sourceUrl": "https://github.com/chenhaoq87/daily-debrief",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T02:24:44.629Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]