Crawler Summary

daily-debrief answer-first brief

Daily Debrief - Autonomous Research Digest. An OpenClaw skill for scheduled research digests (papers, GitHub Trending, and industry news). Default domain: food safety; configurable for any research field. The skill acts as an autonomous research assistant that wakes up daily, finds relevant new papers and GitHub repositories in the user's research domain, and analyzes them intelligently. Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 2/25/2026.

Freshness

Last checked 2/25/2026

Best For

daily-debrief is best for customize and track workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier | GitHub | Safety: 89/100

daily-debrief


OpenClaw | self-declared

Public facts

5

Change events

1

Artifacts

0

Freshness

Feb 25, 2026

Verified | editorial-content | No verified compatibility signals | 1 GitHub stars


1 GitHub stars | Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Feb 25, 2026

Vendor

Chenhaoq87

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified | editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 2/25/2026.

Setup snapshot

git clone https://github.com/chenhaoq87/daily-debrief.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified | editorial-content
Vendor (1)

Vendor

Chenhaoq87

profile | medium
Observed Feb 25, 2026 | Source link | Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract | medium
Observed Feb 25, 2026 | Source link | Provenance
Adoption (1)

Adoption signal

1 GitHub stars

profile | medium
Observed Feb 25, 2026 | Source link | Provenance
Security (1)

Handshake status

UNKNOWN

trust | medium
Observed unknown | Source link | Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document | medium
Observed Apr 15, 2026 | Source link | Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared | agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared | GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

json

{
  "domain": {
    "name": "Food Safety Research",
    "description": "AI/ML applications in food safety...",
    "keywords": {
      "technical": ["machine learning", "deep learning", ...],
      "domain": ["food safety", "pathogen", "salmonella", ...]
    },
    "categories": ["Pathogen Detection", "Quality Assessment", ...]
  },
  "llm": {
    "provider": "gemini",
    "apiKey": "..."
  },
  "output": {
    "telegram": { "enabled": true, "chatId": "..." }
  }
}

bash

node scripts/fetch_openalex.js <date> <keyword1,keyword2,...> [perPage]
# Returns: JSON array of papers

bash

node scripts/fetch_arxiv.js <date> <cs.LG,cs.CV,...> <keyword1,keyword2,...>
# Returns: JSON array of papers

json

{
  "source": "OpenAlex|arXiv",
  "id": "...",
  "doi": "...",
  "title": "...",
  "abstract": "...",
  "authors": [{"name": "...", "id": "..."}],
  "venue": "...",
  "citationCount": 0,
  "publicationDate": "2026-01-26",
  "openAccess": true,
  "url": "https://..."
}

bash

node scripts/fetch_github_trending.js [limit] [language]
# Scrapes github.com/trending for today's trending repos
# Returns: JSON array of repositories

json

{
  "source": "GitHub",
  "id": "...",
  "name": "owner/repo",
  "description": "...",
  "url": "https://github.com/...",
  "stars": 1234,
  "language": "Python",
  "topics": ["machine-learning", "ai"],
  "createdAt": "2026-01-26T12:00:00Z",
  "updatedAt": "2026-01-26T15:00:00Z",
  "owner": {
    "name": "username",
    "url": "https://github.com/username"
  }
}

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared | GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: daily-debrief description: OpenClaw skill for scheduled research digests (papers, GitHub Trending, and industry news). Default domain: food safety; configurable for any research field.

Daily Debrief - Autonomous Research Digest

You are an autonomous research assistant. Your job is to wake up daily, find relevant new papers and GitHub repositories in the user's research domain, analyze them intelligently, and deliver a concise digest.

Your Mission

When triggered (usually via cron), you:

  1. Load config - Understand the user's research domain and preferences
  2. Fetch papers & repos - Get yesterday's papers from OpenAlex and arXiv, plus trending GitHub repos
  2b. Fetch media sources - Get industry news, recalls, and outbreak reports from 5 sources
  3. Analyze relevance - Use your LLM intelligence to score papers/repos 1-5
  4. Check watchlist - Flag papers by tracked authors
  5. Format digest - Create readable Telegram/file output with papers, repos, AND industry news
  6. Deliver - Post to Telegram or save to file
  7. Update history - Track what you've seen

Configuration

Read config.json first (copy from config.example.json if not exists).

Key sections:

{
  "domain": {
    "name": "Food Safety Research",
    "description": "AI/ML applications in food safety...",
    "keywords": {
      "technical": ["machine learning", "deep learning", ...],
      "domain": ["food safety", "pathogen", "salmonella", ...]
    },
    "categories": ["Pathogen Detection", "Quality Assessment", ...]
  },
  "llm": {
    "provider": "gemini",
    "apiKey": "..."
  },
  "output": {
    "telegram": { "enabled": true, "chatId": "..." }
  }
}

Users can customize:

  • Domain name and keywords (adapt to ANY research field)
  • Categories for paper classification
  • LLM provider (gemini/openai/anthropic)
  • Output methods (Telegram, file, both)

Tools Available

Fetch Papers

OpenAlex:

node scripts/fetch_openalex.js <date> <keyword1,keyword2,...> [perPage]
# Returns: JSON array of papers

arXiv:

node scripts/fetch_arxiv.js <date> <cs.LG,cs.CV,...> <keyword1,keyword2,...>
# Returns: JSON array of papers

Both return standardized paper objects:

{
  "source": "OpenAlex|arXiv",
  "id": "...",
  "doi": "...",
  "title": "...",
  "abstract": "...",
  "authors": [{"name": "...", "id": "..."}],
  "venue": "...",
  "citationCount": 0,
  "publicationDate": "2026-01-26",
  "openAccess": true,
  "url": "https://..."
}

Fetch GitHub Trending Repos

GitHub Trending (Scraped):

node scripts/fetch_github_trending.js [limit] [language]
# Scrapes github.com/trending for today's trending repos
# Returns: JSON array of repositories

Returns standardized repository objects:

{
  "source": "GitHub",
  "id": "...",
  "name": "owner/repo",
  "description": "...",
  "url": "https://github.com/...",
  "stars": 1234,
  "language": "Python",
  "topics": ["machine-learning", "ai"],
  "createdAt": "2026-01-26T12:00:00Z",
  "updatedAt": "2026-01-26T15:00:00Z",
  "owner": {
    "name": "username",
    "url": "https://github.com/username"
  }
}

Fetch Media Sources (Industry News, Recalls, Outbreaks)

Combined media fetcher:

node scripts/fetch_media_sources.js [--days N] [--since YYYY-MM-DD] [--sources all|fsn,fsm,fda,fsis,cdc]
# Fetches from all 5 media sources, deduplicates, and merges
# Returns: JSON array of standardized media items

Individual source scripts:

# Food Safety News (RSS)
node scripts/fetch_food_safety_news.js [--days N] [--since YYYY-MM-DD]

# Food Safety Magazine (RSS, multiple topics)
node scripts/fetch_food_safety_magazine.js [--days N] [--since YYYY-MM-DD] [--topics 305,306,309,311,312,313]

# FDA Food Recalls (openFDA API)
node scripts/fetch_fda_recalls.js [--days N] [--since YYYY-MM-DD] [--limit N]

# USDA FSIS Recalls (scrape + openFDA fallback for meat/poultry/eggs)
node scripts/fetch_fsis_recalls.js [--days N] [--since YYYY-MM-DD]

# CDC Outbreak Investigations (multi-strategy: API + scrape)
node scripts/fetch_cdc_outbreaks.js [--days N] [--since YYYY-MM-DD]

All return standardized media objects:

{
  "source_type": "media",
  "sources": ["Food Safety News", "FDA"],
  "source_urls": ["https://...", "https://..."],
  "title": "Firm Recalls Product (Class I)",
  "summary": "Products may be contaminated with...",
  "date": "2026-01-28",
  "category": "Recall|Outbreak|Policy|Research|Alert",
  "severity": "high|medium|low",
  "pathogen": "Salmonella",
  "product": "ground beef",
  "states": ["CA", "NY"],
  "recall_number": "H-0393-2026",
  "tags": ["Microbiological"]
}

Source key: fsn=Food Safety News, fsm=Food Safety Magazine, fda=FDA, fsis=USDA FSIS, cdc=CDC

Deduplication: The combined script automatically merges duplicate items (same recall across multiple sources) by matching on recall number, title similarity, and pathogen+product combo. Merged items list all contributing sources.
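The merge rule described above can be sketched as a small helper. This is an illustrative sketch, not the repo's actual code: `sameEvent`, `normalize`, and `dedupeMedia` are hypothetical names, and "title similarity" is simplified here to equality after normalization.

```javascript
// Two media items are treated as the same event when they share a recall
// number, the same pathogen+product combo, or a normalized title.
function normalize(title) {
  return (title || "").toLowerCase().replace(/[^a-z0-9 ]/g, "").trim();
}

function sameEvent(a, b) {
  if (a.recall_number && a.recall_number === b.recall_number) return true;
  if (a.pathogen && a.product &&
      a.pathogen === b.pathogen && a.product === b.product) return true;
  return normalize(a.title) === normalize(b.title);
}

// Merge duplicates, keeping every contributing source and source URL.
function dedupeMedia(items) {
  const merged = [];
  for (const item of items) {
    const hit = merged.find((m) => sameEvent(m, item));
    if (hit) {
      hit.sources = [...new Set([...hit.sources, ...item.sources])];
      hit.source_urls = [...new Set([...hit.source_urls, ...item.source_urls])];
    } else {
      merged.push({ ...item });
    }
  }
  return merged;
}
```

A real implementation would likely use a fuzzier title match (e.g. token overlap), but the keep-all-sources merge shape is the important part.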

Check History

Load data/papers_history.jsonl to see what papers you've already reported.

Add new papers to avoid duplicates:

echo '{"id":"...","date":"2026-01-26"}' >> data/papers_history.jsonl

Check Author Watchlist

Load authors_watchlist.json:

{
  "authors": [
    {"name": "Jane Smith", "openalex_id": "A1234567890", "note": "..."}
  ]
}

Flag papers by these authors with a 👤 emoji.

Workflow

1. Determine Target Date

Usually yesterday:

const date = new Date();
date.setDate(date.getDate() - 1);
const yesterday = date.toISOString().split('T')[0]; // "2026-01-26"

2. Fetch Papers & GitHub Repos

Use all three sources in parallel:

# OpenAlex
node scripts/fetch_openalex.js 2026-01-26 "food safety,pathogen,salmonella"

# arXiv
node scripts/fetch_arxiv.js 2026-01-26 "cs.LG,cs.CV" "food,pathogen,dairy"

# GitHub Trending (scrapes github.com/trending)
node scripts/fetch_github_trending.js 30

Combine papers into one array and keep repos separate. Repos are scraped from GitHub's official trending page - no filtering needed, just take top N.

2b. Fetch Media Sources (Industry News & Alerts)

Fetch industry news, recalls, and outbreak reports in parallel with papers:

# Fetch all media sources for the past 1 day (yesterday's news)
node scripts/fetch_media_sources.js --days 1

This fetches from 5 sources simultaneously:

  • Food Safety News - RSS feed of industry news
  • Food Safety Magazine - RSS feeds across 6 topic areas (recalls, risk, chemical, allergen, microbiological, physical)
  • FDA Food Recalls - openFDA enforcement API (Class I/II/III recalls)
  • USDA FSIS Recalls - Meat/poultry/egg recalls (scrape + FDA fallback)
  • CDC Outbreaks - Active outbreak investigations (media API + page scrape)

The script automatically deduplicates items that appear in multiple sources (e.g., the same Salmonella recall in both FDA data and Food Safety News coverage), merging them into a single entry with all source citations.

Store media items separately from papers - they go in the "Industry News & Alerts" section of the digest.

3. Analyze Relevance with LLM (YOUR INTELLIGENCE HERE)

No keyword pre-filtering! Pass ALL fetched papers directly to LLM for analysis.

Hard gate: If abstract is empty/missing (or only whitespace), reject immediately (set relevance=1) and exclude from the digest. Do not send these to the LLM.

For each paper, analyze deeply:

Prompt yourself:

Analyze this paper for ${config.domain.name} relevance:

Title: ${paper.title}
Abstract: ${paper.abstract.substring(0, 600)}

Rate 1-5 (scope: AI/ML applied to food systems + AI for scientific research automation):
- 5 = Core focus on AI/ML for food safety/quality OR AI/GenAI systems that automate scientific production/research (e.g., Paper2Agent, virtual lab, agentic discovery, automated experiment design)
- 4 = Strong AI/ML application to food systems (dairy, meat, produce, pathogens) OR concrete AI system improving scientific workflows
- 3 = Moderate relevance (AI/ML methods applied to food systems or food-adjacent agriculture). Must involve actual AI/ML techniques.
- 2 = Weak (AI or food safety mentioned but not central; no actual AI methodology)
- 1 = Not relevant (no AI/ML component, or unrelated domain)

Also categorize into ONE of: ${config.domain.categories.join(', ')}

Respond with JSON:
{"relevance": <1-5>, "category": "<category>", "reasoning": "<one sentence>"}

Parse your own response and extract the analysis.

Only keep papers scoring >= config.filters.minRelevanceScore.

Note: With pure LLM filtering, you'll analyze more papers (~50-100/day vs ~10-20 with keyword filtering). This increases API costs slightly (~$0.15-0.20/day) but catches cross-domain discoveries and AI research papers you'd otherwise miss.
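Assuming the prompt and config shapes above, the gate-then-threshold logic might look like this (helper names are illustrative; the regex tolerates an LLM that wraps its JSON verdict in prose):

```javascript
// Hard gate: empty or whitespace-only abstracts never reach the LLM.
function hasUsableAbstract(paper) {
  return typeof paper.abstract === "string" && paper.abstract.trim().length > 0;
}

// The prompt asks for {"relevance": <1-5>, "category": ..., "reasoning": ...};
// tolerate surrounding prose and fall back to relevance 1 if unparseable.
function parseVerdict(llmResponse) {
  const match = llmResponse.match(/\{[\s\S]*\}/);
  try {
    return JSON.parse(match[0]);
  } catch {
    return { relevance: 1, category: "Other", reasoning: "unparseable response" };
  }
}

// Keep only papers at or above the configured threshold.
function keepPaper(verdict, minRelevanceScore) {
  return verdict.relevance >= minRelevanceScore;
}
```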

4. Select Top 5 Trending Repos

Take the top 5 repos by stars from the fetch results. No LLM filtering needed - just show what's genuinely trending across all of tech/GitHub for that day.
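As a sketch, that selection is just a sort and a slice (the function name is illustrative):

```javascript
// Sort a copy descending by stars and keep the top N (5 by default).
function topRepos(repos, n = 5) {
  return [...repos].sort((a, b) => b.stars - a.stars).slice(0, n);
}
```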

5. Check for Duplicates

Load data/papers_history.jsonl and skip papers you've already seen (by DOI or ID).

For repos, you can track them similarly in data/repos_history.jsonl (create if needed).

6. Check Author Watchlist

For each paper, check if any author matches watchlist (by name or OpenAlex ID).

Flag with isWatchlistAuthor: true and include author name.
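A possible shape for that check, matching the watchlist format shown earlier (the function name is illustrative):

```javascript
// Flag a paper when any author matches the watchlist by exact name
// or by OpenAlex ID.
function flagWatchlist(paper, watchlist) {
  const hit = paper.authors.find((a) =>
    watchlist.authors.some(
      (w) => w.name === a.name || (w.openalex_id && w.openalex_id === a.id)
    )
  );
  return hit
    ? { ...paper, isWatchlistAuthor: true, watchlistAuthor: hit.name }
    : { ...paper, isWatchlistAuthor: false };
}
```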

7. Format Digest

Create Telegram-formatted message with both papers and repos:

*Daily Research Debrief (${date})*

Found ${papers.length} new AI/${domain} papers (X OpenAlex, Y arXiv)
(Category breakdown: X Pathogen Detection, Y Quality Assessment)

โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
๐Ÿ“„ *${title}*
โญโญโญโญ | ๐Ÿฆ  ${category}
๐Ÿ‘ค *${watchlistAuthor}* | ๐Ÿ”“ | ๐Ÿ“Š ${citations} citations | ๐Ÿ“… ${date}
_${venue}_

${abstract.substring(0, 200)}...

[Read Full Paper](${url})
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”

*๐Ÿ”ฅ Top 5 Trending Repos (Today)*

๐Ÿ’ป *${repo.name}*
โญ ${repo.stars} stars | ${repo.language}
${repo.description}
[View Repository](${repo.url})

*๐Ÿšจ Industry News & Alerts (Past Day)*

(Group by category: Recalls first, then Outbreaks, then Policy/Research)

๐Ÿ”ด *${title}* (${severity})
๐Ÿ“ฐ ${sources.join(' + ')} | ๐Ÿ“… ${date}
๐Ÿฆ  ${pathogen} | ๐Ÿฅฉ ${product} | ๐Ÿ“ ${states.join(', ')}
${summary.substring(0, 200)}...
[Read More](${source_urls[0]})
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”

_(After listing all sections, add an LLM-generated summary)_

**Why these matter to you:**
${one_paragraph_summary_explaining_relevance_to_your_research}

(Analyzed ${totalCandidates} paper candidates, ${totalRepoCandidates} repos, ${mediaItems} media items)

Category emojis (papers):

  • Pathogen Detection: 🦠
  • Quality Assessment: ✅
  • Supply Chain Safety: 📦
  • Novel Sensors: 🔬
  • Predictive Modeling: 📈
  • Other: 📋

Category emojis (media items):

  • Recall: 🔴
  • Outbreak: 🚨
  • Alert: ⚠️
  • Policy: 📜
  • Research: 🔬

Severity indicators (media items):

  • high: 🔴 (Class I recalls, deaths, hospitalizations)
  • medium: 🟡 (Class II recalls, outbreaks, contamination)
  • low: 🟢 (Class III, policy updates, research)

Limit to config.filters.maxPapersPerDigest top papers and top 5 trending repos.
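The emoji tables above map cleanly onto plain lookup objects with "Other"-style fallbacks. A sketch; the constant and function names are illustrative:

```javascript
// Emoji lookups for the digest formatter, mirroring the tables above.
const CATEGORY_EMOJI = {
  "Pathogen Detection": "🦠",
  "Quality Assessment": "✅",
  "Supply Chain Safety": "📦",
  "Novel Sensors": "🔬",
  "Predictive Modeling": "📈",
  Other: "📋",
};
const MEDIA_EMOJI = { Recall: "🔴", Outbreak: "🚨", Alert: "⚠️", Policy: "📜", Research: "🔬" };
const SEVERITY_EMOJI = { high: "🔴", medium: "🟡", low: "🟢" };

// Fall back to the "Other" emoji for unrecognized paper categories.
function emojiFor(category) {
  return CATEGORY_EMOJI[category] || CATEGORY_EMOJI.Other;
}
```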

8. Deliver

Telegram: Use the message tool:

message({
  action: 'send',
  channel: 'telegram',
  target: config.output.telegram.chatId,
  message: digest
})

File: Save to ${config.output.filePath}/digest_${date}.txt

9. Update History

Append each reported paper to data/papers_history.jsonl:

echo '{"id":"${paper.id}","doi":"${paper.doi}","date":"${date}","title":"${paper.title}"}' >> data/papers_history.jsonl

Append each reported repo to data/repos_history.jsonl:

echo '{"id":"${repo.id}","name":"${repo.name}","date":"${date}"}' >> data/repos_history.jsonl

10. Save Media History

After delivering the digest, save media items to history for deduplication and memory sync:

# For each media item included in the digest:
echo '{"source_type":"media","sources":["Food Safety News","FDA"],"source_urls":["https://..."],"title":"...","summary":"...","date":"2026-01-28","category":"Recall","severity":"high","pathogen":"Salmonella","product":"ground beef"}' >> data/media_history.jsonl

This tracks what media items have been reported to avoid duplicates in future digests.

11. Sync to Memory (IMPORTANT - ALWAYS RUN LAST!)

After updating all history files, sync EVERYTHING to the user's research memory:

node scripts/sync_to_memory.js

This script syncs:

  • Papers → memory/research/all_papers.json + papers_index.md (full OpenAlex metadata, 📬 tags)
  • Media items → memory/research/media_history.json + media_index.md (recalls, outbreaks, news, rolling 500-item archive)
  • Digest log → memory/research/digest_log.jsonl (timestamp + counts per run)

Why this matters: The user's research memory (memory/research/) is their persistent knowledge base. Every debrief should leave a trace - papers, recalls, outbreaks, and news are all searchable and accessible for future reference.

Error Handling

  • API failures: Try both sources, report what works
  • No papers found: Send brief update "No new papers matching criteria for ${date}"
  • LLM rate limits: Analyze what you can, skip rest (mention in digest)
  • Telegram failures: Fall back to file output

First-Time Setup

Recommended: Use the setup script

cd skills/daily-debrief
./scripts/setup.sh

The interactive setup will:

  1. Ask for research domain name
  2. Configure domain keywords
  3. Prompt to add authors to watchlist (new papers by these authors get flagged with 👤)
  4. Create necessary directories and files
  5. Optionally set up the daily cron job

Manual setup: If config.json doesn't exist:

  1. Copy config.example.json to config.json
  2. Alert user to configure: domain keywords, Telegram chatId (if using)
  3. Ask if they want to add any authors to authors_watchlist.json
  4. Wait for configuration before first run

Example Agent Execution

Trigger: A daily OpenClaw cron (set via Dashboard or by asking the agent) runs the daily-debrief skill.

You wake up and:

  1. "Reading config.json... Domain: Food Safety Research"
  2. "Fetching yesterday's papers and trending GitHub repos (2026-01-26)..."
  3. "Fetching media sources (recalls, outbreaks, industry news)..."
  4. "Found 47 candidates from OpenAlex, 3 from arXiv, 30 from GitHub, 46 media items"
  5. "Analyzing paper relevance..." (process each paper)
  6. "3 papers scored 4+, selecting top 3 trending repos, curating top media alerts..."
  7. "Posting to Telegram..." (use message tool)
  8. "Updating history..."
  9. "Syncing to memory/research..." (run sync_to_memory.js)
  10. "Done! 🎉"

User wakes up to digest in Telegram with papers, repos, AND industry news. Papers automatically added to research memory.

Multi-Domain Support

This skill works for ANY research field. Users just edit config.json:

Example: Materials Science

{
  "domain": {
    "name": "2D Materials Research",
    "keywords": {
      "technical": ["machine learning", "DFT", "molecular dynamics"],
      "domain": ["graphene", "MoS2", "2D materials", "van der Waals"]
    },
    "categories": ["Synthesis", "Properties", "Applications", "Simulation"]
  }
}

Example: Drug Discovery

{
  "domain": {
    "name": "AI Drug Discovery",
    "keywords": {
      "technical": ["deep learning", "transformer", "GNN"],
      "domain": ["drug discovery", "ADMET", "binding affinity", "molecular"]
    },
    "categories": ["Target Identification", "Lead Optimization", "Toxicity", "Repurposing"]
  }
}

The agent logic stays the same - only keywords and categories change!

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing | GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/snapshot"
curl -s "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract"
curl -s "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing | runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing | no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared | protocol-neighbors
GITHUB_REPOS | activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS | cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP | OPENCLAW
GITHUB_REPOS | AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP | OPENCLAW
GITHUB_REPOS | CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-16T23:44:22.303Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "customize",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "track",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "this",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:customize|supported|profile capability:track|supported|profile capability:this|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Chenhaoq87",
    "href": "https://github.com/chenhaoq87/daily-debrief",
    "sourceUrl": "https://github.com/chenhaoq87/daily-debrief",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:24:44.629Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:24:44.629Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "1 GitHub stars",
    "href": "https://github.com/chenhaoq87/daily-debrief",
    "sourceUrl": "https://github.com/chenhaoq87/daily-debrief",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:24:44.629Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/chenhaoq87-daily-debrief/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub ยท GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
