Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Orchestrate a debate between Claude and Codex to reach consensus. Use when the user asks for a "second opinion", "debate", "consensus", "crosscheck", or wants another model to review a proposal or diff.

---
name: debate
description: |
  Orchestrate a debate between Claude and Codex to reach consensus. Use when the user asks for a "second opinion", "debate", "consensus", "crosscheck", or wants another model to review a proposal or diff.
allowed-tools: Read, Glob, Grep, Bash
---

Debate Skill: Orchestrate a back-and-forth debate between two AI models (Claude and Codex) until they reach consensus on a technical decision, code review, or root cause analysis.

Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
The debate skill is best for general automation workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require deterministic execution; the capability contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Aroc
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Setup snapshot
git clone https://github.com/aroc/debate-skill.git

Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Aroc
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
bash
# Debate with Codex (default when running from Claude)
/debate should we use Redis or Memcached?

# Explicitly debate with Claude (e.g., Claude vs Claude)
/debate --vs claude --model opus should we use a monorepo?

# Debate with Codex using a specific model
/debate --vs codex --model o3 review my authentication changes

# Debate with Codex with low reasoning (faster, cheaper)
/debate --vs codex --model gpt-5.3-codex --reasoning low quick sanity check

# Debate with Codex with high reasoning (thorough analysis)
/debate --vs codex --reasoning high review this complex algorithm

# Quick single-round feedback from Codex
/debate --quick --vs codex what do you think of this API design?
bash
# For code changes
git diff HEAD 2>/dev/null || git diff --staged 2>/dev/null

# For specific files, read them
# For architecture discussions, explore relevant code
text
## Topic
[User's original question/request]

## Context
[Relevant code, diffs, or information]

## My Proposal
[Your analysis and recommendation]

## Key Points
- [Point 1]
- [Point 2]
- [Point 3]
bash
# Check if running from Claude
if [ -n "$CLAUDE_SESSION_ID" ] || [ -n "$CLAUDE_CODE_ENTRYPOINT" ]; then
CURRENT="claude"
DEFAULT_OPPONENT="codex"
else
CURRENT="codex"
DEFAULT_OPPONENT="claude"
fi
bash
# Basic invocation (opponent required)
bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent codex \
"Your prompt here"
# With specific model
bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent claude \
--model opus \
"Your prompt here"
# With model and reasoning level (Codex only)
bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent codex \
--model gpt-5.3-codex \
--reasoning high \
"Your prompt here"bash
bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent "$OPPONENT" \
${MODEL:+--model "$MODEL"} \
${REASONING:+--reasoning "$REASONING"} \
"
You are reviewing a proposal.
## Context
[Include gathered context]
## Proposal
[Your proposal]
## Your Task
Evaluate this proposal critically. Consider:
- Technical correctness
- Edge cases missed
- Alternative approaches
- Potential issues
End your response with exactly one verdict:
- AGREE: [confirmation and why]
- REVISE: [specific changes needed]
- DISAGREE: [fundamental issues]
"Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
Orchestrate a back-and-forth debate between two AI models (Claude and Codex) until they reach consensus on a technical decision, code review, or root cause analysis.
This skill activates when:
/debate [topic]
--vs <claude|codex>: Choose opponent (default: auto-detect opposite of current)
--model <model>: Specify opponent's model (e.g., opus, sonnet, o3, gpt-4.1)
--reasoning <level>: Codex reasoning effort: low, medium, high (default: from config)
--quick: Single round only, get feedback without iterating to consensus

# Debate with Codex (default when running from Claude)
/debate should we use Redis or Memcached?
# Explicitly debate with Claude (e.g., Claude vs Claude)
/debate --vs claude --model opus should we use a monorepo?
# Debate with Codex using a specific model
/debate --vs codex --model o3 review my authentication changes
# Debate with Codex with low reasoning (faster, cheaper)
/debate --vs codex --model gpt-5.3-codex --reasoning low quick sanity check
# Debate with Codex with high reasoning (thorough analysis)
/debate --vs codex --reasoning high review this complex algorithm
# Quick single-round feedback from Codex
/debate --quick --vs codex what do you think of this API design?
The --model value is passed directly to the CLI, so use whatever model identifiers that CLI accepts.
Claude CLI models:
opus, sonnet, haiku
claude-opus-4-5-20251101, claude-sonnet-4-6-20250514, etc.
claude --help or your Claude config

Codex CLI models:
o3, o4-mini, gpt-4.1, gpt-5.3-codex
codex --help or your Codex config (~/.codex/config.toml)

Important: The skill's base directory is provided when the skill is invoked (shown as "Base directory for this skill: ..."). Use this as SKILL_DIR in all script paths below.
Based on the topic, gather relevant context:
# For code changes
git diff HEAD 2>/dev/null || git diff --staged 2>/dev/null
# For specific files, read them
# For architecture discussions, explore relevant code
As Claude, analyze the topic and formulate a clear proposal:
## Topic
[User's original question/request]
## Context
[Relevant code, diffs, or information]
## My Proposal
[Your analysis and recommendation]
## Key Points
- [Point 1]
- [Point 2]
- [Point 3]
Parse the user's arguments to determine:
--vs flag, or default to the opposite of current (if running from Claude, default to codex; if from Codex, default to claude)
--model flag, or omit to use CLI default

Detect current environment:
# Check if running from Claude
if [ -n "$CLAUDE_SESSION_ID" ] || [ -n "$CLAUDE_CODE_ENTRYPOINT" ]; then
CURRENT="claude"
DEFAULT_OPPONENT="codex"
else
CURRENT="codex"
DEFAULT_OPPONENT="claude"
fi
Use the invoke script with explicit opponent, optional model, and optional reasoning level:
# Basic invocation (opponent required)
bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent codex \
"Your prompt here"
# With specific model
bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent claude \
--model opus \
"Your prompt here"
# With model and reasoning level (Codex only)
bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent codex \
--model gpt-5.3-codex \
--reasoning high \
"Your prompt here"
Example critique prompt:
bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent "$OPPONENT" \
${MODEL:+--model "$MODEL"} \
${REASONING:+--reasoning "$REASONING"} \
"
You are reviewing a proposal.
## Context
[Include gathered context]
## Proposal
[Your proposal]
## Your Task
Evaluate this proposal critically. Consider:
- Technical correctness
- Edge cases missed
- Alternative approaches
- Potential issues
End your response with exactly one verdict:
- AGREE: [confirmation and why]
- REVISE: [specific changes needed]
- DISAGREE: [fundamental issues]
"
The invoke script outputs the response file path to stdout. Capture it and parse:
# Capture the output file path from the invoke script
RESPONSE_FILE=$(bash $SKILL_DIR/scripts/invoke_other.sh \
--opponent "$OPPONENT" \
${MODEL:+--model "$MODEL"} \
${REASONING:+--reasoning "$REASONING"} \
"Your critique prompt here")
# Parse the verdict from the response
python3 $SKILL_DIR/scripts/parse_verdict.py "$RESPONSE_FILE"
This returns: AGREE|REVISE|DISAGREE and the explanation.
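The source of parse_verdict.py is not published on this page; the following is a rough sketch of what such a parser could look like. The regex, function name, and tuple return shape are assumptions, not the script's actual interface.

```python
import re

# Hypothetical sketch: scan the response for lines that carry one of
# the three verdict keywords and keep the last one, mirroring the
# instruction "End your response with exactly one verdict".
VERDICT_RE = re.compile(r"^-?\s*(AGREE|REVISE|DISAGREE)\s*:\s*(.*)$")

def parse_verdict(response_text: str):
    """Return (verdict, explanation), or (None, None) if no verdict found."""
    verdict, explanation = None, None
    for line in response_text.splitlines():
        match = VERDICT_RE.match(line.strip())
        if match:
            verdict, explanation = match.group(1), match.group(2)
    return verdict, explanation
```

A parser like this tolerates both the bulleted (`- AGREE: ...`) and bare (`AGREE: ...`) forms shown in the prompt templates.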
If AGREE:
If REVISE:
If DISAGREE:
If 5 rounds pass without consensus, generate a disagreement summary (see below).
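The round-by-round control flow described above can be sketched in Python. This is illustrative only: in the real skill the model itself drives the loop via CLI calls, and `invoke_opponent` and `revise` here are hypothetical stand-ins for those steps.

```python
MAX_ROUNDS = 5  # matches the documented round cap

def run_debate(proposal, invoke_opponent, revise):
    """Iterate proposal/critique rounds until AGREE or MAX_ROUNDS is hit.

    invoke_opponent(proposal) -> (verdict, feedback)
    revise(proposal, feedback) -> new proposal
    """
    for round_no in range(1, MAX_ROUNDS + 1):
        verdict, feedback = invoke_opponent(proposal)
        if verdict == "AGREE":
            return ("consensus", round_no, proposal)
        # REVISE or DISAGREE: fold the feedback into a new proposal
        proposal = revise(proposal, feedback)
    # 5 rounds without consensus -> caller emits a disagreement summary
    return ("no-consensus", MAX_ROUNDS, proposal)
```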
When the --quick flag is present:
Display the debate to the user in this format. Use the actual model names (e.g., "CLAUDE (opus)" or "CODEX (o3)"):
═══════════════════════════════════════════════
CONSENSUS DEBATE: [Topic]
Participants: [Current] vs [Opponent] ([model if specified])
═══════════════════════════════════════════════
--- Round 1 ---
[CURRENT]: [proposal]
[OPPONENT]: [critique]
Verdict: [AGREE|REVISE|DISAGREE]
--- Round 2 --- (if needed)
[CURRENT]: [revised proposal]
[OPPONENT]: [response]
Verdict: [AGREE|REVISE|DISAGREE]
═══════════════════════════════════════════════
CONSENSUS REACHED (Round N)
═══════════════════════════════════════════════
[Final agreed solution with key points]
When max rounds (5) reached without consensus:
═══════════════════════════════════════════════
NO CONSENSUS REACHED (5 rounds)
═══════════════════════════════════════════════
## Points of Agreement
- [Things both models agreed on]
## Points of Disagreement
- [Issue 1]: [Current] thinks X, [Opponent] thinks Y
- [Issue 2]: ...
## Root Cause of Disagreement
[Why consensus couldn't be reached - e.g., different assumptions,
missing information, genuinely valid competing approaches]
## Recommendation
[Best path forward given the disagreement]
═══════════════════════════════════════════════
CLI not installed:
if ! command -v codex &> /dev/null && ! command -v claude &> /dev/null; then
echo "Neither codex nor claude CLI found. Falling back to self-critique mode."
# Provide self-critique instead
fi
Network/timeout error:
Empty response:
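For network/timeout errors, a bounded retry is the usual remedy. Below is a minimal Python sketch whose schedule mirrors the retryPolicy published in this page's machine-readable Invocation Guide (3 attempts, 500/1500/3500 ms backoff, retry on 429/503/timeouts); `RetryableError` and `with_retries` are illustrative names, not part of the skill.

```python
import time

# Backoff schedule taken from the published retryPolicy (in ms).
BACKOFF_MS = [500, 1500, 3500]

class RetryableError(Exception):
    """Stands in for HTTP 429, HTTP 503, or a network timeout."""

def with_retries(call, sleep=time.sleep):
    """Run call(); on RetryableError, back off and retry, up to 3 attempts."""
    last_error = None
    for attempt, delay_ms in enumerate(BACKOFF_MS, start=1):
        try:
            return call()
        except RetryableError as err:
            last_error = err
            if attempt < len(BACKOFF_MS):
                sleep(delay_ms / 1000.0)
    raise last_error
```

The `sleep` parameter is injectable so the backoff can be observed or stubbed in tests.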
You are {opponent_name}, reviewing a proposal from {current_name}.
## Original Question
{user_topic}
## Context
{gathered_context}
## Proposal Being Reviewed
{proposal}
## Your Task
Critically evaluate this proposal. Consider:
1. Is the technical approach correct?
2. Are there edge cases or failure modes missed?
3. Are there better alternatives?
4. What are the risks or downsides?
Be constructive but thorough. If you agree, explain why it's solid.
If you have concerns, be specific about what should change.
End with exactly one of:
- AGREE: [your confirmation and reasoning]
- REVISE: [specific changes you recommend]
- DISAGREE: [fundamental issues that need addressing]
You are {opponent_name}, continuing a debate with {current_name}.
## Original Question
{user_topic}
## Previous Exchange
{conversation_history}
## Latest Revision
{revised_proposal}
## Your Task
Evaluate whether the revision addresses your previous concerns.
End with exactly one of:
- AGREE: [if concerns are resolved]
- REVISE: [if minor issues remain]
- DISAGREE: [if fundamental issues persist]
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/aroc-debate-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/aroc-debate-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/aroc-debate-skill/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/aroc-debate-skill/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/aroc-debate-skill/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/aroc-debate-skill/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-16T23:30:50.115Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Aroc",
"href": "https://github.com/aroc/debate-skill",
"sourceUrl": "https://github.com/aroc/debate-skill",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T01:12:39.695Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/aroc-debate-skill/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T01:12:39.695Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/aroc-debate-skill/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]