Crawler Summary

debate · answer-first brief

Orchestrate a debate between Claude and Codex to reach consensus. Use when the user asks for a "second opinion", "debate", "consensus", "crosscheck", or wants another model to review a proposal or diff. Allowed tools: Read, Glob, Grep, Bash. The Debate Skill orchestrates a back-and-forth debate between two AI models (Claude and Codex) until they reach consensus on a technical decision, code review, or root cause analysis. Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Freshness

Last checked 4/15/2026

Best For

debate is best for general automation workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 94/100

debate

Orchestrate a debate between Claude and Codex to reach consensus. Use when the user asks for a "second opinion", "debate", "consensus", "crosscheck", or wants another model to review a proposal or diff.

OpenClaw (self-declared)

Public facts

4

Change events

1

Artifacts

0

Freshness

Apr 15, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Aroc

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Setup snapshot

git clone https://github.com/aroc/debate-skill.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Aroc

profile · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

# Debate with Codex (default when running from Claude)
/debate should we use Redis or Memcached?

# Explicitly debate with Claude (e.g., Claude vs Claude)
/debate --vs claude --model opus should we use a monorepo?

# Debate with Codex using a specific model
/debate --vs codex --model o3 review my authentication changes

# Debate with Codex with low reasoning (faster, cheaper)
/debate --vs codex --model gpt-5.3-codex --reasoning low quick sanity check

# Debate with Codex with high reasoning (thorough analysis)
/debate --vs codex --reasoning high review this complex algorithm

# Quick single-round feedback from Codex
/debate --quick --vs codex what do you think of this API design?

bash

# For code changes
git diff HEAD 2>/dev/null || git diff --staged 2>/dev/null

# For specific files, read them
# For architecture discussions, explore relevant code

text

## Topic
[User's original question/request]

## Context
[Relevant code, diffs, or information]

## My Proposal
[Your analysis and recommendation]

## Key Points
- [Point 1]
- [Point 2]
- [Point 3]

bash

# Check if running from Claude
if [ -n "$CLAUDE_SESSION_ID" ] || [ -n "$CLAUDE_CODE_ENTRYPOINT" ]; then
    CURRENT="claude"
    DEFAULT_OPPONENT="codex"
else
    CURRENT="codex"
    DEFAULT_OPPONENT="claude"
fi

bash

# Basic invocation (opponent required)
bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent codex \
    "Your prompt here"

# With specific model
bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent claude \
    --model opus \
    "Your prompt here"

# With model and reasoning level (Codex only)
bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent codex \
    --model gpt-5.3-codex \
    --reasoning high \
    "Your prompt here"

bash

bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent "$OPPONENT" \
    ${MODEL:+--model "$MODEL"} \
    ${REASONING:+--reasoning "$REASONING"} \
    "
You are reviewing a proposal.

## Context
[Include gathered context]

## Proposal
[Your proposal]

## Your Task
Evaluate this proposal critically. Consider:
- Technical correctness
- Edge cases missed
- Alternative approaches
- Potential issues

End your response with exactly one verdict:
- AGREE: [confirmation and why]
- REVISE: [specific changes needed]
- DISAGREE: [fundamental issues]
"

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: debate
description: |
  Orchestrate a debate between Claude and Codex to reach consensus. Use when the user asks for a "second opinion", "debate", "consensus", "crosscheck", or wants another model to review a proposal or diff.
allowed-tools: Read, Glob, Grep, Bash

Debate Skill

Orchestrate a back-and-forth debate between two AI models (Claude and Codex) until they reach consensus on a technical decision, code review, or root cause analysis.

Activation

This skill activates when:

  • User invokes /debate [topic]
  • User asks for a "second opinion", "debate", "consensus", or "crosscheck"
  • User wants another model to review a proposal, diff, or decision

Arguments

  • --vs <claude|codex>: Choose opponent (default: auto-detect opposite of current)
  • --model <model>: Specify opponent's model (e.g., opus, sonnet, o3, gpt-4.1)
  • --reasoning <level>: Codex reasoning effort: low, medium, high (default: from config)
  • --quick: Single round only, get feedback without iterating to consensus
  • Topic: Any technical question, code review request, or decision point
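For orientation, the flag handling above can be sketched as a small POSIX-shell parser. This is illustrative only: the skill parses arguments in-model rather than via a script, and the variable names below are assumptions, not part of the skill.

```shell
# Hedged sketch of /debate flag parsing (illustrative variable names).
OPPONENT="" MODEL="" REASONING="" QUICK=0 TOPIC=""

set -- --vs codex --model o3 --quick review my auth changes  # sample argv

while [ $# -gt 0 ]; do
    case "$1" in
        --vs)        OPPONENT=$2; shift 2 ;;
        --model)     MODEL=$2; shift 2 ;;
        --reasoning) REASONING=$2; shift 2 ;;
        --quick)     QUICK=1; shift ;;
        *)           TOPIC="$TOPIC $1"; shift ;;  # everything else is the topic
    esac
done
TOPIC=${TOPIC# }   # trim the leading space
```

Everything that is not a recognized flag is accumulated as the free-form topic, mirroring how the examples below mix flags and plain text.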

Examples

# Debate with Codex (default when running from Claude)
/debate should we use Redis or Memcached?

# Explicitly debate with Claude (e.g., Claude vs Claude)
/debate --vs claude --model opus should we use a monorepo?

# Debate with Codex using a specific model
/debate --vs codex --model o3 review my authentication changes

# Debate with Codex with low reasoning (faster, cheaper)
/debate --vs codex --model gpt-5.3-codex --reasoning low quick sanity check

# Debate with Codex with high reasoning (thorough analysis)
/debate --vs codex --reasoning high review this complex algorithm

# Quick single-round feedback from Codex
/debate --quick --vs codex what do you think of this API design?

Available Models

The --model value is passed directly to the CLI, so use whatever model identifiers that CLI accepts.

Claude CLI models:

  • Shorthand: opus, sonnet, haiku
  • Full IDs: claude-opus-4-5-20251101, claude-sonnet-4-6-20250514, etc.
  • Check available models: claude --help or your Claude config

Codex CLI models:

  • Examples: o3, o4-mini, gpt-4.1, gpt-5.3-codex
  • Check available models: codex --help or your Codex config
  • Default is determined by your ~/.codex/config.toml

Orchestration Steps

Important: The skill's base directory is provided when the skill is invoked (shown as "Base directory for this skill: ..."). Use this as SKILL_DIR in all script paths below.

Step 1: Gather Context

Based on the topic, gather relevant context:

# For code changes
git diff HEAD 2>/dev/null || git diff --staged 2>/dev/null

# For specific files, read them
# For architecture discussions, explore relevant code

Step 2: Formulate Initial Proposal

As Claude, analyze the topic and formulate a clear proposal:

## Topic
[User's original question/request]

## Context
[Relevant code, diffs, or information]

## My Proposal
[Your analysis and recommendation]

## Key Points
- [Point 1]
- [Point 2]
- [Point 3]

Step 3: Determine Opponent

Parse the user's arguments to determine:

  1. Opponent CLI: From --vs flag, or default to the opposite of current (if running from Claude, default to codex; if from Codex, default to claude)
  2. Opponent Model: From --model flag, or omit to use CLI default

Detect current environment:

# Check if running from Claude
if [ -n "$CLAUDE_SESSION_ID" ] || [ -n "$CLAUDE_CODE_ENTRYPOINT" ]; then
    CURRENT="claude"
    DEFAULT_OPPONENT="codex"
else
    CURRENT="codex"
    DEFAULT_OPPONENT="claude"
fi

Step 4: Invoke Opponent

Use the invoke script with explicit opponent, optional model, and optional reasoning level:

# Basic invocation (opponent required)
bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent codex \
    "Your prompt here"

# With specific model
bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent claude \
    --model opus \
    "Your prompt here"

# With model and reasoning level (Codex only)
bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent codex \
    --model gpt-5.3-codex \
    --reasoning high \
    "Your prompt here"

Example critique prompt:

bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent "$OPPONENT" \
    ${MODEL:+--model "$MODEL"} \
    ${REASONING:+--reasoning "$REASONING"} \
    "
You are reviewing a proposal.

## Context
[Include gathered context]

## Proposal
[Your proposal]

## Your Task
Evaluate this proposal critically. Consider:
- Technical correctness
- Edge cases missed
- Alternative approaches
- Potential issues

End your response with exactly one verdict:
- AGREE: [confirmation and why]
- REVISE: [specific changes needed]
- DISAGREE: [fundamental issues]
"

Step 5: Parse Verdict

The invoke script outputs the response file path to stdout. Capture it and parse:

# Capture the output file path from the invoke script
RESPONSE_FILE=$(bash $SKILL_DIR/scripts/invoke_other.sh \
    --opponent "$OPPONENT" \
    ${MODEL:+--model "$MODEL"} \
    ${REASONING:+--reasoning "$REASONING"} \
    "Your critique prompt here")

# Parse the verdict from the response
python3 $SKILL_DIR/scripts/parse_verdict.py "$RESPONSE_FILE"

This returns: AGREE|REVISE|DISAGREE and the explanation.
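A rough shell approximation of that parsing step, for readers without the Python script at hand (the real parse_verdict.py may be more robust; the sample response text here is invented):

```shell
# Approximate parse_verdict.py in shell: take the last line that starts
# with one of the three verdict keywords, then split verdict from explanation.
RESPONSE_FILE=$(mktemp)
printf '%s\n' "Looks reasonable overall." "AGREE: the revised plan is sound" > "$RESPONSE_FILE"

VERDICT_LINE=$(grep -E '^(AGREE|REVISE|DISAGREE):' "$RESPONSE_FILE" | tail -n 1)
VERDICT=${VERDICT_LINE%%:*}        # "AGREE"
EXPLANATION=${VERDICT_LINE#*: }    # text after the verdict keyword
rm -f "$RESPONSE_FILE"
```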

Step 6: Iterate Based on Verdict

If AGREE:

  • Consensus reached! Present final solution to user.

If REVISE:

  • Incorporate the suggested changes into your proposal
  • Re-invoke the opponent with the revised proposal
  • Continue until AGREE or max rounds

If DISAGREE:

  • Address the fundamental concerns raised
  • Formulate a counter-proposal or clarification
  • Re-invoke the opponent
  • Continue until AGREE or max rounds

Step 7: Handle Max Rounds

If 5 rounds pass without consensus, generate a disagreement summary (see below).
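Steps 5 through 7 together form a single loop, which can be sketched as follows. The invoke_opponent function is a mock standing in for the real scripts/invoke_other.sh and scripts/parse_verdict.py calls, so the sketch runs standalone:

```shell
# Iterate-until-consensus loop (Steps 6-7), with a mocked opponent.
MAX_ROUNDS=5
ROUND=1
VERDICT=""

# Mock: REVISE for the first two rounds, then AGREE.
invoke_opponent() {
    if [ "$ROUND" -lt 3 ]; then
        echo "REVISE: tighten the proposal"
    else
        echo "AGREE: looks solid"
    fi
}

while [ "$ROUND" -le "$MAX_ROUNDS" ]; do
    RESPONSE=$(invoke_opponent)
    VERDICT=${RESPONSE%%:*}            # leading verdict token
    [ "$VERDICT" = "AGREE" ] && break  # consensus reached
    ROUND=$((ROUND + 1))               # REVISE/DISAGREE: go another round
done

if [ "$VERDICT" = "AGREE" ]; then
    echo "CONSENSUS REACHED (Round $ROUND)"
else
    echo "NO CONSENSUS REACHED ($MAX_ROUNDS rounds)"
fi
```

In the real skill, each new round would also fold the opponent's critique back into the proposal before re-invoking.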

Quick Mode (--quick)

When --quick flag is present:

  1. Formulate initial proposal
  2. Get single critique from opposing model
  3. Present both perspectives to user
  4. Do NOT iterate - stop after one round regardless of verdict

Output Format

Display the debate to the user in this format. Use the actual model names (e.g., "CLAUDE (opus)" or "CODEX (o3)"):

═══════════════════════════════════════════════
CONSENSUS DEBATE: [Topic]
Participants: [Current] vs [Opponent] ([model if specified])
═══════════════════════════════════════════════

--- Round 1 ---
[CURRENT]: [proposal]

[OPPONENT]: [critique]
Verdict: [AGREE|REVISE|DISAGREE]

--- Round 2 --- (if needed)
[CURRENT]: [revised proposal]

[OPPONENT]: [response]
Verdict: [AGREE|REVISE|DISAGREE]

═══════════════════════════════════════════════
CONSENSUS REACHED (Round N)
═══════════════════════════════════════════════
[Final agreed solution with key points]

No Consensus Summary

When max rounds (5) reached without consensus:

═══════════════════════════════════════════════
NO CONSENSUS REACHED (5 rounds)
═══════════════════════════════════════════════

## Points of Agreement
- [Things both models agreed on]

## Points of Disagreement
- [Issue 1]: [Current] thinks X, [Opponent] thinks Y
- [Issue 2]: ...

## Root Cause of Disagreement
[Why consensus couldn't be reached - e.g., different assumptions,
missing information, genuinely valid competing approaches]

## Recommendation
[Best path forward given the disagreement]
═══════════════════════════════════════════════

Error Handling

CLI not installed:

if ! command -v codex &> /dev/null && ! command -v claude &> /dev/null; then
    echo "Neither codex nor claude CLI found. Falling back to self-critique mode."
    # Provide self-critique instead
fi

Network/timeout error:

  • Retry once
  • If still fails, present partial results with note about the error

Empty response:

  • Treat as implicit DISAGREE
  • Request clarification in next round
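The retry-once rule for network and timeout errors can be sketched like this; run_debate_round is a hypothetical stand-in for the real invoke_other.sh call and simply simulates one transient failure:

```shell
# Retry once on failure; if both attempts fail, fall back to partial results.
ATTEMPTS=0
run_debate_round() {
    ATTEMPTS=$((ATTEMPTS + 1))
    [ "$ATTEMPTS" -ge 2 ]   # mock: fail the first attempt, succeed the second
}

if run_debate_round || run_debate_round; then
    STATUS="ok"
else
    STATUS="partial"   # present partial results with a note about the error
fi
```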

Prompt Templates

Initial Critique Request

You are {opponent_name}, reviewing a proposal from {current_name}.

## Original Question
{user_topic}

## Context
{gathered_context}

## Proposal Being Reviewed
{proposal}

## Your Task
Critically evaluate this proposal. Consider:
1. Is the technical approach correct?
2. Are there edge cases or failure modes missed?
3. Are there better alternatives?
4. What are the risks or downsides?

Be constructive but thorough. If you agree, explain why it's solid.
If you have concerns, be specific about what should change.

End with exactly one of:
- AGREE: [your confirmation and reasoning]
- REVISE: [specific changes you recommend]
- DISAGREE: [fundamental issues that need addressing]

Revision Request

You are {opponent_name}, continuing a debate with {current_name}.

## Original Question
{user_topic}

## Previous Exchange
{conversation_history}

## Latest Revision
{revised_proposal}

## Your Task
Evaluate whether the revision addresses your previous concerns.

End with exactly one of:
- AGREE: [if concerns are resolved]
- REVISE: [if minor issues remain]
- DISAGREE: [if fundamental issues persist]

Implementation Notes

  • Accumulate conversation history between rounds
  • Each model should see the full debate context
  • Keep proposals concise but complete
  • Focus on technical merit, not rhetorical style
  • Maximum response from opposing model: ~2000 tokens

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/aroc-debate-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/aroc-debate-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/aroc-debate-skill/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/aroc-debate-skill/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/aroc-debate-skill/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/aroc-debate-skill/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-16T23:30:50.115Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
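The retryPolicy above (3 attempts, 500/1500/3500 ms backoff on HTTP 429/503 or network timeouts) could be honored from a shell client roughly as below. fetch_snapshot is a mock standing in for the curl call to the snapshot endpoint, so the sketch runs offline:

```shell
# Honor the published retryPolicy: up to 3 attempts, backing off between them.
BACKOFFS="0.5 1.5 3.5"   # seconds, mirroring backoffMs [500, 1500, 3500]
ATTEMPT=0
OK=1

fetch_snapshot() {
    ATTEMPT=$((ATTEMPT + 1))
    # Real call would be:
    #   curl -s "https://xpersona.co/api/v1/agents/aroc-debate-skill/snapshot"
    [ "$ATTEMPT" -ge 2 ]   # mock: simulate HTTP_503 on the first attempt only
}

for DELAY in $BACKOFFS; do
    if fetch_snapshot; then
        OK=0
        break
    fi
    sleep "$DELAY"   # back off before the next retryable attempt
done
```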

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Aroc",
    "href": "https://github.com/aroc/debate-skill",
    "sourceUrl": "https://github.com/aroc/debate-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T01:12:39.695Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/aroc-debate-skill/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T01:12:39.695Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/aroc-debate-skill/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/aroc-debate-skill/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
