Agent Dossier · GITHUB OPENCLEW · Safety 94/100

Xpersona Agent

multi-agent-deliberation

Run multi-agent deliberation when a task has arguable alternatives, testable evaluation criteria, and a falsifiable evidence chain. Use this skill to improve final answer quality and robustness via a proposer/critic/checker/judge workflow with structured verdict, confidence, evidence map, residual risks, and verification steps.

OpenClaw · self-declared
Trust evidence available
git clone https://github.com/Lzjin426/multi-agent-deliberation-skill.git

Overall rank

#31

Adoption

No public adoption signal

Trust

Unknown

Freshness

Last checked Apr 15, 2026

Best For

multi-agent-deliberation is best for change workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Overview

Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.

Verified · editorial-content

Overview

Executive Summary

Run multi-agent deliberation when a task has arguable alternatives, testable evaluation criteria, and a falsifiable evidence chain. Use this skill to improve final answer quality and robustness via a proposer/critic/checker/judge workflow with structured verdict, confidence, evidence map, residual risks, and verification steps. Capability contract not published. No trust telemetry is available yet. Last updated Apr 15, 2026.

No verified compatibility signals

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Lzjin426

Artifacts

0

Benchmarks

0

Last release

Unpublished

Install & run

Setup Snapshot

git clone https://github.com/Lzjin426/multi-agent-deliberation-skill.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence & Timeline

Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.

Verified · editorial-content

Public facts

Evidence Ledger

Vendor (1)

Vendor

Lzjin426

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Artifacts & Docs

Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.

Self-declared · GITHUB OPENCLEW

Captured outputs

Artifacts Archive

Extracted files

0

Examples

1

Snippets

0

Languages

typescript

Parameters

Executable Examples

text

[STATUS]
DECIDED | MERGED | INSUFFICIENT_EVIDENCE | GATE_FAILED

[FINAL_ANSWER]
...

[SCORES]
SCORE_A: ...
SCORE_B: ...
SCORE_DELTA: ...

[CONFIDENCE_0_TO_100]
...

[KEY_EVIDENCE_MAP]
- claim_id -> source_id
...

[UNSUPPORTED_CLAIMS]
- claim_id
...

[DISPOSITION]
- claim_id: dropped | pending_evidence | accepted_with_uncertainty
...

[OPEN_RISKS]
...

[WHAT_WOULD_CHANGE_THE_ANSWER]
...

[NEXT_VERIFICATION_STEPS]
...

Editorial read

Docs & README

Docs source

GITHUB OPENCLEW

Editorial quality

ready

Run multi-agent deliberation when a task has arguable alternatives, testable evaluation criteria, and a falsifiable evidence chain. Use this skill to improve final answer quality and robustness via a proposer/critic/checker/judge workflow with structured verdict, confidence, evidence map, residual risks, and verification steps.

Full README

name: multi-agent-deliberation
description: Run multi-agent deliberation when a task has arguable alternatives, testable evaluation criteria, and a falsifiable evidence chain. Use this skill to improve final answer quality and robustness via proposer/critic/checker/judge workflow with structured verdict, confidence, evidence map, residual risks, and verification steps.

Multi-Agent Deliberation (v1.1)

Overview

Use this skill to run a structured multi-agent deliberation workflow for quality-first answers.

Prioritize this skill only when debate adds value. Do not use it for simple factual queries or low-stakes prompts where a single agent is sufficient.

Workflow Decision Tree

  1. Validate activation gate.
  2. Define the decision target and testable criteria.
  3. Instantiate debate roles.
  4. Run bounded rounds with evidence constraints.
  5. Judge with fixed rubric.
  6. Emit final answer with structured state and risk controls.

Step 1: Validate Activation Gate

Require all three conditions before using multi-agent debate:

  1. Confirm arguable space.
  2. Confirm testable criteria.
  3. Confirm a falsifiable evidence chain.

If any condition is missing, do not spawn debate roles. Fall back to a single-agent response with STATUS=GATE_FAILED and list the missing conditions.

Load references/gate_and_controls.md for the gate checklist and fallback schema.
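As an illustration, the three gate conditions and the GATE_FAILED fallback can be sketched as a single predicate. The `GateInput` field names below are hypothetical, not part of the published skill contract:

```typescript
// Sketch of the Step 1 activation gate. Field names are illustrative;
// the authoritative checklist lives in references/gate_and_controls.md.
interface GateInput {
  hasArguableSpace: boolean;       // more than one defensible answer exists
  hasTestableCriteria: boolean;    // acceptance metrics can actually be evaluated
  hasFalsifiableEvidence: boolean; // claims can be checked against sources
}

interface GateResult {
  passed: boolean;
  status: "OK" | "GATE_FAILED";
  missing: string[];
}

function validateActivationGate(input: GateInput): GateResult {
  const missing: string[] = [];
  if (!input.hasArguableSpace) missing.push("arguable space");
  if (!input.hasTestableCriteria) missing.push("testable criteria");
  if (!input.hasFalsifiableEvidence) missing.push("falsifiable evidence chain");
  return {
    passed: missing.length === 0,
    status: missing.length === 0 ? "OK" : "GATE_FAILED",
    missing,
  };
}
```

A GATE_FAILED result carries the missing conditions, which the fallback single-agent response can list verbatim.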

Step 2: Define Evaluation Contract

Set these items before spawning roles:

  1. Define QUESTION and explicit CONSTRAINTS.
  2. Define acceptance metrics.
  3. Define source quality policy.
  4. Define budget limits (rounds, token, latency).

Use this default rubric unless the user specifies otherwise:

  • Accuracy: 40%
  • Completeness: 25%
  • Traceability: 20%
  • Robustness after rebuttal: 15%

Default source policy:

  • Tier A: primary sources (official docs, standards, original papers, authoritative datasets).
  • Tier B: reliable secondary synthesis.
  • Tier C: opinion/unverified content.
  • A critical claim is a claim that can change the conclusion direction, risk level, or recommended actions.
  • Critical claims require at least one Tier A source (high-stakes: two independent high-quality sources).
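The default rubric reduces to a weighted sum on a 0-100 scale. The sketch below mirrors the default weights above; the per-criterion scores passed in are illustrative inputs, not values the skill prescribes:

```typescript
// Weighted judge total with the default rubric: Accuracy 40%, Completeness 25%,
// Traceability 20%, Robustness after rebuttal 15%. Criterion scores are 0-100.
interface RubricScores {
  accuracy: number;
  completeness: number;
  traceability: number;
  robustness: number;
}

const DEFAULT_WEIGHTS: RubricScores = {
  accuracy: 0.4,
  completeness: 0.25,
  traceability: 0.2,
  robustness: 0.15,
};

function totalScore(scores: RubricScores, weights: RubricScores = DEFAULT_WEIGHTS): number {
  return (
    scores.accuracy * weights.accuracy +
    scores.completeness * weights.completeness +
    scores.traceability * weights.traceability +
    scores.robustness * weights.robustness
  );
}
```

Because the weights sum to 1, the total stays on the same 0-100 scale the Judge reports each round.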

Step 3: Instantiate Debate Roles

Create exactly five roles by default:

  1. Proposer-A: deliver candidate answer A with evidence map.
  2. Proposer-B: deliver independent candidate answer B with evidence map.
  3. Critic: attack both candidates and expose hidden assumptions.
  4. Evidence-Checker: verify claim-source consistency and mark unsupported claims.
  5. Judge: score and decide with fixed rubric.

Isolation rule:

  • A/B must generate first drafts in isolated contexts.
  • No cross-read of draft reasoning before Round 1 starts.

Load references/debate_playbook.md for reusable prompt blocks.

Step 4: Run Bounded Deliberation

Run phases with explicit counting:

  1. Phase 0 (not counted as a debate round): independent proposals (A/B).
  2. Round 1: critic cross-examination.
  3. Round 2: targeted rebuttal and evidence updates.
  4. Optional Round 3: only if new evidence appears or core factual disagreement remains unresolved.

Round budget definition:

  • Default debate rounds: 2 (Round 1-2).
  • Maximum debate rounds: 3.
  • Phase 0 is mandatory and excluded from round budget.

Stop rule:

  • Judge outputs total score on 0-100 scale each round.
  • score_gain = current_total_score - previous_total_score.
  • Stop when score_gain is < 1 point for two consecutive rounds, or when no new evidence appears.
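The stop rule can be checked mechanically from the judge's per-round totals. This sketch assumes the totals are collected in an array, which the skill itself does not prescribe:

```typescript
// Stop rule sketch: stop when score_gain < 1 for two consecutive rounds,
// or immediately when no new evidence appeared this round.
// roundScores holds the Judge's 0-100 total after each completed round.
function shouldStop(roundScores: number[], newEvidenceThisRound: boolean): boolean {
  if (!newEvidenceThisRound) return true;
  if (roundScores.length < 3) return false; // need two consecutive gains to compare
  const n = roundScores.length;
  const lastGain = roundScores[n - 1] - roundScores[n - 2];
  const prevGain = roundScores[n - 2] - roundScores[n - 3];
  return lastGain < 1 && prevGain < 1;
}
```

The maximum-round cap (3 debate rounds, excluding Phase 0) still applies on top of this check.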

Step 5: Judge and Decide

Force structured decision output:

  1. Provide SCORE_A and SCORE_B.
  2. Select winner or merged final with explicit merge rationale.
  3. Emit final answer with confidence and residual risks.
  4. Emit what evidence would change the decision.
  5. Emit unsupported claim disposition (dropped, pending_evidence, or accepted_with_uncertainty).

Never force consensus if evidence is weak.

Step 6: Output Contract

Always output this structure:

[STATUS]
DECIDED | MERGED | INSUFFICIENT_EVIDENCE | GATE_FAILED

[FINAL_ANSWER]
...

[SCORES]
SCORE_A: ...
SCORE_B: ...
SCORE_DELTA: ...

[CONFIDENCE_0_TO_100]
...

[KEY_EVIDENCE_MAP]
- claim_id -> source_id
...

[UNSUPPORTED_CLAIMS]
- claim_id
...

[DISPOSITION]
- claim_id: dropped | pending_evidence | accepted_with_uncertainty
...

[OPEN_RISKS]
...

[WHAT_WOULD_CHANGE_THE_ANSWER]
...

[NEXT_VERIFICATION_STEPS]
...
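A downstream agent consuming this contract can split it into named sections with a small parser. The only assumption here is the one the template itself makes: each section header appears alone on its own line in square brackets:

```typescript
// Parse the [SECTION] ... output contract into a section-name -> body map.
function parseOutputContract(text: string): Map<string, string> {
  const sections = new Map<string, string>();
  let current: string | null = null;
  let buffer: string[] = [];
  for (const line of text.split("\n")) {
    const header = line.match(/^\[([A-Z0-9_]+)\]\s*$/);
    if (header) {
      if (current !== null) sections.set(current, buffer.join("\n").trim());
      current = header[1]; // e.g. STATUS, FINAL_ANSWER, CONFIDENCE_0_TO_100
      buffer = [];
    } else if (current !== null) {
      buffer.push(line);
    }
  }
  if (current !== null) sections.set(current, buffer.join("\n").trim());
  return sections;
}
```

For example, `parseOutputContract("[STATUS]\nDECIDED\n\n[FINAL_ANSWER]\nUse option A.")` yields a map with STATUS mapped to "DECIDED".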

Operating Controls

  1. Cap debate rounds at 2 by default; max 3 (excluding Phase 0).
  2. Cap per-role response length to avoid verbosity bias.
  3. Require source IDs for all critical claims.
  4. Reject unsupported critical claims unless explicitly marked accepted_with_uncertainty.
  5. Enforce human review for all high-stakes tasks before release.

Load references/gate_and_controls.md for failure modes and mitigation rules.

Resource Loading Guide

Load only what is needed:

  1. Load references/research_findings_2026-02-25.md when deciding whether debate is justified and for parameter defaults.
  2. Load references/debate_playbook.md when you need role prompts or output templates.
  3. Load references/gate_and_controls.md when you need activation checks, stop rules, and risk governance.

API & Reliability

Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Machine interfaces

Contract & API

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples

curl -s "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/trust"

Operational fit

Reliability & Benchmarks

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Machine Appendix

Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.

Missing · GITHUB OPENCLEW

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T04:56:52.232Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
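Assuming a global fetch (Node 18 or later), the retryPolicy above can be honored with a small wrapper. `fetchWithRetry` and `HttpResponse` are illustrative names, and only the listed retryable conditions (HTTP 429, HTTP 503, transport failures) trigger another attempt:

```typescript
// Retry wrapper matching the retryPolicy payload: up to 3 attempts with
// 500/1500/3500 ms backoff between them. Non-retryable statuses return as-is.
interface HttpResponse {
  status: number;
}

const RETRY_POLICY = { maxAttempts: 3, backoffMs: [500, 1500, 3500] };

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchWithRetry(
  url: string,
  backoffMs: number[] = RETRY_POLICY.backoffMs,
): Promise<HttpResponse> {
  let lastError: unknown = new Error("no attempts made");
  for (let attempt = 0; attempt < RETRY_POLICY.maxAttempts; attempt++) {
    try {
      const res: HttpResponse = await (globalThis as any).fetch(url);
      if (res.status !== 429 && res.status !== 503) return res;
      lastError = new Error(`HTTP_${res.status}`);
    } catch (err) {
      lastError = err; // NETWORK_TIMEOUT and similar transport failures
    }
    if (attempt < RETRY_POLICY.maxAttempts - 1) await sleep(backoffMs[attempt]);
  }
  throw lastError;
}
```

For the snapshot endpoint this would be called as `fetchWithRetry("https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/snapshot")`; the backoff array is injectable mainly so tests can run without real delays.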

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "change",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:change|supported|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Lzjin426",
    "href": "https://github.com/Lzjin426/multi-agent-deliberation-skill",
    "sourceUrl": "https://github.com/Lzjin426/multi-agent-deliberation-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/lzjin426-multi-agent-deliberation-skill/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
