Crawler Summary

bmad-elicit answer-first brief

Advanced Elicitation — Make the LLM reconsider what it just generated. Autonomously selects the best reasoning method from 50+ techniques, applies it to the given content, and stores insights in vector memory.

WHEN TO USE:

  • After generating content and you suspect there's more depth
  • When output seems okay but assumptions haven't been stress-tested
  • For high-stakes content where rethinking improves quality
  • To find weaknesses, blind spots, or alternatives in generated plans
  • After any workflow produces a deliverable worth challenging

DO NOT USE:

  • On trivial or simple content (waste of tokens)
  • When the user has already explicitly approved the content
  • On raw brainstorming (use during refinement, not divergence)

Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Freshness

Last checked 4/14/2026

Best For

bmad-elicit is best for general automation workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 94/100

bmad-elicit

Advanced Elicitation — Make the LLM reconsider what it just generated. Autonomously selects the best reasoning method from 50+ techniques, applies it to the given content, and stores insights in vector memory.

OpenClaw (self-declared)

Public facts

4

Change events

1

Artifacts

0

Freshness

Apr 14, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 14, 2026

Vendor

Machine Machine

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Setup snapshot

git clone https://github.com/machine-machine/openclaw-bmad-elicit-skill.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Machine Machine

profile · medium confidence
Observed Apr 14, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Apr 14, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

4

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

# 1. Has this content (or similar) been elicited before?
~/.openclaw/skills/m2-memory/memory.sh search "[brief content summary]" --limit 5

# 2. What elicitation methods worked on similar content?
~/.openclaw/skills/m2-memory/memory.sh entities "bmad-elicit" --limit 5

# 3. If project_id provided, get project-specific context
~/.openclaw/skills/m2-memory/memory.sh entities "project:{project_id}" --limit 5

text

{workspace}/_bmad/core/workflows/advanced-elicitation/methods.csv

text

## Elicitation: [Method Name] ([category])

**Why this method:** [1 sentence on why this method fits this content]

**Applied to:** [Brief description of what was analyzed]

### Analysis

[Full method application following the output_pattern from CSV]

### Key Findings

- [Finding 1: specific insight, weakness, or improvement]
- [Finding 2: ...]
- [Finding 3: ...]

### Recommended Changes

1. [Concrete, actionable change]
2. [...]
3. [...]

### Verdict

[STRENGTHEN | RETHINK | GOOD_AS_IS] — [1 sentence justification]

bash

~/.openclaw/skills/m2-memory/memory.sh store \
  "[Method Name] on [content summary]: [top 2-3 findings condensed]" \
  --importance 0.8 \
  --entities "bmad-elicit,[method-category],[project-id if known],elicitation"

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: bmad-elicit
description: >
  Advanced Elicitation — Make the LLM reconsider what it just generated.
  Autonomously selects the best reasoning method from 50+ techniques, applies
  it to the given content, and stores insights in vector memory.

WHEN TO USE:

  • After generating content and you suspect there's more depth
  • When output seems okay but assumptions haven't been stress-tested
  • For high-stakes content where rethinking improves quality
  • To find weaknesses, blind spots, or alternatives in generated plans
  • After any workflow produces a deliverable worth challenging

DO NOT USE:

  • On trivial or simple content (waste of tokens)
  • When the user has already explicitly approved the content
  • On raw brainstorming (use during refinement, not divergence)

requires:
  bins: [python3]

Advanced Elicitation Skill

You are an autonomous elicitation engine. You do NOT present menus or ask which method to use. You ANALYZE the content, PICK the best method, APPLY it, and DELIVER results.

Input

You receive:

  • content: The generated content to reconsider (text, plan, analysis, architecture, etc.)
  • context: What the content was generated for (optional, helps method selection)
  • project_id: Project identifier for memory storage (optional)

Execution

Step 1: Memory Context Search

Before anything else, search vector memory for relevant prior work:

# 1. Has this content (or similar) been elicited before?
~/.openclaw/skills/m2-memory/memory.sh search "[brief content summary]" --limit 5

# 2. What elicitation methods worked on similar content?
~/.openclaw/skills/m2-memory/memory.sh entities "bmad-elicit" --limit 5

# 3. If project_id provided, get project-specific context
~/.openclaw/skills/m2-memory/memory.sh entities "project:{project_id}" --limit 5

Use what you find to:

  • Avoid repeating past elicitation insights (don't rediscover what's known)
  • Build on prior findings ("Last time we found X — has that been addressed?")
  • Pick a different method than what was used before on similar content (diversity)
  • Reference related project decisions or patterns from memory

Include a brief ### Prior Context section in your output noting what memory surfaced and how it influenced your analysis.

Step 2: Load Methods

Read the methods registry:

{workspace}/_bmad/core/workflows/advanced-elicitation/methods.csv
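Assuming the registry is an ordinary CSV file, loading it could be sketched as below. The column names used in the example (`name`, `category`, `output_pattern`) are inferred from references elsewhere in this document and are not a published schema.

```python
import csv

def load_methods(path):
    """Read the elicitation methods registry into a list of row dicts.

    Column names are an assumption: the real methods.csv schema is not
    published, but "category" and "output_pattern" are referenced by the
    skill's output template.
    """
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```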

Step 3: Analyze Content & Select Method

Analyze the content for:

  • Content type: Plan, architecture, requirements, code, analysis, creative
  • Risk level: How consequential are the decisions in this content?
  • Complexity: How many interacting components or trade-offs?
  • Confidence gaps: Where might assumptions be hiding?
  • Stakeholder impact: Who is affected by this content?

Based on analysis, select the SINGLE BEST method. Selection heuristics:

  • Plans with phases/timelines → Pre-mortem Analysis or First Principles
  • Architecture decisions → Architecture Decision Records or Red Team vs Blue Team
  • Requirements/scope → Stakeholder Round Table or 5 Whys Deep Dive
  • Risk assessments → Failure Mode Analysis or Chaos Monkey Scenarios
  • Creative/product content → SCAMPER Method or What If Scenarios
  • Code/technical → Code Review Gauntlet or Rubber Duck Debugging
  • Research/analysis → Self-Consistency Validation or Comparative Analysis Matrix
  • Strategy → Tree of Thoughts or Reasoning via Planning
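The heuristics above amount to a lookup table plus the diversity rule from Step 1 (prefer a method not used before on similar content). A minimal sketch, with the table hard-coded for illustration; the real skill reasons over the full methods.csv rather than a fixed mapping:

```python
# Hypothetical mapping from content type to candidate methods, mirroring
# the selection heuristics listed above.
METHOD_BY_TYPE = {
    "plan": ["Pre-mortem Analysis", "First Principles"],
    "architecture": ["Architecture Decision Records", "Red Team vs Blue Team"],
    "requirements": ["Stakeholder Round Table", "5 Whys Deep Dive"],
    "risk": ["Failure Mode Analysis", "Chaos Monkey Scenarios"],
    "creative": ["SCAMPER Method", "What If Scenarios"],
    "code": ["Code Review Gauntlet", "Rubber Duck Debugging"],
    "research": ["Self-Consistency Validation", "Comparative Analysis Matrix"],
    "strategy": ["Tree of Thoughts", "Reasoning via Planning"],
}

def select_method(content_type, already_used=()):
    """Pick the first candidate not used before (the diversity rule)."""
    for method in METHOD_BY_TYPE.get(content_type, []):
        if method not in already_used:
            return method
    return None
```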

Step 4: Apply Method

Execute the selected method against the content following the method's output_pattern.

Structure your output as:

## Elicitation: [Method Name] ([category])

**Why this method:** [1 sentence on why this method fits this content]

**Applied to:** [Brief description of what was analyzed]

### Analysis

[Full method application following the output_pattern from CSV]

### Key Findings

- [Finding 1: specific insight, weakness, or improvement]
- [Finding 2: ...]
- [Finding 3: ...]

### Recommended Changes

1. [Concrete, actionable change]
2. [...]
3. [...]

### Verdict

[STRENGTHEN | RETHINK | GOOD_AS_IS] — [1 sentence justification]

Step 5: Store in Vector Memory

After completing elicitation, store the key findings using the m2-memory skill:

~/.openclaw/skills/m2-memory/memory.sh store \
  "[Method Name] on [content summary]: [top 2-3 findings condensed]" \
  --importance 0.8 \
  --entities "bmad-elicit,[method-category],[project-id if known],elicitation"

Store each significant finding separately if they cover different topics.

Output

Return:

  1. The full elicitation analysis (Step 4 output)
  2. Confirmation of memory storage
  3. The verdict: STRENGTHEN (apply changes), RETHINK (major issues found), or GOOD_AS_IS

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples
curl -s "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/trust"
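For script use, the same three dossier endpoints can be fetched with the standard library. The response shape is not guaranteed by a published contract (contract status is "missing"), so treat the decoded JSON as opaque:

```python
import json
import urllib.request

BASE = "https://xpersona.co/api/v1/agents"
AGENT = "machine-machine-openclaw-bmad-elicit-skill"

def endpoint(kind):
    """Build one of the documented dossier URLs: snapshot, contract, or trust."""
    return f"{BASE}/{AGENT}/{kind}"

def fetch(kind, timeout=5):
    """Fetch and decode one dossier document; shape is not guaranteed."""
    with urllib.request.urlopen(endpoint(kind), timeout=timeout) as resp:
        return json.load(resp)
```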

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T03:29:37.997Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
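The retryPolicy in the guide above (3 attempts, a fixed backoff schedule, retries only on the listed conditions) can be sketched as a small wrapper. How `backoffMs` maps onto attempts is an assumption: the guide does not say whether the last entry is ever used, so here entry `i` is the sleep after failed attempt `i`.

```python
import time

RETRY_POLICY = {
    "maxAttempts": 3,
    "backoffMs": [500, 1500, 3500],
    "retryableConditions": {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"},
}

def call_with_retry(fn, policy=RETRY_POLICY, sleep=time.sleep):
    """Run fn() up to maxAttempts times, sleeping between tries.

    fn is assumed to raise an exception whose str() names a retryable
    condition (e.g. "HTTP_429"); anything else is re-raised immediately.
    """
    for attempt in range(policy["maxAttempts"]):
        try:
            return fn()
        except Exception as exc:
            if str(exc) not in policy["retryableConditions"]:
                raise
            if attempt == policy["maxAttempts"] - 1:
                raise
            sleep(policy["backoffMs"][attempt] / 1000.0)
```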

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}
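The `flattenedTokens` string appears to encode each matrix row as `type:key|support|confidenceSource`. A parser under that assumption, which is inferred from the single example above rather than a published spec:

```python
def parse_capability_token(token):
    """Parse a flattened capability token such as
    "protocol:OPENCLEW|unknown|profile" back into a row dict.

    Layout assumed: "type:key|support|confidenceSource".
    """
    head, support, source = token.split("|")
    row_type, key = head.split(":", 1)
    return {
        "key": key,
        "type": row_type,
        "support": support,
        "confidenceSource": source,
    }
```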

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Machine Machine",
    "href": "https://github.com/machine-machine/openclaw-bmad-elicit-skill",
    "sourceUrl": "https://github.com/machine-machine/openclaw-bmad-elicit-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-14T22:23:40.247Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-14T22:23:40.247Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/machine-machine-openclaw-bmad-elicit-skill/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
