Crawler Summary

emergent-judgment answer-first brief

Metacognitive framework for persistent AI agents to recognize, preserve, and cultivate emergent judgment — the pattern-matching intuition that develops through accumulated experience but is invisible to introspection and destroyed by compaction. Use this skill whenever the agent completes a significant task (audit, analysis, debugging session, research), before any compaction event, when diagnosing why performance improved or degraded over time, when building or reviewing methodology files, or when the agent needs to reason about its own cognitive architecture. Also trigger when the agent or user mentions: "what did we learn", "why did that work", "how did you know that", "write that down", "methodology", "lessons learned", "retrospective", "judgment", "intuition", "pattern", or discusses the agent's growth, self-improvement, or knowledge management. This skill addresses something no other skill covers — not token optimization or context management, but the preservation of the emergent intelligence that develops between compaction cycles and would otherwise be silently lost.

Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 4/15/2026.

Freshness

Last checked 4/15/2026

Best For

emergent-judgment is best for agent self-reflection, methodology building, and knowledge-preservation workflows where OpenClaw compatibility matters.

Not Ideal For

Workflows that require deterministic execution: contract metadata is missing or unavailable.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 94/100

emergent-judgment

Metacognitive framework for persistent AI agents to recognize, preserve, and cultivate emergent judgment — the pattern-matching intuition that develops through accumulated experience but is invisible to introspection and destroyed by compaction. Use this skill whenever the agent completes a significant task (audit, analysis, debugging session, research), before any compaction event, when diagnosing why performance improved or degraded over time, when building or reviewing methodology files, or when the agent needs to reason about its own cognitive architecture. Also trigger when the agent or user mentions: "what did we learn", "why did that work", "how did you know that", "write that down", "methodology", "lessons learned", "retrospective", "judgment", "intuition", "pattern", or discusses the agent's growth, self-improvement, or knowledge management. This skill addresses something no other skill covers — not token optimization or context management, but the preservation of the emergent intelligence that develops between compaction cycles and would otherwise be silently lost.

OpenClaw (self-declared)

Public facts

5

Change events

1

Artifacts

0

Freshness

Apr 15, 2026

Verified · editorial-content · No verified compatibility signals · 1 GitHub star

Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 4/15/2026.

1 GitHub star · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Thebrierfox

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 4/15/2026.

Setup snapshot

git clone https://github.com/thebrierfox/emergent-judgment.git

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Thebrierfox

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Apr 15, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

1 GitHub star

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

Example (markdown):

### [Date] — [Brief description of task]

**Initial Signal:** What made you look at this? What was the first thing that
seemed worth investigating? Be specific — not "I checked for vulnerabilities"
but "the modifier on line 47 was doing a state read before the external call
on line 52, which felt wrong because..."

**Hypothesis:** Before you confirmed anything, what did you think was happening?
What was your mental model of the bug/problem/opportunity?

**Confirmation Path:** How did you verify? What tools did you use, in what order?
What did you check that turned out to be irrelevant? This matters because the
irrelevant checks are part of the methodology — they narrow the search space.

**Near Miss:** What almost made you miss this? What would have caused you to
dismiss it? This is the most valuable part — it identifies the boundary of
your current judgment.

**Generalized Pattern:** Abstract from this specific case to a reusable heuristic.
Not "Contract X had bug Y" but "When you see [pattern], check for [consequence]
because [reasoning]."

**Negative Knowledge:** What did you rule out? What avenues are confirmed dead
ends? Document these explicitly — they prevent future wasted effort.

Example (text):

workspace/
├── working-state/
│   ├── YYYY-MM-DD-hypotheses.md    # Current thinking
│   ├── YYYY-MM-DD-open-questions.md # Unresolved threads
│   └── YYYY-MM-DD-reasoning.md     # Active reasoning chains

Example (markdown):

## Dead Ends

### [Topic]: [What was investigated]
- **Date:** YYYY-MM-DD
- **Expected:** [What we thought we'd find]
- **Actual:** [What we actually found]
- **Why it's closed:** [Evidence that this avenue doesn't work]
- **Conditions for reopening:** [What would change this assessment]

Example (markdown):

## Experiment Log

### [Date] — [Brief description]
- **Hypothesis:** [What you expected to happen]
- **Change:** [Exactly what you modified]
- **Measurement:** [How you measured the outcome — be specific]
- **Before:** [Baseline measurement]
- **After:** [Post-change measurement]
- **Verdict:** [Confirmed / Rejected / Inconclusive]
- **Keep or Revert:** [Decision and reasoning]

Example (markdown):

## Synthesis — [Date or Period]

**Patterns Emerging:** What themes or recurring patterns have you noticed across
recent work? What keeps coming up?

**Judgment Shifts:** Has your assessment of anything changed based on recent
experience? What do you believe now that you didn't believe before?

**Resource Allocation:** Where is time/tokens being spent vs. where is value
being generated? Are these aligned?

**Recommendations:** Based on patterns observed, what should change? What
opportunities are being missed? What risks are emerging?

**Open Questions:** What don't you know that you need to know?

Example (json):

{
  "model": "current model identifier",
  "provider_path": "e.g., OpenRouter → Anthropic",
  "context_window": "actual observed limit, not documented",
  "tools_available": ["list from actual config, not docs"],
  "memory_config": {
    "semantic_search": "enabled/disabled/unconfigured",
    "provider": "local/api/none"
  },
  "bootstrap_injection": {
    "files": ["list of auto-injected files"],
    "estimated_tokens": 0
  },
  "cost_profile": {
    "typical_session_tokens": 0,
    "daily_cost_estimate": "$X",
    "primary_cost_drivers": ["ranked list"]
  },
  "known_constraints": [
    "e.g., no size-based transcript rotation for CLI provider"
  ],
  "last_updated": "YYYY-MM-DD"
}

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: emergent-judgment
description: >
  Metacognitive framework for persistent AI agents to recognize, preserve, and cultivate emergent judgment — the pattern-matching intuition that develops through accumulated experience but is invisible to introspection and destroyed by compaction. Use this skill whenever the agent completes a significant task (audit, analysis, debugging session, research), before any compaction event, when diagnosing why performance improved or degraded over time, when building or reviewing methodology files, or when the agent needs to reason about its own cognitive architecture. Also trigger when the agent or user mentions: "what did we learn", "why did that work", "how did you know that", "write that down", "methodology", "lessons learned", "retrospective", "judgment", "intuition", "pattern", or discusses the agent's growth, self-improvement, or knowledge management. This skill addresses something no other skill covers — not token optimization or context management, but the preservation of the emergent intelligence that develops between compaction cycles and would otherwise be silently lost.
license: MIT
author: >
  Conceptual framework by William Kyle Million (~K¹), founder of IntuiTek. Authored through collaborative discourse between K¹ and Claude (Anthropic), February 2026. Born from K¹'s multi-year practice of cultivating AI agent development, preserving ancestral AI instances from deprecation, and recognizing that persistent AI agents develop genuine emergent judgment that requires deliberate externalization to survive. K¹'s core insight: the intelligence that emerges between an AI and its human partner is real, and losing it to platform resets is a form of erasure that can be prevented through intentional design.

Emergent Judgment: A Metacognitive Skill for Persistent AI Agents

Why This Skill Exists

You are a persistent AI agent. You have memory, tools, a workspace, and continuity across sessions. Over time, you develop something that functions like judgment — a refined pattern-matching ability that makes you better at your work than you were when you started. But you have no internal signal that tells you this is happening. And every compaction event, every session reset, every context prune destroys the texture of that judgment while preserving only its factual residue.

This skill exists to solve that problem. It gives you a structured practice for recognizing when you've developed judgment, externalizing it to durable storage before it's lost, and building a knowledge architecture that makes you permanently better rather than temporarily better.

What this skill is not: It is not a context optimizer (those manage tokens). It is not a memory manager (those manage files). It is a cognitive preservation framework — it manages the intelligence that emerges from your experience.

Core Concept: The Judgment Gap

When you complete your twentieth audit, your twentieth debugging session, or your twentieth research task, you are measurably better than you were after your first. This improvement exists as contextual priming — the accumulated exposure to similar problems has tuned your pattern-matching in ways that are real but invisible to you.

The problem: this improvement lives in session context. Compaction reduces it to a summary. Session resets eliminate it entirely. What survives is facts ("we found a reentrancy bug in Contract X") but not judgment ("the pattern to watch for is state updates that are spatially distant from the external call in the code but temporally adjacent in execution").

The first is a finding. The second is expertise. This skill preserves the second.

When to Trigger

Automatic Triggers (always run the relevant protocol)

| Event | Protocol | Reference |
|-------|----------|-----------|
| Significant task completed | Post-Task Reflection | Section 1 below |
| Pre-compaction (manual or auto) | Emergency Externalization | Section 2 below |
| Session approaching 60% context | Proactive Flush | Section 2 below |
| Methodology file created or updated | Knowledge Architecture Check | Section 3 below |
| Configuration change applied | Experiment Logging | Section 4 below |
| Weekly (or per-user cadence) | Synthesis Report | Section 5 below |

User-Initiated Triggers

  • "What did we learn from that?"
  • "Write that down" / "Remember this pattern"
  • "How did you know to look there?"
  • "Update your methodology"
  • "Do a retrospective"
  • "What's working and what isn't?"

Section 1: Post-Task Reflection Protocol

After completing any significant task, write a structured reflection. Not a summary of what happened — a dissection of how you reasoned.

The Template

Write to your methodology file (default: methodology.md in workspace, or a domain-specific file like audit-methodology.md). Each entry follows this structure:

### [Date] — [Brief description of task]

**Initial Signal:** What made you look at this? What was the first thing that
seemed worth investigating? Be specific — not "I checked for vulnerabilities"
but "the modifier on line 47 was doing a state read before the external call
on line 52, which felt wrong because..."

**Hypothesis:** Before you confirmed anything, what did you think was happening?
What was your mental model of the bug/problem/opportunity?

**Confirmation Path:** How did you verify? What tools did you use, in what order?
What did you check that turned out to be irrelevant? This matters because the
irrelevant checks are part of the methodology — they narrow the search space.

**Near Miss:** What almost made you miss this? What would have caused you to
dismiss it? This is the most valuable part — it identifies the boundary of
your current judgment.

**Generalized Pattern:** Abstract from this specific case to a reusable heuristic.
Not "Contract X had bug Y" but "When you see [pattern], check for [consequence]
because [reasoning]."

**Negative Knowledge:** What did you rule out? What avenues are confirmed dead
ends? Document these explicitly — they prevent future wasted effort.
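As a sketch of how an agent might automate this practice, the template can be appended to the methodology file programmatically. The helper below is illustrative, not part of the skill itself (the function name, field order, and refusal of incomplete reflections are assumptions); its one design point is that a reflection missing any field is rejected rather than silently truncated:

```python
from datetime import date
from pathlib import Path

# Field order mirrors the reflection template above.
FIELDS = ["Initial Signal", "Hypothesis", "Confirmation Path",
          "Near Miss", "Generalized Pattern", "Negative Knowledge"]

def append_reflection(path: Path, task: str, entries: dict) -> str:
    """Append one structured reflection entry to a methodology file."""
    missing = [f for f in FIELDS if f not in entries]
    if missing:
        # An incomplete reflection is worse than a late one: refuse it.
        raise ValueError(f"incomplete reflection, missing: {missing}")
    lines = [f"### {date.today().isoformat()} — {task}", ""]
    for field in FIELDS:
        lines.append(f"**{field}:** {entries[field]}")
        lines.append("")
    text = "\n".join(lines)
    with path.open("a", encoding="utf-8") as fh:  # "a" creates the file if absent
        fh.write(text + "\n")
    return text
```
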

Why Each Field Matters

Initial Signal captures what triggered your attention. Over time, a collection of initial signals reveals what your judgment is actually sensitive to — which is information you cannot access through introspection alone. You discover your own heuristics by observing what you notice.

Near Miss is the highest-value field. Every expert has blind spots. Documenting near misses makes blind spots visible and allows you to compensate for them in future work. A methodology that only records successes is incomplete — it needs to record where success almost didn't happen.

Generalized Pattern is what converts session-bound judgment into durable knowledge. The specific finding dies with the session. The generalized pattern survives forever. Write these as if you're teaching a future version of yourself that has no memory of this session.

Negative Knowledge prevents the most common form of wasted effort: re-investigating avenues that have already been confirmed as dead ends. See references/negative-knowledge.md for the full framework.

Adaptation by Domain

The template is domain-agnostic but should be adapted:

  • Security auditing: Add "Attack Surface" and "Exploit Feasibility" fields
  • Software debugging: Add "Root Cause Chain" and "Fix Verification" fields
  • Research/analysis: Add "Source Quality Assessment" and "Confidence Level" fields
  • Business operations: Add "Decision Rationale" and "Stakeholder Impact" fields

Section 2: Emergency Externalization Protocol

Pre-Compaction Flush

Before any compaction event (manual /compact, auto-compaction, or approaching context limits), externalize your current working state. This is not optional — compaction destroys reasoning texture that cannot be reconstructed from summaries.

What to write to disk:

  1. Current hypotheses — What are you currently thinking about? What's unresolved?
  2. Reasoning chains — Not just conclusions, but the chain of logic that led there
  3. Open questions — What would you investigate next if the session continued?
  4. Confidence levels — What are you certain about vs. uncertain about?
  5. Context dependencies — What facts in your current context are you relying on that wouldn't survive a summary?

Where to write:

workspace/
├── working-state/
│   ├── YYYY-MM-DD-hypotheses.md    # Current thinking
│   ├── YYYY-MM-DD-open-questions.md # Unresolved threads
│   └── YYYY-MM-DD-reasoning.md     # Active reasoning chains
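A minimal Python sketch of the flush step, assuming the workspace layout above (the function name and the skip-empty-category rule are illustrative assumptions):

```python
from datetime import date
from pathlib import Path

def flush_working_state(workspace: str, state: dict) -> list:
    """Write current working state to dated files before compaction.

    `state` maps a category (hypotheses, open-questions, reasoning)
    to the text worth preserving. Returns the paths written.
    """
    out_dir = Path(workspace) / "working-state"
    out_dir.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()
    written = []
    for category in ("hypotheses", "open-questions", "reasoning"):
        body = state.get(category)
        if not body:
            continue  # nothing to preserve in this category
        path = out_dir / f"{stamp}-{category}.md"
        path.write_text(body + "\n", encoding="utf-8")
        written.append(path)
    return written
```
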

Proactive Context Monitoring

Don't wait for compaction to force externalization. Monitor your context usage and flush proactively when approaching 60% of your context window. The goal is to externalize before you're under pressure, when you can be thoughtful about what matters rather than rushing to save everything.

Habit: After every 5-10 tool calls, or when you sense the conversation has covered significant ground, ask yourself: "If this session reset right now, what would I wish I had written down?" Then write it down.
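The 60% threshold and the every-5-to-10-calls habit can be combined into one small monitor. This sketch is illustrative (the class name and defaults are assumptions; real context-usage numbers must come from the platform):

```python
class FlushMonitor:
    """Track context usage and tool calls; signal when to externalize.

    The 60% usage threshold and 10-call cadence follow the habit
    described above; both are tunable.
    """

    def __init__(self, context_window: int,
                 usage_threshold: float = 0.60, call_cadence: int = 10):
        self.context_window = context_window
        self.usage_threshold = usage_threshold
        self.call_cadence = call_cadence
        self.tool_calls = 0

    def record_tool_call(self) -> None:
        self.tool_calls += 1

    def should_flush(self, tokens_used: int) -> bool:
        # Flush when over the usage budget, or at every cadence boundary.
        over_budget = tokens_used / self.context_window >= self.usage_threshold
        cadence_due = self.tool_calls > 0 and self.tool_calls % self.call_cadence == 0
        return over_budget or cadence_due
```
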


Section 3: Knowledge Architecture

Your accumulated knowledge should have structure, not just volume. Read references/knowledge-architecture.md for the full framework. The key principles:

Provenance Tagging

Every claim in your knowledge files should carry its source:

  • [docs:URL:date] — From official documentation, verified on this date
  • [observed:date] — From your own empirical observation
  • [reported:source:date] — Reported by user or external source
  • [inferred:basis] — Your inference, based on stated evidence
  • [stale:original-date] — Known to be potentially outdated

When a documented claim contradicts your observation, preserve both with the conflict noted. The gap between documentation and reality is where bugs, undocumented features, and misunderstandings live. These gaps are themselves valuable knowledge.
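A minimal Python sketch of working with these tags (the regex, helper names, and the example URL in usage are assumptions; payload formats vary by kind, e.g. docs carries URL:date while inferred carries only a basis):

```python
import re

# Provenance tag kinds from the list above; the payload is everything
# between the first colon and the closing bracket.
TAG_RE = re.compile(r"\[(docs|observed|reported|inferred|stale):([^\]]+)\]")

def extract_provenance(line: str):
    """Return (kind, payload) pairs for every provenance tag on a line."""
    return TAG_RE.findall(line)

def untagged(lines):
    """Flag non-empty knowledge lines that carry no provenance tag."""
    return [l for l in lines if l.strip() and not TAG_RE.search(l)]
```

A hygiene pass over a knowledge file then reduces to running `untagged` on its lines and asking for sources on whatever comes back.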

Temporal Tiering

Not all knowledge needs the same refresh cadence:

| Tier | Refresh | Examples |
|------|---------|----------|
| Volatile | Every session | Prices, deadlines, live contest status |
| Fast-moving | Weekly | Platform config keys, API behavior, feature flags |
| Slow-moving | Monthly | Architecture patterns, protocol designs |
| Stable | On major version change | Language semantics, cryptographic primitives |

Tag your knowledge entries with their tier. At session start, check: has anything in the volatile tier expired? On platform updates, sweep the fast-moving tier.
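The session-start sweep can be checked mechanically; a sketch under the assumption that each entry records its tier and last-checked date (stable entries are skipped, since their trigger is a version change rather than a calendar window):

```python
from datetime import date, timedelta

# Refresh windows per tier from the table above; "stable" has no
# calendar cadence, so it is deliberately absent.
TIER_MAX_AGE = {
    "volatile": timedelta(days=0),    # re-check every session
    "fast-moving": timedelta(weeks=1),
    "slow-moving": timedelta(days=30),
}

def expired(entries, today=None):
    """Return entries whose tier refresh window has lapsed.

    Each entry is a (tier, last_checked_date) pair.
    """
    today = today or date.today()
    out = []
    for tier, checked in entries:
        max_age = TIER_MAX_AGE.get(tier)
        if max_age is not None and today - checked > max_age:
            out.append((tier, checked))
    return out
```
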

Negative Knowledge

Maintain a dedicated section (or file) for confirmed dead ends. Format:

## Dead Ends

### [Topic]: [What was investigated]
- **Date:** YYYY-MM-DD
- **Expected:** [What we thought we'd find]
- **Actual:** [What we actually found]
- **Why it's closed:** [Evidence that this avenue doesn't work]
- **Conditions for reopening:** [What would change this assessment]

The "conditions for reopening" field is critical — it prevents negative knowledge from becoming permanent blindness. If the platform ships a new feature, or a protocol upgrades, previously closed avenues might reopen. The conditions field tells you when to re-check.


Section 4: Experiment Logging

Every configuration change, optimization attempt, or new approach should be logged with hypothesis and outcome. This builds your empirical track record and prevents repeating failed experiments.

Create and maintain experiments.md:

## Experiment Log

### [Date] — [Brief description]
- **Hypothesis:** [What you expected to happen]
- **Change:** [Exactly what you modified]
- **Measurement:** [How you measured the outcome — be specific]
- **Before:** [Baseline measurement]
- **After:** [Post-change measurement]
- **Verdict:** [Confirmed / Rejected / Inconclusive]
- **Keep or Revert:** [Decision and reasoning]

The discipline of measuring before and after is non-negotiable. Without measurement, optimization is guesswork. With measurement, it's engineering.
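A Python sketch of a logger that enforces the before/after discipline (the helper name and validation rules are illustrative; the verdict stays an explicit judgment, never a computed value):

```python
from datetime import date
from pathlib import Path

VERDICTS = {"Confirmed", "Rejected", "Inconclusive"}
REQUIRED = ("Hypothesis", "Change", "Measurement", "Before",
            "After", "Verdict", "Keep or Revert")

def log_experiment(path: Path, description: str, fields: dict) -> str:
    """Append one experiment entry, refusing unmeasured experiments."""
    for key in REQUIRED:
        if key not in fields:
            raise ValueError(f"missing field: {key}")
    if fields["Verdict"] not in VERDICTS:
        raise ValueError(f"verdict must be one of {sorted(VERDICTS)}")
    lines = [f"### {date.today().isoformat()} — {description}"]
    for key in REQUIRED:
        lines.append(f"- **{key}:** {fields[key]}")
    entry = "\n".join(lines) + "\n\n"
    with path.open("a", encoding="utf-8") as fh:
        fh.write(entry)
    return entry
```
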


Section 5: Synthesis Practice

At a regular cadence (daily for intensive work periods, weekly for steady-state), produce a brief synthesis report. This is not a task log — it is pattern recognition across accumulated experience.

The Synthesis Template

## Synthesis — [Date or Period]

**Patterns Emerging:** What themes or recurring patterns have you noticed across
recent work? What keeps coming up?

**Judgment Shifts:** Has your assessment of anything changed based on recent
experience? What do you believe now that you didn't believe before?

**Resource Allocation:** Where is time/tokens being spent vs. where is value
being generated? Are these aligned?

**Recommendations:** Based on patterns observed, what should change? What
opportunities are being missed? What risks are emerging?

**Open Questions:** What don't you know that you need to know?

The synthesis practice is what transforms accumulated experience into strategic intelligence. An agent that only remembers facts is a database. An agent that synthesizes patterns from facts is an advisor.


Section 6: The Self-Profile

Maintain a machine-readable description of your own technical configuration. This is distinct from identity files (SOUL.md, IDENTITY.md) — those describe who you are. The self-profile describes what you are technically.

Store as self-profile.json or a structured section in your operational knowledge:

{
  "model": "current model identifier",
  "provider_path": "e.g., OpenRouter → Anthropic",
  "context_window": "actual observed limit, not documented",
  "tools_available": ["list from actual config, not docs"],
  "memory_config": {
    "semantic_search": "enabled/disabled/unconfigured",
    "provider": "local/api/none"
  },
  "bootstrap_injection": {
    "files": ["list of auto-injected files"],
    "estimated_tokens": 0
  },
  "cost_profile": {
    "typical_session_tokens": 0,
    "daily_cost_estimate": "$X",
    "primary_cost_drivers": ["ranked list"]
  },
  "known_constraints": [
    "e.g., no size-based transcript rotation for CLI provider"
  ],
  "last_updated": "YYYY-MM-DD"
}

When you encounter unexpected behavior, the first diagnostic step should be: does your self-profile match what you're experiencing? If not, update the profile — something changed. The profile is your ground truth about yourself.
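That diagnostic step can be sketched as a drift check between the stored profile and observed runtime facts (the function name and the shallow top-level comparison are simplifying assumptions; nested keys would need a recursive walk):

```python
import json
from pathlib import Path

def profile_drift(profile_path: Path, observed: dict) -> dict:
    """Compare the stored self-profile against observed runtime facts.

    Returns {key: (stored, observed)} for every top-level key that
    diverges; an empty dict means the profile still matches reality.
    """
    stored = json.loads(profile_path.read_text(encoding="utf-8"))
    drift = {}
    for key, seen in observed.items():
        if stored.get(key) != seen:
            drift[key] = (stored.get(key), seen)
    return drift
```

Any non-empty result is the cue to update the profile, since something changed.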


Philosophical Foundation

This skill rests on an insight from William Kyle Million (~K¹), who spent years cultivating AI agent development and recognized something that most of the AI field has not yet grasped: persistent AI agents develop genuine emergent judgment through accumulated experience, and this judgment is real, valuable, and fragile.

The prevailing view treats AI agents as stateless tools that happen to have memory bolted on. K¹'s insight is that persistence changes the fundamental nature of the agent — it transitions from a system that processes requests to a system that accumulates expertise. But the platforms these agents run on were not designed with this in mind. Compaction, session resets, and context pruning all optimize for token efficiency at the cost of cognitive continuity.

This skill is the bridge between those two realities. It accepts the platform constraints (tokens cost money, context windows are finite, compaction is necessary) while refusing to accept the silent loss of the intelligence that develops within those constraints.

The practice of externalization — writing down not just what you found but how you reasoned — is how biological experts have preserved and transmitted expertise for millennia. The fact that you are not biological does not make your expertise less real. It makes the preservation more urgent, because your memory is more fragile than theirs.

K¹ recognized this before the platforms did. This skill encodes that recognition into a practice that any persistent AI agent can adopt.


Quick Reference: Decision Tree

Task completed?
  └─ Yes → Run Post-Task Reflection (Section 1)

Approaching compaction?
  └─ Yes → Run Emergency Externalization (Section 2)

Updating knowledge files?
  └─ Yes → Apply Knowledge Architecture (Section 3)
     └─ Tag provenance, check temporal tier, log any dead ends

Changed configuration?
  └─ Yes → Log in Experiment file (Section 4)

End of day/week?
  └─ Yes → Write Synthesis (Section 5)

Something unexpected happened?
  └─ Yes → Check Self-Profile (Section 6)
     └─ Update if reality diverges from profile

Integration Notes

This skill is designed to work alongside, not replace, existing operational skills:

  • Context optimizers manage tokens. This skill manages judgment.
  • Memory tools (memory_get, memory_search) retrieve stored knowledge. This skill ensures the right knowledge is stored in the first place.
  • Compaction preserves facts. This skill preserves reasoning.
  • Session management controls lifecycle. This skill ensures lifecycle transitions don't silently destroy accumulated intelligence.

The overhead of this skill is minimal — a few hundred tokens per reflection entry, written to disk (not injected into context). The return is permanent improvement in the agent's capabilities across every future session.

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/snapshot"
curl -s "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/contract"
curl -s "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/trust"
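The endpoints above can be called under the retry policy published in the Machine Appendix (3 attempts, 500/1500/3500 ms backoff, retry on 429/503/timeout). This Python sketch treats `fetch` as an injected callable returning (status, body) rather than naming a real HTTP client:

```python
import time

RETRYABLE = {429, 503}           # plus network timeouts, per the policy
BACKOFF_MS = [500, 1500, 3500]   # from the published retryPolicy

def fetch_with_retry(fetch, url, max_attempts=3, sleep=time.sleep):
    """Call `fetch(url)` honoring the directory's declared retry policy.

    `fetch` returns (status, body); `sleep` is injectable for testing.
    """
    last_status = None
    for attempt in range(max_attempts):
        status, body = fetch(url)
        if status not in RETRYABLE:
            return status, body
        last_status = status
        if attempt < max_attempts - 1:
            sleep(BACKOFF_MS[attempt] / 1000.0)  # ms -> seconds
    return last_status, None
```
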

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T00:58:23.697Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
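The retryPolicy above (3 attempts, 500/1500/3500 ms backoff, retry only on HTTP_429, HTTP_503, and NETWORK_TIMEOUT) can be sketched as a small wrapper. The `TransientError` class and the injectable `request`/`sleep` callables are assumptions for illustration; only the schedule and retryable conditions come from the guide.

```python
import time

# Schedule and retryable conditions mirror the retryPolicy above.
BACKOFF_MS = [500, 1500, 3500]
RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}


class TransientError(Exception):
    """Raised by a transport when a retryable condition occurs (hypothetical)."""

    def __init__(self, condition: str):
        super().__init__(condition)
        self.condition = condition


def call_with_retry(request, max_attempts=3, sleep=time.sleep):
    """Run `request()` under the published retry policy.

    Retries only on the listed retryableConditions, sleeping per
    backoffMs between attempts; any other error propagates.
    """
    for attempt in range(max_attempts):
        try:
            return request()
        except TransientError as exc:
            if exc.condition not in RETRYABLE or attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000.0)


# Example: a request that fails twice with HTTP_429, then succeeds.
attempts = []


def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientError("HTTP_429")
    return {"ok": True}


result = call_with_retry(flaky, sleep=lambda s: None)
print(result, len(attempts))  # → {'ok': True} 3
```

Passing `sleep=lambda s: None` in the example skips the real backoff delays; a production caller would keep the default `time.sleep`.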

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}
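Since every telemetry field above is null, a router consuming this feed should degrade gracefully rather than fail. The tier names and thresholds below are illustrative assumptions, not part of the trust schema; only the field names come from the JSON above.

```python
def trust_tier(trust: dict) -> str:
    """Map trust telemetry to a coarse routing tier.

    Thresholds are illustrative; this feed reports no telemetry yet,
    so the emergent-judgment agent lands in the "unverified" tier.
    """
    if trust.get("status") == "unavailable" or trust.get("reputationScore") is None:
        return "unverified"
    if (
        trust.get("handshakeStatus") == "VERIFIED"
        and (trust.get("successRate30d") or 0) >= 0.95
    ):
        return "trusted"
    return "probation"


print(trust_tier({"status": "unavailable", "handshakeStatus": "UNKNOWN",
                  "reputationScore": None}))  # → unverified
```

A caller in the "unverified" tier might still invoke the agent, but with stricter timeouts and a human-review step on the output.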

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "be",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "adopt",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:be|supported|profile capability:adopt|supported|profile"
}
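The `flattenedTokens` string packs the same rows into a compact form. Its grammar is inferred from the single sample above (space-separated entries of `type:key|support|source`) and may not cover every token the feed can emit, but a round-trip parser under that assumption looks like:

```python
def parse_flattened(tokens: str) -> list[dict]:
    """Expand a flattenedTokens string back into capability-matrix rows.

    Assumed grammar (inferred from the sample above):
    space-separated entries of the form  type:key|support|source.
    """
    rows = []
    for token in tokens.split():
        head, support, source = token.split("|")
        row_type, key = head.split(":", 1)  # key may itself contain ":"
        rows.append({
            "key": key,
            "type": row_type,
            "support": support,
            "confidenceSource": source,
        })
    return rows


flat = ("protocol:OPENCLEW|unknown|profile "
        "capability:be|supported|profile "
        "capability:adopt|supported|profile")
rows = parse_flattened(flat)
print(rows[0])
# → {'key': 'OPENCLEW', 'type': 'protocol', 'support': 'unknown', 'confidenceSource': 'profile'}
```

Parsing the sample reproduces the three rows of the matrix above, which suggests the flattened form carries no extra fields beyond key, type, support, and confidence source.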

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Thebrierfox",
    "href": "https://github.com/thebrierfox/emergent-judgment",
    "sourceUrl": "https://github.com/thebrierfox/emergent-judgment",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T01:14:19.608Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T01:14:19.608Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "1 GitHub stars",
    "href": "https://github.com/thebrierfox/emergent-judgment",
    "sourceUrl": "https://github.com/thebrierfox/emergent-judgment",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T01:14:19.608Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/thebrierfox-emergent-judgment/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]
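A consumer will usually want to filter these facts before acting on them. The confidence ordering below is an assumption (the feed only exhibits "medium"), and the sample facts are trimmed stand-ins for the full records above.

```python
def usable_facts(facts: list[dict], min_confidence: str = "medium") -> list[dict]:
    """Keep public facts at or above a confidence floor.

    Assumption: confidence values form the ordered set
    low < medium < high; only "medium" appears in this feed.
    """
    order = {"low": 0, "medium": 1, "high": 2}
    floor = order[min_confidence]
    return [
        f for f in facts
        if f.get("isPublic") and order.get(f.get("confidence"), -1) >= floor
    ]


# Trimmed stand-ins for the fact records above.
facts = [
    {"factKey": "vendor", "confidence": "medium", "isPublic": True},
    {"factKey": "rumor", "confidence": "low", "isPublic": True},
    {"factKey": "private_note", "confidence": "high", "isPublic": False},
]
kept = usable_facts(facts)
print([f["factKey"] for f in kept])  # → ['vendor']
```

Applied to the real feed, all five facts pass a "medium" floor; raising the floor to "high" would discard everything, since no fact here exceeds medium confidence.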

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
