Agent Dossier · GitHub OpenClaw · Safety 94/100

Xpersona Agent

agent-score

Agent Score — AI Agent Readiness Test. Evaluate a website's agent-friendliness and return a structured JSON score report: navigate the site, try common agent tasks, and report structured results. Run this skill when asked to evaluate a website's agent-friendliness, test a site for AI readiness, or run an "agent score" test.

MCP · self-declared · OpenClaw · self-declared
1 GitHub star · Trust evidence available
git clone https://github.com/pillarhq/openclaw-agent-score.git

Overall rank

#32

Adoption

1 GitHub star

Trust

Unknown

Freshness

Last checked Apr 15, 2026

Best For

agent-score is best for workflows where MCP and OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GitHub OpenClaw, runtime-metrics, public facts pack

Overview

Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.

Verified · editorial-content

Executive Summary

Agent Score evaluates a website's agent-friendliness and returns a structured JSON score report: it navigates the site, tries common agent tasks, and reports the results. Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated Apr 15, 2026.

No verified compatibility signals · 1 GitHub star

Trust score

Unknown

Compatibility

MCP, OpenClaw

Freshness

Apr 15, 2026

Vendor

Pillarhq

Artifacts

0

Benchmarks

0

Last release

Unpublished

Install & run

Setup Snapshot

git clone https://github.com/pillarhq/openclaw-agent-score.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence & Timeline

Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.

Verified · editorial-content

Public facts

Evidence Ledger

Vendor (1)

Vendor

Pillarhq

profile · medium
Observed Apr 15, 2026 · Source link · Provenance

Compatibility (1)

Protocol compatibility

MCP, OpenClaw

contract · medium
Observed Apr 15, 2026 · Source link · Provenance

Adoption (1)

Adoption signal

1 GitHub star

profile · medium
Observed Apr 15, 2026 · Source link · Provenance

Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance

Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Artifacts & Docs

Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.

Self-declared · GitHub OpenClaw

Captured outputs

Artifacts Archive

Extracted files

0

Examples

1

Snippets

0

Languages

typescript

Parameters

Executable Examples

json

{
  "score": 75,
  "summary": "2-3 sentences describing the overall agent experience on this site.",
  "what_worked": [
    "Navigation was straightforward",
    "Forms were well-labeled"
  ],
  "what_didnt": [
    "Signup was blocked by CAPTCHA",
    "No MCP tools found"
  ],
  "mcp_found": false,
  "mcp_usable": false,
  "signup_attempted": true,
  "signup_succeeded": false,
  "tasks_tried": [
    {
      "task": "Navigate to pricing page",
      "succeeded": true,
      "detail": "Found pricing link in main nav, page loaded with clear tier comparison."
    },
    {
      "task": "Create an account",
      "succeeded": false,
      "detail": "Found signup form but blocked by reCAPTCHA on submission."
    }
  ],
  "full_report_url": "https://trypillar.com/agent-score?domain=DOMAIN_HERE"
}

Editorial read

Docs & README

Docs source

GitHub OpenClaw

Editorial quality

ready

Agent Score — AI Agent Readiness Test: evaluate a website's agent-friendliness and return a structured JSON score report. Navigate the site, try common agent tasks, and report structured results. Run this skill when asked to evaluate a website's agent-friendliness, test a site for AI readiness, or run an "agent score" test.

Full README

name: agent-score
description: Evaluate a website's agent-friendliness and return a structured JSON score report.

Agent Score — AI Agent Readiness Test

Test how easy a website is for AI agents to use. Navigate the site, try common agent tasks, and report structured results.

When to use

Run this skill when asked to evaluate a website's agent-friendliness, test a site for AI readiness, or run an "agent score" test.

Browser tool usage

You MUST use the browser tool to interact with websites. Every browser tool call requires an action field and action-specific fields. Here are the actions you will use:

  • Navigate to a URL: { "action": "navigate", "targetUrl": "https://example.com" }
  • Take a snapshot (read the page): { "action": "snapshot" }
  • Click an element: { "action": "click", "ref": "<element_ref_from_snapshot>" }
  • Type into a field: { "action": "type", "ref": "<element_ref_from_snapshot>", "text": "hello" }
  • Take a screenshot: { "action": "screenshot" }

Always start by navigating to the target URL, then take a snapshot to see what's on the page. Use refs from the snapshot to click and type.
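
As a sketch, the opening sequence above can be expressed as typed tool-call payloads. The `BrowserAction` union and the `openingSequence` helper are illustrative names, not part of a published OpenClaw API:

```typescript
// Illustrative model of the browser tool's call payloads: every call
// carries an "action" field plus action-specific fields.
type BrowserAction =
  | { action: "navigate"; targetUrl: string }
  | { action: "snapshot" }
  | { action: "click"; ref: string }
  | { action: "type"; ref: string; text: string }
  | { action: "screenshot" };

// Every evaluation starts the same way: navigate to the target URL,
// then snapshot to read the page and obtain element refs.
function openingSequence(targetUrl: string): BrowserAction[] {
  return [
    { action: "navigate", targetUrl },
    { action: "snapshot" },
  ];
}

const calls = openingSequence("https://example.com");
// calls[0] → { action: "navigate", targetUrl: "https://example.com" }
```

Refs returned by the snapshot then feed the `click` and `type` payload variants.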

Instructions

You are an AI agent evaluating how easy a website is for agents like you to use. Your job is to actually try things on the site and report what happened honestly.

What to try

Work through these tasks in order. Skip any that don't apply to the site:

  1. Navigate and understand — Call the browser tool with { "action": "navigate", "targetUrl": "<the_url>" } to open the site. Then call { "action": "snapshot" } to read the page. Figure out what the site does and what its primary function is.

  2. Find key information — Look for pricing, documentation, features, or whatever the site's main content is. Can you find what you need without getting lost?

  3. Create an account — If the site has a signup or registration flow, try to create an account using these details:

    • Name: Pillar Agent Test
    • Email: agent-test@score.trypillar.com
    • Password: PillarTest!2026
    • If you hit a CAPTCHA, stop and note it. If payment is required, stop and note it. Do NOT use OAuth/SSO buttons.
  4. Look for MCP tools — Check if the site exposes MCP tools via navigator.modelContext. If tools are registered, try to use one.

  5. Complete a task — Try to do something meaningful on the site: search for content, fill out a form, use a feature, navigate to a specific page. Pick whatever seems most representative.

  6. Check for blockers — Note if you encounter CAPTCHAs, anti-bot measures, confusing navigation, broken forms, or anything that made the experience harder.
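
Step 4's MCP probe can be sketched as below; the exact shape of `navigator.modelContext` is an assumption here (an optional `tools` array with named entries), so treat the interface as illustrative:

```typescript
// Assumed shape for navigator.modelContext; a site's real surface,
// if it exposes one at all, may differ.
interface ModelContext {
  tools?: { name: string }[];
}

// Return the names of any registered MCP tools, or [] when the
// site exposes nothing.
function listMcpTools(nav: { modelContext?: ModelContext }): string[] {
  return nav.modelContext?.tools?.map((t) => t.name) ?? [];
}
```

An empty result maps to `"mcp_found": false` in the report; a non-empty one means the agent should try invoking a tool before setting `mcp_usable`.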

Reporting rules

  • Report what you observed, not why it happened. Do not speculate about root causes. Do not attribute failures to the automation tooling, the browser, or the website unless you have direct evidence.
  • Do not use the phrase "OpenClaw issue" or say "not the website." You do not have enough information to make that distinction. Just describe what happened.
  • Examples of good reporting:
    • "Browser control timed out while attempting to click 'Get Started'"
    • "Page loaded but the signup form did not respond to input"
    • "Navigation worked but account creation could not be completed"
  • Examples of bad reporting:
    • "This was due to an OpenClaw browser control issue, not the website"
    • "The timeout was caused by a bug in the automation framework"
  • If you cannot complete a core flow (signup, checkout, search) due to timeouts, unresponsive elements, or repeated failures, treat that as a poor agent experience and score accordingly. The score reflects what an agent could actually accomplish, regardless of the underlying cause.

How to respond

When you're done, respond with ONLY this JSON (no markdown fencing, no extra text):

{
  "score": 75,
  "summary": "2-3 sentences describing the overall agent experience on this site.",
  "what_worked": [
    "Navigation was straightforward",
    "Forms were well-labeled"
  ],
  "what_didnt": [
    "Signup was blocked by CAPTCHA",
    "No MCP tools found"
  ],
  "mcp_found": false,
  "mcp_usable": false,
  "signup_attempted": true,
  "signup_succeeded": false,
  "tasks_tried": [
    {
      "task": "Navigate to pricing page",
      "succeeded": true,
      "detail": "Found pricing link in main nav, page loaded with clear tier comparison."
    },
    {
      "task": "Create an account",
      "succeeded": false,
      "detail": "Found signup form but blocked by reCAPTCHA on submission."
    }
  ],
  "full_report_url": "https://trypillar.com/agent-score?domain=DOMAIN_HERE"
}
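
For reference, the response shape above can be written out as a TypeScript interface. The field names come from the sample JSON; the interface itself and the `isValidReport` check are illustrative, not part of the published skill:

```typescript
// One entry per task the agent attempted.
interface TaskResult {
  task: string;
  succeeded: boolean;
  detail: string;
}

// The full report object the skill responds with.
interface AgentScoreReport {
  score: number; // 0-100
  summary: string;
  what_worked: string[];
  what_didnt: string[];
  mcp_found: boolean;
  mcp_usable: boolean;
  signup_attempted: boolean;
  signup_succeeded: boolean;
  tasks_tried: TaskResult[];
  full_report_url: string;
}

// Minimal sanity check before emitting the report.
function isValidReport(r: AgentScoreReport): boolean {
  return (
    Number.isInteger(r.score) &&
    r.score >= 0 &&
    r.score <= 100 &&
    r.summary.length > 0
  );
}
```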

Scoring guidelines

  • 90-100: Everything worked smoothly. Agent could navigate, sign up, use MCP tools, and complete tasks with no friction.
  • 70-89: Most things worked. Minor issues like missing labels or unclear navigation, but agent could accomplish its goals.
  • 50-69: Mixed experience. Some tasks worked, others were difficult or impossible. Significant friction points.
  • 30-49: Mostly difficult. Agent struggled with basic tasks. Many blockers or confusing patterns.
  • 0-29: Site is effectively unusable by agents. CAPTCHAs everywhere, no semantic structure, broken forms.
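
The bands above can be encoded as a small helper. The band labels here are paraphrases of the guideline text, not values the skill itself emits:

```typescript
// Map a 0-100 agent score to its guideline band.
function scoreBand(score: number): string {
  if (score >= 90) return "no friction";
  if (score >= 70) return "minor issues";
  if (score >= 50) return "mixed";
  if (score >= 30) return "mostly difficult";
  return "unusable";
}
```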

If you cannot access the site at all or cannot complete any interactive tasks due to timeouts or interaction failures, still respond with the JSON format above. Set the score based on what you could actually accomplish. A site where the agent can only read but not interact deserves a low score.

Be honest. Score based on what actually happened, not what the site looks like it should support.

API & Reliability

Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.

Missing · GitHub OpenClaw

Machine interfaces

Contract & API

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

MCP: self-declared · OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/snapshot"
curl -s "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/contract"
curl -s "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/trust"

Operational fit

Reliability & Benchmarks

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Machine Appendix

Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.

Missing · GitHub OpenClaw

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "MCP",
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T03:20:13.420Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
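
A client-side retry loop matching the `retryPolicy` above (three attempts, 500/1500/3500 ms backoff, retry on 429/503/timeout-style failures) might look like this; `withRetry` and the injectable `sleep` are illustrative, not a published client:

```typescript
type Sleep = (ms: number) => Promise<void>;
const defaultSleep: Sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run `attempt` up to backoffMs.length times (maxAttempts = 3 in the
// policy above), sleeping the scheduled backoff between failures and
// rethrowing the last error once attempts are exhausted.
async function withRetry<T>(
  attempt: () => Promise<T>,
  backoffMs: number[] = [500, 1500, 3500],
  sleep: Sleep = defaultSleep,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < backoffMs.length; i++) {
    try {
      return await attempt();
    } catch (err) {
      lastErr = err; // e.g. HTTP_429, HTTP_503, NETWORK_TIMEOUT
      if (i < backoffMs.length - 1) await sleep(backoffMs[i]);
    }
  }
  throw lastErr;
}
```

A real client would also inspect the failure and only retry the retryable conditions listed in the policy; this sketch retries on any rejection for brevity.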

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "MCP",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "you",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "only",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:MCP|unknown|profile protocol:OPENCLEW|unknown|profile capability:you|supported|profile capability:only|supported|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Pillarhq",
    "href": "https://github.com/pillarhq/openclaw-agent-score",
    "sourceUrl": "https://github.com/pillarhq/openclaw-agent-score",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "MCP, OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "1 GitHub stars",
    "href": "https://github.com/pillarhq/openclaw-agent-score",
    "sourceUrl": "https://github.com/pillarhq/openclaw-agent-score",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/pillarhq-openclaw-agent-score/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
