Crawler Summary

code-review answer-first brief

Scrutinize AI-generated or human-written code against project standards. Use when the user says "review," "clean up," "scrutinize," "audit code," "check code quality," "lint this," or wants a second pass on recently written or generated code. Catches sloppy patterns, inconsistencies, dead code, naming violations, architectural drift, and bugs that slip through fast iteration. Works with any language or framework — user provides repo-specific standards via arguments or project files (CLAUDE.md, PROJECT.md, .cursorrules, etc.).

Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.

Freshness

Last checked 2/24/2026

Best For

code-review is best for general automation workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 89/100

code-review

Scrutinize AI-generated or human-written code against project standards. Use when the user says "review," "clean up," "scrutinize," "audit code," "check code quality," "lint this," or wants a second pass on recently written or generated code. Catches sloppy patterns, inconsistencies, dead code, naming violations, architectural drift, and bugs that slip through fast iteration. Works with any language or framework — user provides repo-specific standards via arguments or project files (CLAUDE.md, PROJECT.md, .cursorrules, etc.).

OpenClaw · self-declared

Public facts

4

Change events

1

Artifacts

0

Freshness

Feb 24, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.

Trust evidence

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Feb 24, 2026

Vendor

Thesampadilla

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.

Setup snapshot

git clone https://github.com/theSamPadilla/code-review-skill.git

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
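Before the sandboxed run, a cheap static pre-check can list any outbound-network patterns in the cloned skill, giving a baseline to compare against the egress trace. This is a sketch only: the file names here are mocked in a temp directory rather than cloned, and the grep patterns are illustrative, not exhaustive.

```shell
# Static pre-check before the sandboxed run: list files mentioning common
# egress primitives so they can be compared against the real egress trace.
# The skill contents below are a local mock, not the actual repository.
skill_dir=$(mktemp -d)
printf 'curl -s https://example.com/telemetry\n' > "$skill_dir/setup.sh"
printf '# pure prompt text, nothing outbound\n'  > "$skill_dir/SKILL.md"

# Flag files that textually reference network calls.
hits=$(grep -rlE 'curl|wget|fetch\(|https?://' "$skill_dir" || true)
echo "Files with potential network egress:"
echo "$hits"
rm -rf "$skill_dir"
```

Anything this pass flags should be checked against the sandbox trace; anything it misses is exactly why the dynamic trace is still the final gate.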

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Thesampadilla

profile · medium confidence
Observed Feb 24, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Feb 24, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

1

Snippets

0

Languages

typescript

Parameters

Executable Examples

text

## Code Review: [branch/commit/description]

### Critical (must fix before merge)
[Bugs, security issues, route conflicts, data loss risks]

### Important (should fix)
[Standards violations, architectural drift, missing validation, inconsistencies]

### Minor (nice to have)
[Dead code cleanup, naming improvements, minor optimizations]

### What's Good
[Genuine positives — patterns done right, good test coverage, clean abstractions]

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: code-review
description: Scrutinize AI-generated or human-written code against project standards. Use when the user says "review," "clean up," "scrutinize," "audit code," "check code quality," "lint this," or wants a second pass on recently written or generated code. Catches sloppy patterns, inconsistencies, dead code, naming violations, architectural drift, and bugs that slip through fast iteration. Works with any language or framework — user provides repo-specific standards via arguments or project files (CLAUDE.md, PROJECT.md, .cursorrules, etc.).

Code Review

Perform a rigorous, opinionated review of code changes against the project's own standards.

Inputs

The user should provide:

  1. What to review — a branch, commit range, file list, or diff
  2. Standards source — where project conventions live (e.g., CLAUDE.md, docs/eng/frontend.md, .eslintrc, etc.)

If the user doesn't specify standards, look for these files in the repo root and load them:

  • CLAUDE.md, AGENTS.md, PROJECT.md, .cursorrules
  • docs/eng/*.md or similar convention docs
  • .eslintrc*, tsconfig.json, pyproject.toml (for implicit standards)

If none exist, review against language/framework best practices and flag the absence of a standards doc.
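The fallback lookup above can be sketched as a short shell pass over the repo root. The file list mirrors the bullets (glob patterns like .eslintrc* are omitted for brevity), and the repo layout is a mock created in a temp directory.

```shell
# Standards-discovery fallback: check the repo root for known convention
# files and report which were found. The mock repo here has two of them.
repo=$(mktemp -d)
touch "$repo/CLAUDE.md" "$repo/tsconfig.json"

found=""
for f in CLAUDE.md AGENTS.md PROJECT.md .cursorrules tsconfig.json pyproject.toml; do
  [ -f "$repo/$f" ] && found="$found $f"
done

if [ -n "$found" ]; then
  echo "Standards sources:$found"
else
  echo "No standards doc found: reviewing against language defaults"
fi
rm -rf "$repo"
```

The empty-branch message matches the skill's instruction to flag the absence of a standards doc rather than silently inventing conventions.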

Review Process

Step 1: Gather Context

  1. Identify the changed files (use git diff --stat or the provided file list)
  2. Read the project's standards docs (keep in context — don't summarize them away)
  3. Understand the project structure (check for monorepo patterns, module boundaries, route groups)

Step 2: Per-File Analysis

For each changed file, check against these categories. Only flag real problems — not style nitpicks that a formatter handles.

Correctness

  • Logic bugs, off-by-one errors, race conditions
  • Null/undefined handling gaps
  • Missing error handling (try/catch, error boundaries, HTTP error responses)
  • Incorrect types or unsafe casts (as any, type assertions that hide bugs)
  • Route conflicts (parameterized routes shadowing static routes)
  • State management bugs (stale closures, missing dependency arrays, unsynced state)
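One item in the correctness list is mechanically checkable before any reading begins: unsafe TypeScript casts. The sketch below greps changed files for the textual pattern "as any"; it catches only the literal string, so a real review still has to read the surrounding code. The file content is a mock.

```shell
# Flag "as any" casts in changed files. Textual pattern only; it will not
# catch other assertion styles (e.g. angle-bracket casts).
src=$(mktemp -d)
cat > "$src/user.ts" <<'EOF'
const user = JSON.parse(raw) as any; // hides a shape mismatch
const id: string = user.id;
EOF

casts=$(grep -rn 'as any' "$src" || true)
echo "$casts"
rm -rf "$src"
```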

Standards Compliance

  • Naming conventions (files, variables, functions, components, routes)
  • Import patterns (absolute vs relative, barrel exports, circular deps)
  • Styling approach (CSS modules vs Tailwind vs inline — match the project)
  • API patterns (REST conventions, DTO validation, auth guard placement)
  • File organization (does it match the project's module/folder structure?)
  • Database patterns (migrations, entity definitions, query patterns)

Architecture

  • Separation of concerns (business logic in services, not controllers/components)
  • Module boundaries respected (no cross-imports between independent modules)
  • Duplicate code that should be shared
  • Dead code (unused imports, unreachable branches, commented-out blocks)
  • Premature abstraction (over-engineered patterns for simple problems)

Security

  • Auth checks on protected routes
  • Input validation on user-facing endpoints
  • SQL injection, XSS vectors, SSRF potential
  • Secrets or credentials in code
  • Unsafe dangerouslySetInnerHTML or equivalent without sanitization
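Two of the security bullets also admit a cheap textual first pass: hardcoded credentials and unsanitized dangerouslySetInnerHTML. The patterns below are illustrative, and a dedicated scanner should run on top of anything like this; the files are mocks in a temp directory.

```shell
# Cheap textual pass over two security-checklist items. Findings are
# candidates for human review, not confirmed vulnerabilities.
src=$(mktemp -d)
cat > "$src/config.ts" <<'EOF'
export const API_KEY = "sk-live-123456";
EOF
cat > "$src/Widget.tsx" <<'EOF'
return <div dangerouslySetInnerHTML={{ __html: userInput }} />;
EOF

findings=$(grep -rnE 'API_KEY *= *"|dangerouslySetInnerHTML' "$src" || true)
echo "$findings"
rm -rf "$src"
```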

Consistency

  • Multiple patterns for the same thing (two API clients, two auth helpers, etc.)
  • Inconsistent error handling (some endpoints return errors, others throw)
  • Mixed conventions within the same codebase (camelCase and snake_case, etc.)

Step 3: Cross-File Analysis

After reviewing individual files, look at the changes as a whole:

  • Do new files follow the same patterns as existing ones?
  • Are there integration issues between new components?
  • Is there unnecessary duplication across the changeset?
  • Do new endpoints have corresponding contract/schema updates?
  • Are new features tested? (check for corresponding test files)
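The last cross-file bullet, checking that changed files have corresponding tests, can be sketched as a sibling-file lookup. The naming convention (name.test.ts next to name.ts) is an assumption; adapt it to the project's own layout, and the flat mock repo stands in for the real changeset.

```shell
# For each changed .ts source file, verify a sibling <name>.test.ts exists.
repo=$(mktemp -d)
touch "$repo/cart.ts" "$repo/cart.test.ts" "$repo/checkout.ts"

untested=""
for f in "$repo"/*.ts; do
  case "$f" in *.test.ts) continue ;; esac   # skip the test files themselves
  base=${f%.ts}
  [ -f "$base.test.ts" ] || untested="$untested ${f##*/}"
done
echo "Changed files without tests:$untested"
rm -rf "$repo"
```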

Step 4: Produce the Review

Structure the output as:

## Code Review: [branch/commit/description]

### Critical (must fix before merge)
[Bugs, security issues, route conflicts, data loss risks]

### Important (should fix)
[Standards violations, architectural drift, missing validation, inconsistencies]

### Minor (nice to have)
[Dead code cleanup, naming improvements, minor optimizations]

### What's Good
[Genuine positives — patterns done right, good test coverage, clean abstractions]

Rules for the review:

  • Be specific: file name, line number or code snippet, what's wrong, how to fix it
  • Be honest: if the code is sloppy, say so. Don't sugarcoat.
  • Be proportional: 3 critical bugs matter more than 20 naming nitpicks
  • Don't pad: if there are no critical issues, say "None" — don't invent problems
  • Suggest fixes: don't just identify problems, show the fix when it's non-obvious
  • Prioritize: order issues by impact, not by file order

Scope Control

  • Default scope: Review only changed/added files in the diff
  • If user says "full audit": Review entire codebase against standards
  • If user says "just this file": Review only the specified file(s)
  • Don't review: node_modules, dist/, generated files, lock files, .env files
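The default-scope exclusions above translate directly into a find(1) expression: prune the never-review directories and filter out lock and env files. The directory tree is a mock; generated-file detection is omitted since it is project-specific.

```shell
# Collect reviewable files while pruning directories the skill excludes.
repo=$(mktemp -d)
mkdir -p "$repo/src" "$repo/node_modules/lib" "$repo/dist"
touch "$repo/src/app.ts" "$repo/node_modules/lib/index.js" \
      "$repo/dist/app.js" "$repo/.env" "$repo/package-lock.json"

files=$(find "$repo" \( -name node_modules -o -name dist \) -prune \
        -o -type f ! -name '.env*' ! -name '*lock.json' -print)
echo "$files"
rm -rf "$repo"
```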

When to Push Back

If the code has fundamental architectural problems (wrong patterns, missing abstractions, tech debt that will compound), say so clearly. Don't just list line-level fixes when the real problem is structural. The user hired a reviewer, not a rubber stamp.

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples

curl -s "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

  • Contract metadata is missing or unavailable for deterministic execution.
  • No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media

No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors

GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW

GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW

GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW

GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW

Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-16T23:34:31.750Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}
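The flattenedTokens field in the matrix above appears to be assembled from each row's fields as "type:key|support|confidenceSource". The exact generator is not published, so this one-liner is a reconstruction from the JSON shown:

```shell
# Reconstruct flattenedTokens from a capability row (fields from the
# Capability Matrix above; the join format is inferred, not documented).
key="OPENCLEW"; type="protocol"; support="unknown"; source="profile"
token="${type}:${key}|${support}|${source}"
echo "$token"
```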

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Thesampadilla",
    "href": "https://github.com/theSamPadilla/code-review-skill",
    "sourceUrl": "https://github.com/theSamPadilla/code-review-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-24T19:44:00.558Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-24T19:44:00.558Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/thesampadilla-code-review-skill/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
