Crawler Summary
Design, test, review, and maintain agent skills for OpenClaw systems using multi-agent iterative refinement. Orchestrates Designer, Reviewer, and Tester subagents for quality-gated skill development. Use when user asks to "design skill", "review skill", "test skill", "audit skills", "refactor skill", or mentions "agent kit quality". (name: skill-engineer, version: 3.1.0.) Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
skill-engineer is best for workflows built around its declared assess, script, and be capabilities, where MCP and OpenClaw compatibility matters.
Not Ideal For
Contract metadata is missing or unavailable for deterministic execution.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Trust score
Unknown
Compatibility
MCP, OpenClaw
Freshness
Feb 25, 2026
Vendor
Liaosvcaf
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/liaosvcaf/openclaw-skill-skill-engineer.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Liaosvcaf
Protocol compatibility
MCP, OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
text
User: "I need a skill for analyzing competitor websites"

Orchestrator gathers:
- Problem: Automate competitor analysis with structured output
- Audience: research-agent
- Interactions: web_fetch, browser tool, writes markdown reports
- Inputs: competitor URLs, analysis criteria
- Outputs: comparison table, insights markdown
- Constraints: must complete in <60s per site
text
Orchestrator (main agent)
│
├─ Spawn ──→ Designer (creative subagent)
│ │
│ ▼ produces skill artifacts
│
├─ Spawn ──→ Reviewer (critical subagent)
│ │
│ ▼ scores, identifies issues
│
├─ Spawn ──→ Tester (empirical subagent)
│ │
│ ▼ runs self-play, reports results
│
└─ Decision: Ship / Revise / Fail
text
Designer → Reviewer ──pass──→ Tester ──pass──→ Ship
│ │
fail fail
│ │
▼ ▼
Designer revises Designer revises
│ │
▼ ▼
Reviewer Reviewer + Tester
│
(max 3 iterations, then fail)
text
[Acting as DESIGNER] ...generate artifacts... [Acting as REVIEWER] ...evaluate artifacts... [Acting as TESTER] ...validate artifacts...
bash
# Spawn Designer
openclaw agent --session-id "skill-v1-designer" \
  --message "Act as Designer. Requirements: [...]"

# Spawn Reviewer
openclaw agent --session-id "skill-v1-reviewer" \
  --message "Act as Reviewer. Artifacts: [path]. Rubric: [...]"
markdown
## Quality Scorecard

| Category | Score | Details |
|----------|-------|---------|
| Completeness (SQ-A) | 7/7 | All checks pass |
| Clarity (SQ-B) | 4/5 | Minor ambiguity in edge case handling |
| Balance (SQ-C) | 4/4 | AI/script split appropriate |
| Integration (SQ-D) | 4/4 | Compatible with standard agent kit |
| Scope (SCOPE) | 3/3 | Clean boundaries, no leaks |
| OPSEC | 2/2 | No violations |
| References (REF) | 3/3 | All sources cited |
| Architecture (ARCH) | 2/2 | Separation of concerns maintained |
| **Total** | **29/30** | |

*Scored by skill-engineer Reviewer (iteration 2)*
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
Design, test, review, and maintain agent skills for OpenClaw systems using multi-agent iterative refinement. Orchestrates Designer, Reviewer, and Tester subagents for quality-gated skill development. Use when user asks to "design skill", "review skill", "test skill", "audit skills", "refactor skill", or mentions "agent kit quality".

---
name: skill-engineer
description: Design, test, review, and maintain agent skills for OpenClaw systems using multi-agent iterative refinement. Orchestrates Designer, Reviewer, and Tester subagents for quality-gated skill development. Use when user asks to "design skill", "review skill", "test skill", "audit skills", "refactor skill", or mentions "agent kit quality".
metadata:
  author: skill-engineer
  version: 3.1.0
Own the full lifecycle of agent skills in your OpenClaw agent kit. The entire multi-agent workflow depends on skill quality — a weak skill produces weak results across every run.
Core principle: Builders don't evaluate their own work. This skill enforces separation of concerns through a multi-agent architecture where design, review, and testing are performed by independent subagents.
This skill produces validated skill artifacts (SKILL.md, skill.yml, README.md, tests, scripts). Once artifacts pass quality gates, responsibility transfers to whatever system handles publishing and deployment.
A skill development cycle is considered successful when:
If any criterion fails, the skill returns to the Designer for revision.
When invoking this skill, the orchestrator must gather:
| Input | Description | Required | Source |
|-------|-------------|----------|--------|
| Problem description | What capability or workflow needs to be enabled | Yes | User conversation |
| Target audience | Which agent(s) will use this skill | Yes | User or inferred |
| Expected interactions | With users, APIs, files, MCP servers, other skills | Yes | Requirements discussion |
| Inputs/Outputs | What data the skill receives and produces | Yes | Requirements discussion |
| Constraints | Performance limits, security requirements, dependencies | No | User or system |
| Prior feedback | Review or test reports from previous iterations | No | Previous Reviewer/Tester |
| Existing artifacts | If refactoring/maintaining an existing skill | No | File system |
Example requirements gathering:
User: "I need a skill for analyzing competitor websites"
Orchestrator gathers:
- Problem: Automate competitor analysis with structured output
- Audience: research-agent
- Interactions: web_fetch, browser tool, writes markdown reports
- Inputs: competitor URLs, analysis criteria
- Outputs: comparison table, insights markdown
- Constraints: must complete in <60s per site
These inputs are then passed to the Designer to begin the design process.
The skill-engineer uses a three-role iterative architecture. The orchestrator (you, the main agent) spawns subagents for each role and never does creative or evaluation work directly.
Orchestrator (main agent)
│
├─ Spawn ──→ Designer (creative subagent)
│ │
│ ▼ produces skill artifacts
│
├─ Spawn ──→ Reviewer (critical subagent)
│ │
│ ▼ scores, identifies issues
│
├─ Spawn ──→ Tester (empirical subagent)
│ │
│ ▼ runs self-play, reports results
│
└─ Decision: Ship / Revise / Fail
Designer → Reviewer ──pass──→ Tester ──pass──→ Ship
│ │
fail fail
│ │
▼ ▼
Designer revises Designer revises
│ │
▼ ▼
Reviewer Reviewer + Tester
│
(max 3 iterations, then fail)
Exit conditions:
After 3 failed iterations, the orchestrator must:
Never: Continue past 3 iterations or ship a skill that hasn't passed quality gates.
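The gated loop above can be sketched in shell; `design`, `review`, and `run_tests` here are illustrative stubs standing in for real subagent invocations, and only the control flow (pass both gates or revise, hard stop after three iterations) mirrors the workflow:

```shell
# Hypothetical quality-gate loop (sketch). The stubs simulate a run where the
# first iteration fails review and the second passes both gates.
design()    { echo "designing iteration $1"; }
review()    { [ "$1" -ge 2 ]; }   # stub: review passes from iteration 2 on
run_tests() { [ "$1" -ge 2 ]; }   # stub: tests pass from iteration 2 on

gate_loop() {
  i=1
  while [ "$i" -le 3 ]; do
    design "$i" >/dev/null
    if review "$i" && run_tests "$i"; then
      echo "ship"                 # both gates passed: ship
      return 0
    fi
    i=$((i + 1))                  # either gate failed: back to Designer
  done
  echo "fail"                     # never continue past 3 iterations
  return 1
}
```

Redefining `review` to always fail makes `gate_loop` print `fail` after three rounds, matching the exit condition above.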
"Spawning" a subagent means creating a distinct execution context for each role. In OpenClaw:
Option 1: Role-Based Execution (Recommended for most cases)
The orchestrator executes each role sequentially in the same session but with clear role boundaries:
[Acting as DESIGNER] ...generate artifacts...
[Acting as REVIEWER] ...evaluate artifacts...
[Acting as TESTER] ...validate artifacts...
Document which role is active at each step. This maintains separation of concerns without multi-session overhead.
Option 2: Separate Agent Sessions (For complex workflows)
Use openclaw agent --message "..." --session-id <unique-id> to create isolated sessions:
# Spawn Designer
openclaw agent --session-id "skill-v1-designer" \
--message "Act as Designer. Requirements: [...]"
# Spawn Reviewer
openclaw agent --session-id "skill-v1-reviewer" \
--message "Act as Reviewer. Artifacts: [path]. Rubric: [...]"
This provides true isolation but increases token cost and coordination complexity.
Which to use:
Critical: Regardless of method, the orchestrator must never perform creative (Designer) or evaluation (Reviewer/Tester) work itself. It only coordinates.
The orchestrator coordinates the loop. It does NOT write skill content or evaluate quality.
Every shipped skill must include a quality scorecard in its README.md. This is the Reviewer's final scores, added by the Orchestrator before delivery:
## Quality Scorecard
| Category | Score | Details |
|----------|-------|---------|
| Completeness (SQ-A) | 7/7 | All checks pass |
| Clarity (SQ-B) | 4/5 | Minor ambiguity in edge case handling |
| Balance (SQ-C) | 4/4 | AI/script split appropriate |
| Integration (SQ-D) | 4/4 | Compatible with standard agent kit |
| Scope (SCOPE) | 3/3 | Clean boundaries, no leaks |
| OPSEC | 2/2 | No violations |
| References (REF) | 3/3 | All sources cited |
| Architecture (ARCH) | 2/2 | Separation of concerns maintained |
| **Total** | **29/30** | |
*Scored by skill-engineer Reviewer (iteration 2)*
This scorecard serves as a quality certificate. Users can assess skill quality before installing.
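Since the scorecard doubles as a quality certificate, the stated total should match the per-category scores. A minimal awk sketch that re-sums the score column, assuming the markdown table layout shown above (the function name is illustrative):

```shell
# Sum the "N/M" score cells of a scorecard table read from stdin, skipping the
# header, divider, and Total rows, and print the recomputed total.
scorecard_total() {
  awk -F'|' '$2 !~ /Total|Category|^-/ && $3 ~ /[0-9]+\/[0-9]+/ {
      split($3, parts, "/")
      sum += parts[1]             # numerator of each category score
    } END { print sum + 0 }'
}
```

Piping the scorecard above through `scorecard_total` should print 29; a mismatch with the stated `**Total**` row signals a transcription error.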
The orchestrator manages git commits throughout the workflow:
When to commit:
git add . && git commit -m "feat: initial design for <skill-name>"
git add . && git commit -m "fix: address review issues (iteration N)"
git add README.md && git commit -m "docs: add quality scorecard for <skill-name>"
When to push:
git push origin main
Branch strategy:
The orchestrator must handle technical failures gracefully:
| Failure Type | Detection | Response |
|--------------|-----------|----------|
| Git push fails | Exit code ≠ 0 | Retry once. If it fails again, report to user: "Cannot push to remote. Check network/permissions." |
| OPSEC scan script missing | File not found | Skip OPSEC automated check, but flag in review: "Manual OPSEC review required — script not found." |
| File write errors | Permission denied | Report: "Cannot write to [path]. Check file permissions." Fail workflow. |
| Subagent crashes | Timeout or error | Log the error, attempt retry once. If it fails again, report: "Subagent failed. Manual intervention required." |
| Review score = 0 | All checks fail | Report: "Skill failed all quality checks. Requirements may be unclear or skill design is fundamentally flawed. Recommend starting over." |
Retry logic:
Fail-fast rules:
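The retry-once-then-report rule from the failure table can be sketched as a small wrapper; `retry_once` is an illustrative name, and the message is whatever the matching table row prescribes:

```shell
# Run a command; on failure retry exactly once, then report and fail fast.
# Usage: retry_once "<error message>" <command> [args...]
retry_once() {
  msg=$1
  shift
  "$@" && return 0                # first attempt
  "$@" && return 0                # single retry
  echo "$msg" >&2                 # fail fast: surface the table's message
  return 1
}
```

For example, `retry_once "Cannot push to remote. Check network/permissions." git push origin main` implements the git-push row.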
Orchestrator workload: Coordinating Designer/Reviewer/Tester across 1-3 iterations can be complex, especially for large skills (1000+ lines). The orchestrator manages:
Token considerations: A full 3-iteration cycle can consume 50k-150k tokens depending on skill complexity. For extremely complex skills, consider:
If orchestrator feels overwhelmed: This is a signal that the skill being designed may be too complex. Revisit the scope definition and consider decomposition.
Each subagent receives only what it needs:
| Role | Receives | Does NOT Receive |
|------|----------|------------------|
| Designer | Requirements, prior feedback (if any), design principles | Reviewer rubric internals |
| Reviewer | Skill artifacts, quality rubric, scope boundaries | Requirements discussion |
| Tester | Skill artifacts, test protocol | Review scores |
Purpose: Generate and revise skill content.
For complete Designer instructions, see: references/designer-guide.md
Inputs: Requirements, design principles, feedback (on iterations 2+)
Outputs: SKILL.md, skill.yml, README.md, tests/, scripts/, references/
Naming step (mandatory): Before writing artifacts, present 3-5 name candidates to the user with rationale. See references/designer-guide.md Step 2 for criteria and process.
Key constraints:
Design principles:
Purpose: Independent quality evaluation. The Reviewer has never seen the requirements discussion — it evaluates artifacts on their own merits.
For complete Reviewer rubric and scoring guide, see: references/reviewer-rubric.md
Inputs: Skill artifacts, quality rubric, scope boundaries
Outputs: Review report with scores, verdict (PASS/REVISE/FAIL), issues, strengths
Quality rubric (33 checks total):
Scoring thresholds:
Pre-review: Run deterministic validation scripts before manual evaluation
Purpose: Empirical validation via self-play. The Tester loads the skill and attempts realistic tasks.
For complete Tester protocol, see: references/tester-protocol.md
Inputs: Skill artifacts, test protocol
Outputs: Test report with trigger accuracy, functional test results, edge cases, blocking/non-blocking issues, verdict (PASS/FAIL)
Test protocol:
Issue classification:
Pass criteria: No blocking issues + ≥90% trigger accuracy
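The ≥90% trigger-accuracy gate reduces to an integer comparison; `trigger_accuracy_pass` and the counts below are illustrative:

```shell
# Return success when hits/total meets the 90% trigger-accuracy threshold.
# Integer math avoids floating point: hits/total >= 0.90  <=>  hits*100 >= total*90.
trigger_accuracy_pass() {
  hits=$1
  total=$2
  [ $((hits * 100)) -ge $((total * 90)) ]
}
```

With 20 trigger probes, 18 correct activations (exactly 90%) passes the gate and 17 does not.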
Periodic full audit of the agent kit:
# Agent Kit Audit Report
**Date:** [date]
**Skills audited:** [count]
## Skill Inventory
| # | Skill | Agent | Quality Score | Status |
|---|-------|-------|--------------|--------|
| 1 | [name] | [agent] | X/33 | Deploy/Revise/Redesign |
## Issues Found
1. ...
## Recommendations
1. ...
## Action Items
| # | Action | Priority | Owner |
|---|--------|----------|-------|
Maintain a map of how skills interact:
orchestrator-agent (coordinates workflow)
├── content-creator (writes content)
│ └── consumes: research outputs, review feedback
├── content-reviewer (reviews content)
│ └── produces: review reports
├── research-analyst (researches topics)
│ └── produces: research consumed by content-creator
├── validator (validates outputs)
└── skill-engineer (this skill — meta)
└── consumes: all skills for audit
Adapt this to your specific agent architecture.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/snapshot"
curl -s "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/contract"
curl -s "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/trust"
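These endpoints can be wrapped with the retry policy this profile's Invocation Guide documents (3 attempts, 500/1500/3500 ms backoff, retrying on HTTP 429/503 and network timeouts). A hedged sketch, approximating the backoff to whole seconds for portability; `fetch_with_retry` and the temp-file path are illustrative:

```shell
# Fetch a URL with up to 3 attempts, retrying on retryable status codes.
fetch_with_retry() {
  url=$1
  for delay in 0 1 2; do          # rough stand-in for the 500/1500/3500 ms schedule
    sleep "$delay"
    code=$(curl -s -o /tmp/body.json -w '%{http_code}' "$url") || code=000
    case "$code" in
      429|503|000) continue ;;    # retryable per the contract's retryPolicy
      *) cat /tmp/body.json; return 0 ;;
    esac
  done
  echo "giving up on $url" >&2
  return 1
}
```

For example, `fetch_with_retry "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/trust"` prints the trust JSON on success and fails loudly after the third retryable error.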
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP",
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T03:23:01.083Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "assess",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "script",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "be",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "consume",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:MCP|unknown|profile protocol:OPENCLEW|unknown|profile capability:assess|supported|profile capability:script|supported|profile capability:be|supported|profile capability:consume|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Liaosvcaf",
"href": "https://github.com/liaosvcaf/openclaw-skill-skill-engineer",
"sourceUrl": "https://github.com/liaosvcaf/openclaw-skill-skill-engineer",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T01:46:08.500Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP, OpenClaw",
"href": "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T01:46:08.500Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/liaosvcaf-openclaw-skill-skill-engineer/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]