Xpersona Agent
self-improving-agent
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Clau...
clawhub skill install kn70cjr952qdec1nx70zs6wefn7ynq2t:self-improving-agent
Overall rank
#62
Adoption
77.6K downloads
Trust
Unknown
Freshness
Last checked Feb 28, 2026
Best For
self-improving-agent is best for general automation workflows where documented compatibility matters.
Not Ideal For
Workloads that require deterministic execution, since contract metadata is missing or unavailable.
Evidence Sources Checked
CLAWHUB, runtime-metrics, public facts pack
Overview
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Self-declared · CLAWHUB
Executive Summary
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Clau... Capability contract not published. No trust telemetry is available yet. 77.6K downloads reported by the source. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
Profile only
Freshness
Feb 28, 2026
Vendor
Clawhub
Artifacts
0
Benchmarks
0
Last release
1.0.11
Install & run
Setup Snapshot
1. Install using `clawhub skill install kn70cjr952qdec1nx70zs6wefn7ynq2t:self-improving-agent` in an isolated environment before connecting it to live workloads.
2. No published capability contract is available yet, so validate auth and request/response behavior manually.
3. Review the upstream CLAWHUB listing at https://clawhub.ai/pskoett/self-improving-agent before using production credentials.
Evidence & Timeline
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Self-declared · CLAWHUB
Public facts
Evidence Ledger
Vendor (1)
Vendor
Clawhub
Release (1)
Latest release
1.0.11
Adoption (1)
Adoption signal
77.6K downloads
Security (1)
Handshake status
UNKNOWN
Artifacts & Docs
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Self-declared · CLAWHUB
Captured outputs
Artifacts Archive
Extracted files
5
Examples
6
Snippets
0
Languages
Unknown
Executable Examples
bash
clawdhub install self-improving-agent
bash
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
text
~/.openclaw/workspace/
├── AGENTS.md # Multi-agent workflows, delegation patterns
├── SOUL.md # Behavioral guidelines, personality, principles
├── TOOLS.md # Tool capabilities, integration gotchas
├── MEMORY.md # Long-term memory (main session only)
├── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
└── .learnings/ # This skill's log files
├── LEARNINGS.md
├── ERRORS.md
└── FEATURE_REQUESTS.md
bash
mkdir -p ~/.openclaw/workspace/.learnings
bash
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
# Enable it
openclaw hooks enable self-improvement
bash
mkdir -p .learnings
Extracted Files
SKILL.md
---
name: self-improvement
description: "Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks."
metadata:
---
# Self-Improvement Skill
Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
## Quick Reference
| Situation | Action |
|-----------|--------|
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with category `correction` |
| User wants missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `.learnings/ERRORS.md` with integration details |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with category `knowledge_gap` |
| Found better approach | Log to `.learnings/LEARNINGS.md` with category `best_practice` |
| Simplify/Harden recurring patterns | Log/update `.learnings/LEARNINGS.md` with `Source: simplify-and-harden` and a stable `Pattern-Key` |
| Similar to existing entry | Link with `**See Also**`, consider priority bump |
| Broadly applicable learning | Promote to `CLAUDE.md`, `AGENTS.md`, and/or `.github/copilot-instructions.md` |
| Workflow improvements | Promote to `AGENTS.md` (OpenClaw workspace) |
| Tool gotchas | Promote to `TOOLS.md` (OpenClaw workspace) |
| Behavioral patterns | Promote to `SOUL.md` (OpenClaw workspace) |
## OpenClaw Setup (Recommended)
OpenClaw is the primary platform for this skill. It uses workspace-based prompt injection with automatic skill loading.
### Installation
**Via ClawdHub (recommended):**
```bash
clawdhub install self-improving-agent
```
**Manual:**
```bash
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
```
Remade for OpenClaw from the original repo: https://github.com/pskoett/pskoett-ai-skills - https://github.com/pskoett/pskoett-ai-skills/tree/main/skills/self-improvement
### Workspace Structure
OpenClaw injects these files into every session:
```
~/.openclaw/workspace/
├── AGENTS.md # Multi-agent workflows, delegation patterns
├── SOUL.md # Behavioral guidelines, personality, principles
├── TOOLS.md # Tool capabilities, integration gotchas
├── MEMORY.md # Long-term memory (main session only)
├── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
└── .learnings/ # This skill's log files
├── LEARNINGS.md
├── ERRORS.md
└── FEATURE_REQUESTS.md
```
### Create Learning Files
```bash
mkdir -p ~/.openclaw/workspace/.learnings
```
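The three log files can then be seeded in one pass. A minimal sketch; the header lines here are assumptions (the skill also ships templates in `assets/`), and `WORKSPACE` defaults to a temp dir so it is safe to try:

```shell
# Sketch: seed .learnings/ with the three log files the skill uses.
# Point WORKSPACE at ~/.openclaw/workspace for real use.
WORKSPACE="${WORKSPACE:-$(mktemp -d)}"
mkdir -p "$WORKSPACE/.learnings"
for name in LEARNINGS ERRORS FEATURE_REQUESTS; do
  # Header line is an assumption; prefer copying the templates from assets/.
  printf '# %s\n\n' "$name" > "$WORKSPACE/.learnings/$name.md"
done
ls "$WORKSPACE/.learnings"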
Then create the `_meta.json`:
```json
{
  "ownerId": "kn70cjr952qdec1nx70zs6wefn7ynq2t",
  "slug": "self-improving-agent",
  "version": "1.0.11",
  "publishedAt": 1771777713337
}
```
references/examples.md
# Entry Examples
Concrete examples of well-formatted entries with all fields.
## Learning: Correction
```markdown
## [LRN-20250115-001] correction
**Logged**: 2025-01-15T10:30:00Z
**Priority**: high
**Status**: pending
**Area**: tests

### Summary
Incorrectly assumed pytest fixtures are scoped to function by default

### Details
When writing test fixtures, I assumed all fixtures were function-scoped. User corrected that while function scope is the default, the codebase convention uses module-scoped fixtures for database connections to improve test performance.

### Suggested Action
When creating fixtures that involve expensive setup (DB, network), check existing fixtures for scope patterns before defaulting to function scope.

### Metadata
- Source: user_feedback
- Related Files: tests/conftest.py
- Tags: pytest, testing, fixtures
---
```
## Learning: Knowledge Gap (Resolved)
```markdown
## [LRN-20250115-002] knowledge_gap
**Logged**: 2025-01-15T14:22:00Z
**Priority**: medium
**Status**: resolved
**Area**: config

### Summary
Project uses pnpm not npm for package management

### Details
Attempted to run `npm install` but project uses pnpm workspaces. Lock file is `pnpm-lock.yaml`, not `package-lock.json`.

### Suggested Action
Check for `pnpm-lock.yaml` or `pnpm-workspace.yaml` before assuming npm. Use `pnpm install` for this project.

### Metadata
- Source: error
- Related Files: pnpm-lock.yaml, pnpm-workspace.yaml
- Tags: package-manager, pnpm, setup

### Resolution
- **Resolved**: 2025-01-15T14:30:00Z
- **Commit/PR**: N/A - knowledge update
- **Notes**: Added to CLAUDE.md for future reference
---
```
## Learning: Promoted to CLAUDE.md
```markdown
## [LRN-20250115-003] best_practice
**Logged**: 2025-01-15T16:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: CLAUDE.md
**Area**: backend

### Summary
API responses must include correlation ID from request headers

### Details
All API responses should echo back the X-Correlation-ID header from the request. This is required for distributed tracing. Responses without this header break the observability pipeline.

### Suggested Action
Always include correlation ID passthrough in API handlers.

### Metadata
- Source: user_feedback
- Related Files: src/middleware/correlation.ts
- Tags: api, observability, tracing
---
```
## Learning: Promoted to AGENTS.md
```markdown
## [LRN-20250116-001] best_practice
**Logged**: 2025-01-16T09:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: AGENTS.md
**Area**: backend

### Summary
Must regenerate API client after OpenAPI spec changes

### Details
When modifying API endpoints, the TypeScript client must be regenerated. Forgetting this causes type mismatches that only appear at runtime. The generate script also runs validation.

### Suggested Action
Add to agent workflow: after any API changes, run `pnpm run generate:api`.

### Metadata
- Source: error
- Related Files: openapi.yaml, src/client/api.ts
- Tags: api, codegen, type
```
references/hooks-setup.md
# Hook Setup Guide
Configure automatic self-improvement triggers for AI coding agents.
## Overview
Hooks enable proactive learning capture by injecting reminders at key moments:
- **UserPromptSubmit**: Reminder after each prompt to evaluate learnings
- **PostToolUse (Bash)**: Error detection when commands fail
## Claude Code Setup
### Option 1: Project-Level Configuration
Create `.claude/settings.json` in your project root:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "./skills/self-improvement/scripts/activator.sh"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "./skills/self-improvement/scripts/error-detector.sh"
          }
        ]
      }
    ]
  }
}
```
### Option 2: User-Level Configuration
Add to `~/.claude/settings.json` for global activation:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/skills/self-improvement/scripts/activator.sh"
          }
        ]
      }
    ]
  }
}
```
### Minimal Setup (Activator Only)
For lower overhead, use only the UserPromptSubmit hook:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "./skills/self-improvement/scripts/activator.sh"
          }
        ]
      }
    ]
  }
}
```
## Codex CLI Setup
Codex uses the same hook system as Claude Code. Create `.codex/settings.json`:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "./skills/self-improvement/scripts/activator.sh"
          }
        ]
      }
    ]
  }
}
```
## GitHub Copilot Setup
Copilot doesn't support hooks directly. Instead, add guidance to `.github/copilot-instructions.md`:
```markdown
## Self-Improvement
After completing tasks that involved:
- Debugging non-obvious issues
- Discovering workarounds
- Learning project-specific patterns
- Resolving unexpected errors
Consider logging the learning to `.learnings/` using the format from the self-improvement skill.
For high-value learnings that would benefit other sessions, consider skill extraction.
```
## Verification
### Test Activator Hook
1. Enable the hook configuration
2. Start a new Claude Code session
3. Send any prompt
4. Verify you see `<self-improvement-reminder>` in the context
### Test Error Detector Hook
1. Enable PostToolUse hook for Bash
2. Run a command that fails: `ls /nonexistent/path`
3. Verify you see `<error-detected>` reminder
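For orientation, the error-detection step can be sketched as a filter over the hook payload. This is not the skill's actual `error-detector.sh`; the payload shape and the `exitCode` field name are assumptions, and the parsing deliberately avoids external dependencies:

```shell
# Sketch only: flag nonzero exit codes in a hook payload read from stdin.
# The "exitCode" field name is an assumption about the payload schema.
detect_error() {
  local code
  code=$(grep -o '"exitCode":[0-9]*' | head -n1 | cut -d: -f2)
  if [ -n "$code" ] && [ "$code" != "0" ]; then
    printf '<error-detected>command exited %s; consider logging to .learnings/ERRORS.md</error-detected>\n' "$code"
  fi
}

# Illustrative payloads: a failing and a succeeding Bash tool call.
echo '{"tool_name":"Bash","tool_response":{"exitCode":2}}' | detect_error
echo '{"tool_name":"Bash","tool_response":{"exitCode":0}}' | detect_error
```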
### Dry Run Extract Script
```bash
./skills/self-improvement/scripts/extract-skill.sh test-skill --dry-run
```
Expected
references/openclaw-integration.md
# OpenClaw Integration
Complete setup and usage guide for integrating the self-improvement skill with OpenClaw.
## Overview
OpenClaw uses workspace-based prompt injection combined with event-driven hooks. Context is injected from workspace files at session start, and hooks can trigger on lifecycle events.
## Workspace Structure
```
~/.openclaw/
├── workspace/ # Working directory
│ ├── AGENTS.md # Multi-agent coordination patterns
│ ├── SOUL.md # Behavioral guidelines and personality
│ ├── TOOLS.md # Tool capabilities and gotchas
│ ├── MEMORY.md # Long-term memory (main session only)
│ └── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
├── skills/ # Installed skills
│ └── <skill-name>/
│ └── SKILL.md
└── hooks/ # Custom hooks
└── <hook-name>/
├── HOOK.md
└── handler.ts
```
## Quick Setup
### 1. Install the Skill
```bash
clawdhub install self-improving-agent
```
Or copy manually:
```bash
cp -r self-improving-agent ~/.openclaw/skills/
```
### 2. Install the Hook (Optional)
Copy the hook to OpenClaw's hooks directory:
```bash
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
```
Enable the hook:
```bash
openclaw hooks enable self-improvement
```
### 3. Create Learning Files
Create the `.learnings/` directory in your workspace:
```bash
mkdir -p ~/.openclaw/workspace/.learnings
```
Or in the skill directory:
```bash
mkdir -p ~/.openclaw/skills/self-improving-agent/.learnings
```
## Injected Prompt Files
### AGENTS.md
Purpose: Multi-agent workflows and delegation patterns.
```markdown
# Agent Coordination
## Delegation Rules
- Use explore agent for open-ended codebase questions
- Spawn sub-agents for long-running tasks
- Use sessions_send for cross-session communication
## Session Handoff
When delegating to another session:
1. Provide full context in the handoff message
2. Include relevant file paths
3. Specify expected output format
```
### SOUL.md
Purpose: Behavioral guidelines and communication style.
```markdown
# Behavioral Guidelines
## Communication Style
- Be direct and concise
- Avoid unnecessary caveats and disclaimers
- Use technical language appropriate to context
## Error Handling
- Admit mistakes promptly
- Provide corrected information immediately
- Log significant errors to learnings
```
### TOOLS.md
Purpose: Tool capabilities, integration gotchas, local configuration.
```markdown
# Tool Knowledge
## Self-Improvement Skill
Log learnings to `.learnings/` for continuous improvement.
## Local Tools
- Document tool-specific gotchas here
- Note authentication requirements
- Track integration quirks
```
## Learning Workflow
### Capturing Learnings
1. **In-session**: Log to `.learnings/` as usual
2. **Cross-session**: Promote to workspace files
### Promotion Decision Tree
```
Is the learning projec
```
Editorial read
Docs & README
Docs source
CLAWHUB
Editorial quality
thin
Full README
Skill: self-improving-agent
Owner: pskoett
Summary: Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Clau...
Tags: latest:1.0.11
Version history:
v1.0.11 | 2026-02-22T16:28:33.337Z | user
No functional or content changes; OpenClaw-specific environment metadata was removed.
- Removed the OpenClaw `requires.env` metadata block from the skill definition.
- All usage guidance, logging formats, and workflow instructions remain unchanged.
- No new features or bug fixes included in this version.
- This update does not require any action from users.
- Ensures cleaner skill metadata and wider compatibility.
v1.0.10 | 2026-02-21T21:34:25.365Z | user
self-improving-agent v1.0.10
- Added attribution: notes that this skill was remade for OpenClaw from the original repository (pskoett-ai-skills).
- No functional or structural changes to the skill—documentation only update.
- No code files were changed in this version.
v1.0.9 | 2026-02-21T20:43:11.283Z | user
- Added OpenClaw integration metadata to SKILL.md (`metadata: openclaw: requires: env: [CLAUDE_TOOL_OUTPUT]`)
- No changes to general skill functionality or logging workflows
- This update enables better compatibility and environment validation for OpenClaw users
v1.0.8 | 2026-02-21T20:36:06.961Z | user
self-improving-agent 1.0.8
- Clarified that referencing agent files (AGENTS.md, CLAUDE.md, or .github/copilot-instructions.md) is an alternative to hook-based reminders in the generic setup section.
- No code or file changes; documentation only.
v1.0.7 | 2026-02-21T18:12:28.113Z | user
Version 1.0.7
- Added setup guidance: Now includes instructions to reference agent files (AGENTS.md, CLAUDE.md, or .github/copilot-instructions.md) to remind logging of learnings.
- Introduced a new "Self-Improvement Workflow" section for logging and promoting learnings.
- Clarified promotion steps for broadly applicable learnings, especially for non-OpenClaw environments.
- No code or file changes; documentation only update.
v1.0.6 | 2026-02-21T17:22:45.619Z | user
self-improving-agent 1.0.6 changelog:
- Added support for recurring pattern tracking: now supports logging and updating learnings with a stable `Pattern-Key` and new metadata fields like `Recurrence-Count`, `First-Seen`, and `Last-Seen`.
- Introduced a "simplify-and-harden" source for learnings, enabling simplified/hardened patterns to be tracked and improved over time.
- Updated Quick Reference and Learning Entry format to reflect new pattern tracking options.
- No code or file structure changes; documentation-only update.
v1.0.5 | 2026-02-03T07:20:44.219Z | user
- fixed hook sub-agent bug by removing hook for sub-agent processes
v1.0.4 | 2026-01-31T12:39:00.016Z | user
- Added detailed OpenClaw integration instructions, including workspace structure, installation methods, and inter-session communication tools.
- Introduced dedicated section for OpenClaw setup and workflow, separating generic and OpenClaw-specific usage.
- Included instructions for enabling automatic prompts via OpenClaw session hooks.
- Removed Clawdhub metadata file (.clawdhub/origin.json) from the repository.
- Clarified file organization and promotion targets for learnings within the OpenClaw workspace.
v1.0.3 | 2026-01-31T11:04:05.160Z | auto
- Initial OpenClaw integration: added OpenClaw hooks and documentation files.
- Rebranded workspace references from "clawdbot" to "OpenClaw" throughout documentation.
- Introduced new learnings directory structure and logging templates for errors, feature requests, and learnings.
- Updated and extended instructions in SKILL.md regarding entry promotion, review, and recurring pattern detection.
- Removed legacy reference to clawdbot integration.
v1.0.2 | 2026-01-26T09:42:32.012Z | auto
- Added guidelines for promoting workflow improvements, tool gotchas, and behavioral patterns to new clawdbot workspace files (`AGENTS.md`, `TOOLS.md`, `SOUL.md`).
- Updated promotion targets and instructions to include `SOUL.md` and `TOOLS.md` for better organization of learning types.
- Included references to clawdbot integration throughout documentation.
- No changes to entry format or basic logging workflow.
v1.0.1 | 2026-01-19T21:56:47.396Z | auto
self-improving-agent v1.0.1
- Added 7 new files, including structured templates (assets/SKILL-TEMPLATE.md, assets/LEARNINGS.md), documentation for examples and hooks, and scripts for error detection and skill extraction.
- Removed unscoped/old files (LEARNINGS.md, examples.md) in favor of new asset and reference structure.
- SKILL.md updated: clarified promotion targets to include `.github/copilot-instructions.md` alongside CLAUDE.md and AGENTS.md, improved instructions for file promotion and creation.
- Improved modularity and usability by providing templates and scripts to assist logging and review workflows.
v1.0.0 | 2026-01-05T17:03:18.365Z
Archive index:
Archive v1.0.11: 16 files, 24314 bytes
Files: .learnings/ERRORS.md (75b), .learnings/FEATURE_REQUESTS.md (84b), .learnings/LEARNINGS.md (99b), assets/LEARNINGS.md (1152b), assets/SKILL-TEMPLATE.md (3407b), hooks/openclaw/handler.js (1620b), hooks/openclaw/handler.ts (1872b), hooks/openclaw/HOOK.md (589b), references/examples.md (8291b), references/hooks-setup.md (4867b), references/openclaw-integration.md (5638b), scripts/activator.sh (680b), scripts/error-detector.sh (1317b), scripts/extract-skill.sh (5293b), SKILL.md (19704b), _meta.json (140b)
File v1.0.11: SKILL.md
name: self-improvement description: "Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks." metadata:
Self-Improvement Skill
Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
Quick Reference
| Situation | Action |
|-----------|--------|
| Command/operation fails | Log to .learnings/ERRORS.md |
| User corrects you | Log to .learnings/LEARNINGS.md with category correction |
| User wants missing feature | Log to .learnings/FEATURE_REQUESTS.md |
| API/external tool fails | Log to .learnings/ERRORS.md with integration details |
| Knowledge was outdated | Log to .learnings/LEARNINGS.md with category knowledge_gap |
| Found better approach | Log to .learnings/LEARNINGS.md with category best_practice |
| Simplify/Harden recurring patterns | Log/update .learnings/LEARNINGS.md with Source: simplify-and-harden and a stable Pattern-Key |
| Similar to existing entry | Link with **See Also**, consider priority bump |
| Broadly applicable learning | Promote to CLAUDE.md, AGENTS.md, and/or .github/copilot-instructions.md |
| Workflow improvements | Promote to AGENTS.md (OpenClaw workspace) |
| Tool gotchas | Promote to TOOLS.md (OpenClaw workspace) |
| Behavioral patterns | Promote to SOUL.md (OpenClaw workspace) |
OpenClaw Setup (Recommended)
OpenClaw is the primary platform for this skill. It uses workspace-based prompt injection with automatic skill loading.
Installation
Via ClawdHub (recommended):
clawdhub install self-improving-agent
Manual:
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
Remade for OpenClaw from the original repo: https://github.com/pskoett/pskoett-ai-skills - https://github.com/pskoett/pskoett-ai-skills/tree/main/skills/self-improvement
Workspace Structure
OpenClaw injects these files into every session:
~/.openclaw/workspace/
├── AGENTS.md # Multi-agent workflows, delegation patterns
├── SOUL.md # Behavioral guidelines, personality, principles
├── TOOLS.md # Tool capabilities, integration gotchas
├── MEMORY.md # Long-term memory (main session only)
├── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
└── .learnings/ # This skill's log files
├── LEARNINGS.md
├── ERRORS.md
└── FEATURE_REQUESTS.md
Create Learning Files
mkdir -p ~/.openclaw/workspace/.learnings
Then create the log files (or copy from assets/):
- `LEARNINGS.md` — corrections, knowledge gaps, best practices
- `ERRORS.md` — command failures, exceptions
- `FEATURE_REQUESTS.md` — user-requested capabilities
Promotion Targets
When learnings prove broadly applicable, promote them to workspace files:
| Learning Type | Promote To | Example |
|---------------|------------|---------|
| Behavioral patterns | SOUL.md | "Be concise, avoid disclaimers" |
| Workflow improvements | AGENTS.md | "Spawn sub-agents for long tasks" |
| Tool gotchas | TOOLS.md | "Git push needs auth configured first" |
Inter-Session Communication
OpenClaw provides tools to share learnings across sessions:
- sessions_list — View active/recent sessions
- sessions_history — Read another session's transcript
- sessions_send — Send a learning to another session
- sessions_spawn — Spawn a sub-agent for background work
Optional: Enable Hook
For automatic reminders at session start:
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
# Enable it
openclaw hooks enable self-improvement
See references/openclaw-integration.md for complete details.
Generic Setup (Other Agents)
For Claude Code, Codex, Copilot, or other agents, create .learnings/ in your project:
mkdir -p .learnings
Copy templates from assets/ or create files with headers.
Add a reference in your agent files (AGENTS.md, CLAUDE.md, or .github/copilot-instructions.md) to remind yourself to log learnings; this is an alternative to hook-based reminders.
Self-Improvement Workflow
When errors or corrections occur:
- Log to `.learnings/ERRORS.md`, `LEARNINGS.md`, or `FEATURE_REQUESTS.md`
- Review and promote broadly applicable learnings to:
  - `CLAUDE.md` - project facts and conventions
  - `AGENTS.md` - workflows and automation
  - `.github/copilot-instructions.md` - Copilot context
Logging Format
Learning Entry
Append to .learnings/LEARNINGS.md:
## [LRN-YYYYMMDD-XXX] category
**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
One-line description of what was learned
### Details
Full context: what happened, what was wrong, what's correct
### Suggested Action
Specific fix or improvement to make
### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)
- Pattern-Key: simplify.dead_code | harden.input_validation (optional, for recurring-pattern tracking)
- Recurrence-Count: 1 (optional)
- First-Seen: 2025-01-15 (optional)
- Last-Seen: 2025-01-15 (optional)
---
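Appending an entry in this format can be scripted; a minimal sketch with a hypothetical entry (the ID, area, and summary are illustrative, not from a real session):

```shell
# Sketch: append an illustrative correction entry in the documented format.
mkdir -p .learnings
ID="LRN-$(date +%Y%m%d)-001"   # sequential suffix is illustrative
cat >> .learnings/LEARNINGS.md <<EOF
## [$ID] correction
**Logged**: $(date -u +%Y-%m-%dT%H:%M:%SZ)
**Priority**: medium
**Status**: pending
**Area**: tests

### Summary
Project convention uses module-scoped pytest fixtures for DB connections

---
EOF
```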
Error Entry
Append to .learnings/ERRORS.md:
## [ERR-YYYYMMDD-XXX] skill_or_command_name
**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
Brief description of what failed
### Error
Actual error message or output
### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant
### Suggested Fix
If identifiable, what might resolve this
### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)
---
Feature Request Entry
Append to .learnings/FEATURE_REQUESTS.md:
## [FEAT-YYYYMMDD-XXX] capability_name
**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Requested Capability
What the user wanted to do
### User Context
Why they needed it, what problem they're solving
### Complexity Estimate
simple | medium | complex
### Suggested Implementation
How this could be built, what it might extend
### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name
---
ID Generation
Format: TYPE-YYYYMMDD-XXX
- TYPE: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or random 3 chars (e.g., `001`, `A7B`)
Examples: LRN-20250115-001, ERR-20250115-A3F, FEAT-20250115-002
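The sequential variant can be derived from the log itself; a sketch assuming the entry-heading format above (`next_id` is a hypothetical helper, not part of the skill):

```shell
# Sketch: next sequential ID for today, counting existing headings in the log.
next_id() {
  local prefix="$1-$(date +%Y%m%d)"   # e.g. LRN-20250115
  local n
  n=$(grep -c "^## \[$prefix-" .learnings/LEARNINGS.md 2>/dev/null)
  n=${n:-0}                            # missing log file counts as zero entries
  printf '%s-%03d\n' "$prefix" "$((n + 1))"
}
next_id LRN
```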
Resolving Entries
When an issue is fixed, update the entry:
- Change `**Status**: pending` → `**Status**: resolved`
- Add resolution block after Metadata:
### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
Other status values:
- `in_progress` - Actively being worked on
- `wont_fix` - Decided not to address (add reason in Resolution notes)
- `promoted` - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
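Flipping a status line can also be scripted; a sketch against a made-up entry (the entry ID and file content here are illustrative only):

```shell
# Sketch: mark one entry resolved in place. The entry is fabricated for the demo.
mkdir -p .learnings
printf '## [LRN-20250115-001] correction\n**Status**: pending\n---\n' > .learnings/LEARNINGS.md
# Restrict the substitution to the one entry, bounded by its trailing ---
sed -i.bak '/\[LRN-20250115-001\]/,/^---$/ s/\*\*Status\*\*: pending/**Status**: resolved/' .learnings/LEARNINGS.md
cat .learnings/LEARNINGS.md
```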
Promoting to Project Memory
When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
When to Promote
- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions
Promotion Targets
| Target | What Belongs There |
|--------|-------------------|
| CLAUDE.md | Project facts, conventions, gotchas for all Claude interactions |
| AGENTS.md | Agent-specific workflows, tool usage patterns, automation rules |
| .github/copilot-instructions.md | Project context and conventions for GitHub Copilot |
| SOUL.md | Behavioral guidelines, communication style, principles (OpenClaw workspace) |
| TOOLS.md | Tool capabilities, usage patterns, integration gotchas (OpenClaw workspace) |
How to Promote
- Distill the learning into a concise rule or fact
- Add to appropriate section in target file (create file if needed)
- Update original entry:
- Change `**Status**: pending` → `**Status**: promoted`
- Add `**Promoted**: CLAUDE.md`, `AGENTS.md`, or `.github/copilot-instructions.md`
Promotion Examples
Learning (verbose):
Project uses pnpm workspaces. Attempted `npm install` but failed. Lock file is `pnpm-lock.yaml`. Must use `pnpm install`.
In CLAUDE.md (concise):
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
Learning (verbose):
When modifying API endpoints, must regenerate TypeScript client. Forgetting this causes type mismatches at runtime.
In AGENTS.md (actionable):
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
Recurring Pattern Detection
If logging something similar to an existing entry:
- Search first: `grep -r "keyword" .learnings/`
- Link entries: Add `**See Also**: ERR-20250110-001` in Metadata
- Consider systemic fix: Recurring issues often indicate:
- Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
- Missing automation (→ add to AGENTS.md)
- Architectural problem (→ create tech debt ticket)
Simplify & Harden Feed
Use this workflow to ingest recurring patterns from the simplify-and-harden
skill and turn them into durable prompt guidance.
Ingestion Workflow
- Read `simplify_and_harden.learning_loop.candidates` from the task summary.
- For each candidate, use `pattern_key` as the stable dedupe key.
- Search `.learnings/LEARNINGS.md` for an existing entry with that key: `grep -n "Pattern-Key: <pattern_key>" .learnings/LEARNINGS.md`
- If found:
  - Increment `Recurrence-Count`
  - Update `Last-Seen`
  - Add `See Also` links to related entries/tasks
- If not found:
  - Create a new `LRN-...` entry
  - Set `Source: simplify-and-harden`
  - Set `Pattern-Key`, `Recurrence-Count: 1`, and `First-Seen`/`Last-Seen`
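The found/not-found branch of this workflow can be sketched in shell (`record_pattern` is a hypothetical helper; the appended entry body is abbreviated, not the full documented format):

```shell
# Sketch of the dedupe step keyed on Pattern-Key.
record_pattern() {
  local key="$1" log=".learnings/LEARNINGS.md"
  mkdir -p .learnings && touch "$log"
  if grep -q "Pattern-Key: $key" "$log"; then
    echo "existing entry for $key: bump Recurrence-Count and Last-Seen"
  else
    cat >> "$log" <<EOF
## [LRN-$(date +%Y%m%d)-001] best_practice
**Status**: pending
### Metadata
- Source: simplify-and-harden
- Pattern-Key: $key
- Recurrence-Count: 1
- First-Seen: $(date +%Y-%m-%d)
- Last-Seen: $(date +%Y-%m-%d)
EOF
  fi
}

record_pattern harden.input_validation   # first call creates the entry
record_pattern harden.input_validation   # second call detects the duplicate
```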
Promotion Rule (System Prompt Feedback)
Promote recurring patterns into agent context/system prompt files when all are true:
- `Recurrence-Count >= 3`
- Seen across at least 2 distinct tasks
- Occurred within a 30-day window
Promotion targets:
- `CLAUDE.md`
- `AGENTS.md`
- `.github/copilot-instructions.md`
- `SOUL.md`/`TOOLS.md` for OpenClaw workspace-level guidance when applicable
Write promoted rules as short prevention rules (what to do before/while coding), not long incident write-ups.
Periodic Review
Review .learnings/ at natural breakpoints:
When to Review
- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development
Quick Status Check
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l
# List pending high-priority items
grep -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["
# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
Review Actions
- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
Detection Triggers
Automatically log when you notice:
Corrections (→ learning with correction category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
Feature Requests (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
Knowledge Gaps (→ learning with knowledge_gap category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding
Errors (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
Priority Guidelines
| Priority | When to Use |
|----------|-------------|
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |
Area Tags
Use to filter learnings by codebase region:
| Area | Scope |
|------|-------|
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |
Best Practices
- Log immediately - context is freshest right after the issue
- Be specific - future agents need to understand quickly
- Include reproduction steps - especially for errors
- Link related files - makes fixes easier
- Suggest concrete fixes - not just "investigate"
- Use consistent categories - enables filtering
- Promote aggressively - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
- Review regularly - stale learnings lose value
Gitignore Options
Keep learnings local (per-developer):
.learnings/
Track learnings in repo (team-wide): Don't add to .gitignore - learnings become shared knowledge.
Hybrid (track templates, ignore entries):
.learnings/*.md
!.learnings/.gitkeep
Hook Integration
Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.
Quick Setup (Claude Code / Codex)
Create .claude/settings.json in your project:
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}]
}
}
This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
Full Setup (With Error Detection)
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}],
"PostToolUse": [{
"matcher": "Bash",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}]
}]
}
}
Available Hook Scripts
| Script | Hook Type | Purpose |
|--------|-----------|---------|
| scripts/activator.sh | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| scripts/error-detector.sh | PostToolUse (Bash) | Triggers on command errors |
See references/hooks-setup.md for detailed configuration and troubleshooting.
Automatic Skill Extraction
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
Skill Extraction Criteria
A learning qualifies for skill extraction when ANY of these apply:
| Criterion | Description |
|-----------|-------------|
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |
Extraction Workflow
- Identify candidate: Learning meets extraction criteria
- Run helper (or create manually):
  ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
  ./skills/self-improvement/scripts/extract-skill.sh skill-name
- Customize SKILL.md: Fill in template with learning content
- Update learning: Set status to `promoted_to_skill`, add `Skill-Path`
- Verify: Read skill in fresh session to ensure it's self-contained
Manual Extraction
If you prefer manual creation:
- Create
skills/<skill-name>/SKILL.md - Use template from
assets/SKILL-TEMPLATE.md - Follow Agent Skills spec:
- YAML frontmatter with
nameanddescription - Name must match folder name
- No README.md inside skill folder
- YAML frontmatter with
Extraction Detection Triggers
Watch for these signals that a learning should become a skill:
In conversation:
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"
In learning entries:
- Multiple
See Alsolinks (recurring issue) - High priority + resolved status
- Category:
best_practicewith broad applicability - User feedback praising the solution
Skill Quality Gates
Before extraction, verify:
- [ ] Solution is tested and working
- [ ] Description is clear without original context
- [ ] Code examples are self-contained
- [ ] No project-specific hardcoded values
- [ ] Follows skill naming conventions (lowercase, hyphens)
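One of these gates (frontmatter name matching the folder, per the Agent Skills spec section above) is easy to automate. A hypothetical checker, assuming a standard `name:` line in the frontmatter:

```shell
#!/usr/bin/env bash
# Hypothetical gate: the frontmatter `name:` must equal the skill folder name.
check_skill_name() {
  local dir="$1"
  local folder name
  folder=$(basename "$dir")
  name=$(sed -n 's/^name: *//p' "$dir/SKILL.md" | head -n1)
  [ "$name" = "$folder" ]
}

# Demo with a throwaway skill folder
root=$(mktemp -d)
mkdir -p "$root/docker-m1-fixes"
printf -- '---\nname: docker-m1-fixes\ndescription: "demo"\n---\n' \
  > "$root/docker-m1-fixes/SKILL.md"
check_skill_name "$root/docker-m1-fixes" && echo "frontmatter name matches folder"
rm -rf "$root"
```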
Multi-Agent Support
This skill works across different AI coding agents with agent-specific activation.
Claude Code
Activation: Hooks (UserPromptSubmit, PostToolUse)
Setup: .claude/settings.json with hook configuration
Detection: Automatic via hook scripts
Codex CLI
Activation: Hooks (same pattern as Claude Code)
Setup: .codex/settings.json with hook configuration
Detection: Automatic via hook scripts
GitHub Copilot
Activation: Manual (no hook support)
Setup: Add to .github/copilot-instructions.md:
## Self-Improvement
After solving non-obvious issues, consider logging to `.learnings/`:
1. Use format from self-improvement skill
2. Link related entries with See Also
3. Promote high-value learnings to skills
Ask in chat: "Should I log this as a learning?"
Detection: Manual review at session end
OpenClaw
Activation: Workspace injection + inter-agent messaging
Setup: See "OpenClaw Setup" section above
Detection: Via session tools and workspace files
Agent-Agnostic Guidance
Regardless of agent, apply self-improvement when you:
- Discover something non-obvious - solution wasn't immediate
- Correct yourself - initial approach was wrong
- Learn project conventions - discovered undocumented patterns
- Hit unexpected errors - especially if diagnosis was difficult
- Find better approaches - improved on your original solution
Copilot Chat Integration
For Copilot users, add this to your prompts when relevant:
After completing this task, evaluate if any learnings should be logged to
`.learnings/` using the self-improvement skill format.
Or use quick prompts:
- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"
File v1.0.11:_meta.json
{ "ownerId": "kn70cjr952qdec1nx70zs6wefn7ynq2t", "slug": "self-improving-agent", "version": "1.0.11", "publishedAt": 1771777713337 }
File v1.0.11:references/examples.md
Entry Examples
Concrete examples of well-formatted entries with all fields.
Learning: Correction
## [LRN-20250115-001] correction
**Logged**: 2025-01-15T10:30:00Z
**Priority**: high
**Status**: pending
**Area**: tests
### Summary
Incorrectly assumed pytest fixtures are scoped to function by default
### Details
When writing test fixtures, I assumed all fixtures were function-scoped.
User corrected that while function scope is the default, the codebase
convention uses module-scoped fixtures for database connections to
improve test performance.
### Suggested Action
When creating fixtures that involve expensive setup (DB, network),
check existing fixtures for scope patterns before defaulting to function scope.
### Metadata
- Source: user_feedback
- Related Files: tests/conftest.py
- Tags: pytest, testing, fixtures
---
Learning: Knowledge Gap (Resolved)
## [LRN-20250115-002] knowledge_gap
**Logged**: 2025-01-15T14:22:00Z
**Priority**: medium
**Status**: resolved
**Area**: config
### Summary
Project uses pnpm not npm for package management
### Details
Attempted to run `npm install` but project uses pnpm workspaces.
Lock file is `pnpm-lock.yaml`, not `package-lock.json`.
### Suggested Action
Check for `pnpm-lock.yaml` or `pnpm-workspace.yaml` before assuming npm.
Use `pnpm install` for this project.
### Metadata
- Source: error
- Related Files: pnpm-lock.yaml, pnpm-workspace.yaml
- Tags: package-manager, pnpm, setup
### Resolution
- **Resolved**: 2025-01-15T14:30:00Z
- **Commit/PR**: N/A - knowledge update
- **Notes**: Added to CLAUDE.md for future reference
---
Learning: Promoted to CLAUDE.md
## [LRN-20250115-003] best_practice
**Logged**: 2025-01-15T16:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: CLAUDE.md
**Area**: backend
### Summary
API responses must include correlation ID from request headers
### Details
All API responses should echo back the X-Correlation-ID header from
the request. This is required for distributed tracing. Responses
without this header break the observability pipeline.
### Suggested Action
Always include correlation ID passthrough in API handlers.
### Metadata
- Source: user_feedback
- Related Files: src/middleware/correlation.ts
- Tags: api, observability, tracing
---
Learning: Promoted to AGENTS.md
## [LRN-20250116-001] best_practice
**Logged**: 2025-01-16T09:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: AGENTS.md
**Area**: backend
### Summary
Must regenerate API client after OpenAPI spec changes
### Details
When modifying API endpoints, the TypeScript client must be regenerated.
Forgetting this causes type mismatches that only appear at runtime.
The generate script also runs validation.
### Suggested Action
Add to agent workflow: after any API changes, run `pnpm run generate:api`.
### Metadata
- Source: error
- Related Files: openapi.yaml, src/client/api.ts
- Tags: api, codegen, typescript
---
Error Entry
## [ERR-20250115-A3F] docker_build
**Logged**: 2025-01-15T09:15:00Z
**Priority**: high
**Status**: pending
**Area**: infra
### Summary
Docker build fails on M1 Mac due to platform mismatch
### Error
error: failed to solve: python:3.11-slim: no match for platform linux/arm64
### Context
- Command: `docker build -t myapp .`
- Dockerfile uses `FROM python:3.11-slim`
- Running on Apple Silicon (M1/M2)
### Suggested Fix
Add platform flag: `docker build --platform linux/amd64 -t myapp .`
Or update Dockerfile: `FROM --platform=linux/amd64 python:3.11-slim`
### Metadata
- Reproducible: yes
- Related Files: Dockerfile
---
Error Entry: Recurring Issue
## [ERR-20250120-B2C] api_timeout
**Logged**: 2025-01-20T11:30:00Z
**Priority**: critical
**Status**: pending
**Area**: backend
### Summary
Third-party payment API timeout during checkout
### Error
TimeoutError: Request to payments.example.com timed out after 30000ms
### Context
- Command: POST /api/checkout
- Timeout set to 30s
- Occurs during peak hours (lunch, evening)
### Suggested Fix
Implement retry with exponential backoff. Consider circuit breaker pattern.
### Metadata
- Reproducible: yes (during peak hours)
- Related Files: src/services/payment.ts
- See Also: ERR-20250115-X1Y, ERR-20250118-Z3W
---
Feature Request
## [FEAT-20250115-001] export_to_csv
**Logged**: 2025-01-15T16:45:00Z
**Priority**: medium
**Status**: pending
**Area**: backend
### Requested Capability
Export analysis results to CSV format
### User Context
User runs weekly reports and needs to share results with non-technical
stakeholders in Excel. Currently copies output manually.
### Complexity Estimate
simple
### Suggested Implementation
Add `--output csv` flag to the analyze command. Use standard csv module.
Could extend existing `--output json` pattern.
### Metadata
- Frequency: recurring
- Related Features: analyze command, json output
---
Feature Request: Resolved
## [FEAT-20250110-002] dark_mode
**Logged**: 2025-01-10T14:00:00Z
**Priority**: low
**Status**: resolved
**Area**: frontend
### Requested Capability
Dark mode support for the dashboard
### User Context
User works late hours and finds the bright interface straining.
Several other users have mentioned this informally.
### Complexity Estimate
medium
### Suggested Implementation
Use CSS variables for colors. Add toggle in user settings.
Consider system preference detection.
### Metadata
- Frequency: recurring
- Related Features: user settings, theme system
### Resolution
- **Resolved**: 2025-01-18T16:00:00Z
- **Commit/PR**: #142
- **Notes**: Implemented with system preference detection and manual toggle
---
Learning: Promoted to Skill
## [LRN-20250118-001] best_practice
**Logged**: 2025-01-18T11:00:00Z
**Priority**: high
**Status**: promoted_to_skill
**Skill-Path**: skills/docker-m1-fixes
**Area**: infra
### Summary
Docker build fails on Apple Silicon due to platform mismatch
### Details
When building Docker images on M1/M2 Macs, the build fails because
the base image doesn't have an ARM64 variant. This is a common issue
that affects many developers.
### Suggested Action
Add `--platform linux/amd64` to docker build command, or use
`FROM --platform=linux/amd64` in Dockerfile.
### Metadata
- Source: error
- Related Files: Dockerfile
- Tags: docker, arm64, m1, apple-silicon
- See Also: ERR-20250115-A3F, ERR-20250117-B2D
---
Extracted Skill Example
When the above learning is extracted as a skill, it becomes:
File: skills/docker-m1-fixes/SKILL.md
---
name: docker-m1-fixes
description: "Fixes Docker build failures on Apple Silicon (M1/M2). Use when docker build fails with platform mismatch errors."
---
# Docker M1 Fixes
Solutions for Docker build issues on Apple Silicon Macs.
## Quick Reference
| Error | Fix |
|-------|-----|
| `no match for platform linux/arm64` | Add `--platform linux/amd64` to build |
| Image runs but crashes | Use emulation or find ARM-compatible base |
## The Problem
Many Docker base images don't have ARM64 variants. When building on
Apple Silicon (M1/M2/M3), Docker attempts to pull ARM64 images by
default, causing platform mismatch errors.
## Solutions
### Option 1: Build Flag (Recommended)
Add platform flag to your build command:
\`\`\`bash
docker build --platform linux/amd64 -t myapp .
\`\`\`
### Option 2: Dockerfile Modification
Specify platform in the FROM instruction:
\`\`\`dockerfile
FROM --platform=linux/amd64 python:3.11-slim
\`\`\`
### Option 3: Docker Compose
Add platform to your service:
\`\`\`yaml
services:
app:
platform: linux/amd64
build: .
\`\`\`
## Trade-offs
| Approach | Pros | Cons |
|----------|------|------|
| Build flag | No file changes | Must remember flag |
| Dockerfile | Explicit, versioned | Affects all builds |
| Compose | Convenient for dev | Requires compose |
## Performance Note
Running AMD64 images on ARM64 uses Rosetta 2 emulation. This works
for development but may be slower. For production, find ARM-native
alternatives when possible.
## Source
- Learning ID: LRN-20250118-001
- Category: best_practice
- Extraction Date: 2025-01-18
File v1.0.11:references/hooks-setup.md
Hook Setup Guide
Configure automatic self-improvement triggers for AI coding agents.
Overview
Hooks enable proactive learning capture by injecting reminders at key moments:
- UserPromptSubmit: Reminder after each prompt to evaluate learnings
- PostToolUse (Bash): Error detection when commands fail
Claude Code Setup
Option 1: Project-Level Configuration
Create .claude/settings.json in your project root:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
],
"PostToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}
]
}
]
}
}
Option 2: User-Level Configuration
Add to ~/.claude/settings.json for global activation:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "~/.claude/skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
Minimal Setup (Activator Only)
For lower overhead, use only the UserPromptSubmit hook:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
Codex CLI Setup
Codex uses the same hook system as Claude Code. Create .codex/settings.json:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
GitHub Copilot Setup
Copilot doesn't support hooks directly. Instead, add guidance to .github/copilot-instructions.md:
## Self-Improvement
After completing tasks that involved:
- Debugging non-obvious issues
- Discovering workarounds
- Learning project-specific patterns
- Resolving unexpected errors
Consider logging the learning to `.learnings/` using the format from the self-improvement skill.
For high-value learnings that would benefit other sessions, consider skill extraction.
Verification
Test Activator Hook
- Enable the hook configuration
- Start a new Claude Code session
- Send any prompt
- Verify you see `<self-improvement-reminder>` in the context
Test Error Detector Hook
- Enable PostToolUse hook for Bash
- Run a command that fails: `ls /nonexistent/path`
- Verify you see `<error-detected>` reminder
Dry Run Extract Script
./skills/self-improvement/scripts/extract-skill.sh test-skill --dry-run
Expected output shows the skill scaffold that would be created.
Troubleshooting
Hook Not Triggering
- Check script permissions: `chmod +x scripts/*.sh`
- Verify path: Use absolute paths or paths relative to project root
- Check settings location: Project vs user-level settings
- Restart session: Hooks are loaded at session start
Permission Denied
chmod +x ./skills/self-improvement/scripts/activator.sh
chmod +x ./skills/self-improvement/scripts/error-detector.sh
chmod +x ./skills/self-improvement/scripts/extract-skill.sh
Script Not Found
If using relative paths, ensure you're in the correct directory or use absolute paths:
{
"command": "/absolute/path/to/skills/self-improvement/scripts/activator.sh"
}
Too Much Overhead
If the activator feels intrusive:
- Use minimal setup: Only UserPromptSubmit, skip PostToolUse
- Add matcher filter: Only trigger for certain prompts:
{
"matcher": "fix|debug|error|issue",
"hooks": [...]
}
Hook Output Budget
The activator is designed to be lightweight:
- Target: ~50-100 tokens per activation
- Content: Structured reminder, not verbose instructions
- Format: XML tags for easy parsing
If you need to reduce overhead further, you can edit activator.sh to output less text.
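For reference, a stripped-down activator along these lines stays well under the stated budget. This is a sketch, not the shipped activator.sh:

```shell
#!/usr/bin/env bash
# Minimal activator sketch: emit a short structured reminder and nothing else.
emit_reminder() {
  cat <<'REMINDER'
<self-improvement-reminder>
Check .learnings/ for related entries. Log corrections, command failures,
and non-obvious discoveries before closing out the task.
</self-improvement-reminder>
REMINDER
}

emit_reminder
```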
Security Considerations
- Hook scripts run with the same permissions as Claude Code
- Scripts only output text; they don't modify files or run commands
- Error detector reads the `CLAUDE_TOOL_OUTPUT` environment variable
- All scripts are opt-in (you must configure them explicitly)
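A sketch of how a detector can read that variable. The failure patterns here are illustrative; the shipped error-detector.sh may match differently.

```shell
#!/usr/bin/env bash
# Sketch: scan the last tool output for failure markers and, if any are found,
# emit the <error-detected> reminder.
detect_error() {
  printf '%s' "$1" | grep -qiE 'error|exception|traceback|command not found|timed out'
}

if detect_error "${CLAUDE_TOOL_OUTPUT:-}"; then
  echo "<error-detected>Consider logging this failure to .learnings/ERRORS.md</error-detected>"
fi
```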
Disabling Hooks
To temporarily disable without removing configuration:
- Disable in settings (JSON has no comment syntax, so empty the key instead):
{
"hooks": {
"UserPromptSubmit": []
}
}
- Or delete the settings file: Hooks won't run without configuration
File v1.0.11:references/openclaw-integration.md
OpenClaw Integration
Complete setup and usage guide for integrating the self-improvement skill with OpenClaw.
Overview
OpenClaw uses workspace-based prompt injection combined with event-driven hooks. Context is injected from workspace files at session start, and hooks can trigger on lifecycle events.
Workspace Structure
~/.openclaw/
├── workspace/ # Working directory
│ ├── AGENTS.md # Multi-agent coordination patterns
│ ├── SOUL.md # Behavioral guidelines and personality
│ ├── TOOLS.md # Tool capabilities and gotchas
│ ├── MEMORY.md # Long-term memory (main session only)
│ └── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
├── skills/ # Installed skills
│ └── <skill-name>/
│ └── SKILL.md
└── hooks/ # Custom hooks
└── <hook-name>/
├── HOOK.md
└── handler.ts
Quick Setup
1. Install the Skill
clawdhub install self-improving-agent
Or copy manually:
cp -r self-improving-agent ~/.openclaw/skills/
2. Install the Hook (Optional)
Copy the hook to OpenClaw's hooks directory:
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
Enable the hook:
openclaw hooks enable self-improvement
3. Create Learning Files
Create the .learnings/ directory in your workspace:
mkdir -p ~/.openclaw/workspace/.learnings
Or in the skill directory:
mkdir -p ~/.openclaw/skills/self-improving-agent/.learnings
Injected Prompt Files
AGENTS.md
Purpose: Multi-agent workflows and delegation patterns.
# Agent Coordination
## Delegation Rules
- Use explore agent for open-ended codebase questions
- Spawn sub-agents for long-running tasks
- Use sessions_send for cross-session communication
## Session Handoff
When delegating to another session:
1. Provide full context in the handoff message
2. Include relevant file paths
3. Specify expected output format
SOUL.md
Purpose: Behavioral guidelines and communication style.
# Behavioral Guidelines
## Communication Style
- Be direct and concise
- Avoid unnecessary caveats and disclaimers
- Use technical language appropriate to context
## Error Handling
- Admit mistakes promptly
- Provide corrected information immediately
- Log significant errors to learnings
TOOLS.md
Purpose: Tool capabilities, integration gotchas, local configuration.
# Tool Knowledge
## Self-Improvement Skill
Log learnings to `.learnings/` for continuous improvement.
## Local Tools
- Document tool-specific gotchas here
- Note authentication requirements
- Track integration quirks
Learning Workflow
Capturing Learnings
- In-session: Log to `.learnings/` as usual
- Cross-session: Promote to workspace files
Promotion Decision Tree
Is the learning project-specific?
├── Yes → Keep in .learnings/
└── No → Is it behavioral/style-related?
├── Yes → Promote to SOUL.md
└── No → Is it tool-related?
├── Yes → Promote to TOOLS.md
└── No → Promote to AGENTS.md (workflow)
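The same tree as a small function (a hypothetical sketch; the yes/no answers are assumed to come from reviewing the entry):

```shell
#!/usr/bin/env bash
# Mirror of the promotion decision tree above.
promotion_target() {
  local project_specific="$1" behavioral="$2" tool_related="$3"
  if [ "$project_specific" = yes ]; then
    echo ".learnings/"
  elif [ "$behavioral" = yes ]; then
    echo "SOUL.md"
  elif [ "$tool_related" = yes ]; then
    echo "TOOLS.md"
  else
    echo "AGENTS.md"
  fi
}

promotion_target no yes no   # prints: SOUL.md
```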
Promotion Format Examples
From learning:
Git push to GitHub fails without auth configured - triggers desktop prompt
To TOOLS.md:
## Git
- Don't push without confirming auth is configured
- Use `gh auth status` to check GitHub CLI auth
Inter-Agent Communication
OpenClaw provides tools for cross-session communication:
sessions_list
View active and recent sessions:
sessions_list(activeMinutes=30, messageLimit=3)
sessions_history
Read transcript from another session:
sessions_history(sessionKey="session-id", limit=50)
sessions_send
Send message to another session:
sessions_send(sessionKey="session-id", message="Learning: API requires X-Custom-Header")
sessions_spawn
Spawn a background sub-agent:
sessions_spawn(task="Research X and report back", label="research")
Available Hook Events
| Event | When It Fires |
|-------|---------------|
| agent:bootstrap | Before workspace files inject |
| command:new | When /new command issued |
| command:reset | When /reset command issued |
| command:stop | When /stop command issued |
| gateway:startup | When gateway starts |
Detection Triggers
Standard Triggers
- User corrections ("No, that's wrong...")
- Command failures (non-zero exit codes)
- API errors
- Knowledge gaps
OpenClaw-Specific Triggers
| Trigger | Action |
|---------|--------|
| Tool call error | Log to TOOLS.md with tool name |
| Session handoff confusion | Log to AGENTS.md with delegation pattern |
| Model behavior surprise | Log to SOUL.md with expected vs actual |
| Skill issue | Log to .learnings/ or report upstream |
Verification
Check hook is registered:
openclaw hooks list
Check skill is loaded:
openclaw status
Troubleshooting
Hook not firing
- Ensure hooks enabled in config
- Restart gateway after config changes
- Check gateway logs for errors
Learnings not persisting
- Verify the `.learnings/` directory exists
- Check file permissions
- Ensure workspace path is configured correctly
Skill not loading
- Check skill is in skills directory
- Verify SKILL.md has correct frontmatter
- Run `openclaw status` to see loaded skills
File v1.0.11:.learnings/ERRORS.md
Errors Log
Command failures, exceptions, and unexpected behaviors.
File v1.0.11:.learnings/FEATURE_REQUESTS.md
Feature Requests
Capabilities requested by user that don't currently exist.
File v1.0.11:.learnings/LEARNINGS.md
Learnings Log
Captured learnings, corrections, and discoveries. Review before major tasks.
File v1.0.11:assets/LEARNINGS.md
Learnings
Corrections, insights, and knowledge gaps captured during development.
Categories: correction | insight | knowledge_gap | best_practice
Areas: frontend | backend | infra | tests | docs | config
Statuses: pending | in_progress | resolved | wont_fix | promoted | promoted_to_skill
Status Definitions
| Status | Meaning |
|--------|---------|
| pending | Not yet addressed |
| in_progress | Actively being worked on |
| resolved | Issue fixed or knowledge integrated |
| wont_fix | Decided not to address (reason in Resolution) |
| promoted | Elevated to CLAUDE.md, AGENTS.md, or copilot-instructions.md |
| promoted_to_skill | Extracted as a reusable skill |
Skill Extraction Fields
When a learning is promoted to a skill, add these fields:
**Status**: promoted_to_skill
**Skill-Path**: skills/skill-name
Example:
## [LRN-20250115-001] best_practice
**Logged**: 2025-01-15T10:00:00Z
**Priority**: high
**Status**: promoted_to_skill
**Skill-Path**: skills/docker-m1-fixes
**Area**: infra
### Summary
Docker build fails on Apple Silicon due to platform mismatch
...
File v1.0.11:assets/SKILL-TEMPLATE.md
Skill Template
Template for creating skills extracted from learnings. Copy and customize.
SKILL.md Template
---
name: skill-name-here
description: "Concise description of when and why to use this skill. Include trigger conditions."
---
# Skill Name
Brief introduction explaining the problem this skill solves and its origin.
## Quick Reference
| Situation | Action |
|-----------|--------|
| [Trigger 1] | [Action 1] |
| [Trigger 2] | [Action 2] |
## Background
Why this knowledge matters. What problems it prevents. Context from the original learning.
## Solution
### Step-by-Step
1. First step with code or command
2. Second step
3. Verification step
### Code Example
\`\`\`language
// Example code demonstrating the solution
\`\`\`
## Common Variations
- **Variation A**: Description and how to handle
- **Variation B**: Description and how to handle
## Gotchas
- Warning or common mistake #1
- Warning or common mistake #2
## Related
- Link to related documentation
- Link to related skill
## Source
Extracted from learning entry.
- **Learning ID**: LRN-YYYYMMDD-XXX
- **Original Category**: correction | insight | knowledge_gap | best_practice
- **Extraction Date**: YYYY-MM-DD
Minimal Template
For simple skills that don't need all sections:
---
name: skill-name-here
description: "What this skill does and when to use it."
---
# Skill Name
[Problem statement in one sentence]
## Solution
[Direct solution with code/commands]
## Source
- Learning ID: LRN-YYYYMMDD-XXX
Template with Scripts
For skills that include executable helpers:
---
name: skill-name-here
description: "What this skill does and when to use it."
---
# Skill Name
[Introduction]
## Quick Reference
| Command | Purpose |
|---------|---------|
| `./scripts/helper.sh` | [What it does] |
| `./scripts/validate.sh` | [What it does] |
## Usage
### Automated (Recommended)
\`\`\`bash
./skills/skill-name/scripts/helper.sh [args]
\`\`\`
### Manual Steps
1. Step one
2. Step two
## Scripts
| Script | Description |
|--------|-------------|
| `scripts/helper.sh` | Main utility |
| `scripts/validate.sh` | Validation checker |
## Source
- Learning ID: LRN-YYYYMMDD-XXX
Naming Conventions
- Skill name: lowercase, hyphens for spaces
  - Good: `docker-m1-fixes`, `api-timeout-patterns`
  - Bad: `Docker_M1_Fixes`, `APITimeoutPatterns`
- Description: Start with action verb, mention trigger
  - Good: "Handles Docker build failures on Apple Silicon. Use when builds fail with platform mismatch."
  - Bad: "Docker stuff"
- Files:
  - `SKILL.md` - Required, main documentation
  - `scripts/` - Optional, executable code
  - `references/` - Optional, detailed docs
  - `assets/` - Optional, templates
Extraction Checklist
Before creating a skill from a learning:
- [ ] Learning is verified (status: resolved)
- [ ] Solution is broadly applicable (not one-off)
- [ ] Content is complete (has all needed context)
- [ ] Name follows conventions
- [ ] Description is concise but informative
- [ ] Quick Reference table is actionable
- [ ] Code examples are tested
- [ ] Source learning ID is recorded
After creating:
- [ ] Update original learning with `promoted_to_skill` status
- [ ] Add `Skill-Path: skills/skill-name` to learning metadata
- [ ] Test skill by reading it in a fresh session
File v1.0.11:hooks/openclaw/HOOK.md
name: self-improvement
description: "Injects self-improvement reminder during agent bootstrap"
metadata: {"openclaw":{"emoji":"🧠","events":["agent:bootstrap"]}}
Self-Improvement Hook
Injects a reminder to evaluate learnings during agent bootstrap.
What It Does
- Fires on `agent:bootstrap` (before workspace files are injected)
- Adds a reminder block to check `.learnings/` for relevant entries
- Prompts the agent to log corrections, errors, and discoveries
Configuration
No configuration needed. Enable with:
openclaw hooks enable self-improvement
Archive v1.0.10: 16 files, 24352 bytes
Files: .learnings/ERRORS.md (75b), .learnings/FEATURE_REQUESTS.md (84b), .learnings/LEARNINGS.md (99b), assets/LEARNINGS.md (1152b), assets/SKILL-TEMPLATE.md (3407b), hooks/openclaw/handler.js (1620b), hooks/openclaw/handler.ts (1872b), hooks/openclaw/HOOK.md (589b), references/examples.md (8291b), references/hooks-setup.md (4867b), references/openclaw-integration.md (5638b), scripts/activator.sh (680b), scripts/error-detector.sh (1317b), scripts/extract-skill.sh (5293b), SKILL.md (19770b), _meta.json (140b)
File v1.0.10:SKILL.md
name: self-improvement
description: "Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks."
metadata:
  openclaw:
    requires:
      env:
        - CLAUDE_TOOL_OUTPUT
Self-Improvement Skill
Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
Quick Reference
| Situation | Action |
|-----------|--------|
| Command/operation fails | Log to .learnings/ERRORS.md |
| User corrects you | Log to .learnings/LEARNINGS.md with category correction |
| User wants missing feature | Log to .learnings/FEATURE_REQUESTS.md |
| API/external tool fails | Log to .learnings/ERRORS.md with integration details |
| Knowledge was outdated | Log to .learnings/LEARNINGS.md with category knowledge_gap |
| Found better approach | Log to .learnings/LEARNINGS.md with category best_practice |
| Simplify/Harden recurring patterns | Log/update .learnings/LEARNINGS.md with Source: simplify-and-harden and a stable Pattern-Key |
| Similar to existing entry | Link with **See Also**, consider priority bump |
| Broadly applicable learning | Promote to CLAUDE.md, AGENTS.md, and/or .github/copilot-instructions.md |
| Workflow improvements | Promote to AGENTS.md (OpenClaw workspace) |
| Tool gotchas | Promote to TOOLS.md (OpenClaw workspace) |
| Behavioral patterns | Promote to SOUL.md (OpenClaw workspace) |
OpenClaw Setup (Recommended)
OpenClaw is the primary platform for this skill. It uses workspace-based prompt injection with automatic skill loading.
Installation
Via ClawdHub (recommended):
clawdhub install self-improving-agent
Manual:
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
Adapted for OpenClaw from the original repo: https://github.com/pskoett/pskoett-ai-skills (skill source: https://github.com/pskoett/pskoett-ai-skills/tree/main/skills/self-improvement)
Workspace Structure
OpenClaw injects these files into every session:
~/.openclaw/workspace/
├── AGENTS.md # Multi-agent workflows, delegation patterns
├── SOUL.md # Behavioral guidelines, personality, principles
├── TOOLS.md # Tool capabilities, integration gotchas
├── MEMORY.md # Long-term memory (main session only)
├── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
└── .learnings/ # This skill's log files
├── LEARNINGS.md
├── ERRORS.md
└── FEATURE_REQUESTS.md
Create Learning Files
mkdir -p ~/.openclaw/workspace/.learnings
Then create the log files (or copy from assets/):
- LEARNINGS.md — corrections, knowledge gaps, best practices
- ERRORS.md — command failures, exceptions
- FEATURE_REQUESTS.md — user-requested capabilities
Promotion Targets
When learnings prove broadly applicable, promote them to workspace files:
| Learning Type | Promote To | Example |
|---------------|------------|---------|
| Behavioral patterns | SOUL.md | "Be concise, avoid disclaimers" |
| Workflow improvements | AGENTS.md | "Spawn sub-agents for long tasks" |
| Tool gotchas | TOOLS.md | "Git push needs auth configured first" |
Inter-Session Communication
OpenClaw provides tools to share learnings across sessions:
- sessions_list — View active/recent sessions
- sessions_history — Read another session's transcript
- sessions_send — Send a learning to another session
- sessions_spawn — Spawn a sub-agent for background work
Optional: Enable Hook
For automatic reminders at session start:
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
# Enable it
openclaw hooks enable self-improvement
See references/openclaw-integration.md for complete details.
Generic Setup (Other Agents)
For Claude Code, Codex, Copilot, or other agents, create .learnings/ in your project:
mkdir -p .learnings
Copy templates from assets/ or create files with headers.
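If you prefer a script, the bootstrap can be sketched as follows; the header text is illustrative, not mandated by this skill:

```shell
# Create .learnings/ with minimal headers, skipping files that already exist
mkdir -p .learnings
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  [ -f ".learnings/$f.md" ] || printf '# %s\n\n' "$f" > ".learnings/$f.md"
done
```

Re-running it is safe: existing files are left untouched.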
Add a reference to your agent files (AGENTS.md, CLAUDE.md, or .github/copilot-instructions.md) as a reminder to log learnings. This is an alternative to hook-based reminders.
Self-Improvement Workflow
When errors or corrections occur:
1. Log to .learnings/ERRORS.md, LEARNINGS.md, or FEATURE_REQUESTS.md
2. Review and promote broadly applicable learnings to:
   - CLAUDE.md — project facts and conventions
   - AGENTS.md — workflows and automation
   - .github/copilot-instructions.md — Copilot context
Logging Format
Learning Entry
Append to .learnings/LEARNINGS.md:
## [LRN-YYYYMMDD-XXX] category
**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
One-line description of what was learned
### Details
Full context: what happened, what was wrong, what's correct
### Suggested Action
Specific fix or improvement to make
### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)
- Pattern-Key: simplify.dead_code | harden.input_validation (optional, for recurring-pattern tracking)
- Recurrence-Count: 1 (optional)
- First-Seen: 2025-01-15 (optional)
- Last-Seen: 2025-01-15 (optional)
---
Error Entry
Append to .learnings/ERRORS.md:
## [ERR-YYYYMMDD-XXX] skill_or_command_name
**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
Brief description of what failed
### Error
Actual error message or output
### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant
### Suggested Fix
If identifiable, what might resolve this
### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)
---
Feature Request Entry
Append to .learnings/FEATURE_REQUESTS.md:
## [FEAT-YYYYMMDD-XXX] capability_name
**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Requested Capability
What the user wanted to do
### User Context
Why they needed it, what problem they're solving
### Complexity Estimate
simple | medium | complex
### Suggested Implementation
How this could be built, what it might extend
### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name
---
ID Generation
Format: TYPE-YYYYMMDD-XXX
- TYPE: LRN (learning), ERR (error), FEAT (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or random 3 chars (e.g., 001, A7B)
Examples: LRN-20250115-001, ERR-20250115-A3F, FEAT-20250115-002
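The sequential variant can be generated mechanically; a minimal sketch, assuming entry headers follow the `## [LRN-...]` format shown in the Logging Format section:

```shell
# Next sequential learning ID for today, based on how many entries already exist
today=$(date +%Y%m%d)
count=$(grep -c "^## \[LRN-${today}-" .learnings/LEARNINGS.md 2>/dev/null || true)
printf 'LRN-%s-%03d\n' "$today" "$(( ${count:-0} + 1 ))"
```

With no log file present, this prints `LRN-<date>-001`.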
Resolving Entries
When an issue is fixed, update the entry:
1. Change **Status**: pending → **Status**: resolved
2. Add a resolution block after Metadata:
### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
Other status values:
- in_progress — Actively being worked on
- wont_fix — Decided not to address (add reason in Resolution notes)
- promoted — Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
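Flipping a status can be scripted; a sketch assuming GNU sed, entries terminated by `---` as in the templates above, and a hypothetical entry ID:

```shell
# Mark one entry resolved in place (GNU sed; the ID is a hypothetical example)
id="LRN-20250115-001"
file=".learnings/LEARNINGS.md"
if [ -f "$file" ]; then
  sed -i "/^## \[${id}\]/,/^---\$/ s/\*\*Status\*\*: pending/**Status**: resolved/" "$file"
fi
```

The address range limits the substitution to the one entry, so other pending entries are untouched.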
Promoting to Project Memory
When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
When to Promote
- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions
Promotion Targets
| Target | What Belongs There |
|--------|-------------------|
| CLAUDE.md | Project facts, conventions, gotchas for all Claude interactions |
| AGENTS.md | Agent-specific workflows, tool usage patterns, automation rules |
| .github/copilot-instructions.md | Project context and conventions for GitHub Copilot |
| SOUL.md | Behavioral guidelines, communication style, principles (OpenClaw workspace) |
| TOOLS.md | Tool capabilities, usage patterns, integration gotchas (OpenClaw workspace) |
How to Promote
1. Distill the learning into a concise rule or fact
2. Add to the appropriate section in the target file (create the file if needed)
3. Update the original entry:
   - Change **Status**: pending → **Status**: promoted
   - Add **Promoted**: CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
Promotion Examples
Learning (verbose):
Project uses pnpm workspaces. Attempted `npm install` but failed. Lock file is `pnpm-lock.yaml`. Must use `pnpm install`.
In CLAUDE.md (concise):
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
Learning (verbose):
When modifying API endpoints, must regenerate TypeScript client. Forgetting this causes type mismatches at runtime.
In AGENTS.md (actionable):
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
Recurring Pattern Detection
If logging something similar to an existing entry:
- Search first: grep -r "keyword" .learnings/
- Link entries: Add **See Also**: ERR-20250110-001 in Metadata
- Bump priority if the issue keeps recurring
- Consider systemic fix: Recurring issues often indicate:
- Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
- Missing automation (→ add to AGENTS.md)
- Architectural problem (→ create tech debt ticket)
Simplify & Harden Feed
Use this workflow to ingest recurring patterns from the simplify-and-harden
skill and turn them into durable prompt guidance.
Ingestion Workflow
1. Read simplify_and_harden.learning_loop.candidates from the task summary.
2. For each candidate, use pattern_key as the stable dedupe key.
3. Search .learnings/LEARNINGS.md for an existing entry with that key:
   grep -n "Pattern-Key: <pattern_key>" .learnings/LEARNINGS.md
4. If found:
   - Increment Recurrence-Count
   - Update Last-Seen
   - Add See Also links to related entries/tasks
5. If not found:
   - Create a new LRN-... entry
   - Set Source: simplify-and-harden
   - Set Pattern-Key, Recurrence-Count: 1, and First-Seen/Last-Seen
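The found-entry bookkeeping can be sketched with POSIX awk; the field positions assume the metadata layout from the Logging Format section, and the key is a hypothetical example:

```shell
# Bump Recurrence-Count and refresh Last-Seen for one Pattern-Key
key="simplify.dead_code"
today=$(date +%Y-%m-%d)
file=".learnings/LEARNINGS.md"
if [ -f "$file" ]; then
  awk -v key="$key" -v today="$today" '
    /^- Pattern-Key:/               { inkey = ($3 == key) }
    inkey && /^- Recurrence-Count:/ { $3 = $3 + 1 }
    inkey && /^- Last-Seen:/        { $3 = today }
    { print }
  ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
fi
```

The `inkey` flag scopes updates to the metadata lines that follow the matching Pattern-Key.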
Promotion Rule (System Prompt Feedback)
Promote recurring patterns into agent context/system prompt files when all are true:
- Recurrence-Count >= 3
- Seen across at least 2 distinct tasks
- Occurred within a 30-day window
Promotion targets:
- CLAUDE.md
- AGENTS.md
- .github/copilot-instructions.md
- SOUL.md / TOOLS.md for OpenClaw workspace-level guidance when applicable
Write promoted rules as short prevention rules (what to do before/while coding), not long incident write-ups.
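Candidates that meet the recurrence threshold can be listed mechanically; a sketch assuming the `Pattern-Key`/`Recurrence-Count` metadata lines from the Logging Format section:

```shell
# List Pattern-Keys whose Recurrence-Count is at or above the promotion threshold
file=".learnings/LEARNINGS.md"
if [ -f "$file" ]; then
  awk '/^- Pattern-Key:/      { k = $3 }
       /^- Recurrence-Count:/ { if ($3 + 0 >= 3) print k }' "$file"
fi
```

The distinct-tasks and 30-day checks still need a manual pass over the See Also and First-Seen/Last-Seen fields.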
Periodic Review
Review .learnings/ at natural breakpoints:
When to Review
- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development
Quick Status Check
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l
# List pending high-priority items (-h drops filename prefixes so "## [" stays matchable)
grep -h -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["
# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
Review Actions
- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
Detection Triggers
Automatically log when you notice:
Corrections (→ learning with correction category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
Feature Requests (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
Knowledge Gaps (→ learning with knowledge_gap category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding
Errors (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
Priority Guidelines
| Priority | When to Use |
|----------|-------------|
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |
Area Tags
Use to filter learnings by codebase region:
| Area | Scope |
|------|-------|
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |
Best Practices
- Log immediately - context is freshest right after the issue
- Be specific - future agents need to understand quickly
- Include reproduction steps - especially for errors
- Link related files - makes fixes easier
- Suggest concrete fixes - not just "investigate"
- Use consistent categories - enables filtering
- Promote aggressively - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
- Review regularly - stale learnings lose value
Gitignore Options
Keep learnings local (per-developer):
.learnings/
Track learnings in repo (team-wide): Don't add to .gitignore - learnings become shared knowledge.
Hybrid (track templates, ignore entries):
.learnings/*.md
!.learnings/.gitkeep
Hook Integration
Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.
Quick Setup (Claude Code / Codex)
Create .claude/settings.json in your project:
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}]
}
}
This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
Full Setup (With Error Detection)
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}],
"PostToolUse": [{
"matcher": "Bash",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}]
}]
}
}
Available Hook Scripts
| Script | Hook Type | Purpose |
|--------|-----------|---------|
| scripts/activator.sh | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| scripts/error-detector.sh | PostToolUse (Bash) | Triggers on command errors |
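The error-detector's core logic can be approximated in a few lines; the trigger patterns and message text below are assumptions modeled on this skill's conventions, not the shipped script:

```shell
# Minimal error-detector sketch: scan the last tool output for failure markers.
# CLAUDE_TOOL_OUTPUT is the env var this skill declares as required.
case "${CLAUDE_TOOL_OUTPUT:-}" in
  *"command not found"*|*Traceback*|*"error:"*)
    printf '<error-detected>Consider logging this failure to .learnings/ERRORS.md</error-detected>\n'
    ;;
esac
```

When no marker matches, the script prints nothing, so it adds zero context overhead on successful commands.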
See references/hooks-setup.md for detailed configuration and troubleshooting.
Automatic Skill Extraction
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
Skill Extraction Criteria
A learning qualifies for skill extraction when ANY of these apply:
| Criterion | Description |
|-----------|-------------|
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |
Extraction Workflow
1. Identify candidate: Learning meets extraction criteria
2. Run helper (or create manually):
   ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
   ./skills/self-improvement/scripts/extract-skill.sh skill-name
3. Customize SKILL.md: Fill in the template with the learning content
4. Update learning: Set status to promoted_to_skill, add Skill-Path
5. Verify: Read the skill in a fresh session to ensure it's self-contained
Manual Extraction
If you prefer manual creation:
1. Create skills/<skill-name>/SKILL.md
2. Use template from assets/SKILL-TEMPLATE.md
3. Follow Agent Skills spec:
   - YAML frontmatter with name and description
   - Name must match folder name
   - No README.md inside skill folder
Extraction Detection Triggers
Watch for these signals that a learning should become a skill:
In conversation:
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"
In learning entries:
- Multiple See Also links (recurring issue)
- High priority + resolved status
- Category: best_practice with broad applicability
- User feedback praising the solution
Skill Quality Gates
Before extraction, verify:
- [ ] Solution is tested and working
- [ ] Description is clear without original context
- [ ] Code examples are self-contained
- [ ] No project-specific hardcoded values
- [ ] Follows skill naming conventions (lowercase, hyphens)
Multi-Agent Support
This skill works across different AI coding agents with agent-specific activation.
Claude Code
Activation: Hooks (UserPromptSubmit, PostToolUse)
Setup: .claude/settings.json with hook configuration
Detection: Automatic via hook scripts
Codex CLI
Activation: Hooks (same pattern as Claude Code)
Setup: .codex/settings.json with hook configuration
Detection: Automatic via hook scripts
GitHub Copilot
Activation: Manual (no hook support)
Setup: Add to .github/copilot-instructions.md:
## Self-Improvement
After solving non-obvious issues, consider logging to `.learnings/`:
1. Use format from self-improvement skill
2. Link related entries with See Also
3. Promote high-value learnings to skills
Ask in chat: "Should I log this as a learning?"
Detection: Manual review at session end
OpenClaw
Activation: Workspace injection + inter-agent messaging Setup: See "OpenClaw Setup" section above Detection: Via session tools and workspace files
Agent-Agnostic Guidance
Regardless of agent, apply self-improvement when you:
- Discover something non-obvious - solution wasn't immediate
- Correct yourself - initial approach was wrong
- Learn project conventions - discovered undocumented patterns
- Hit unexpected errors - especially if diagnosis was difficult
- Find better approaches - improved on your original solution
Copilot Chat Integration
For Copilot users, add this to your prompts when relevant:
After completing this task, evaluate if any learnings should be logged to .learnings/ using the self-improvement skill format.
Or use quick prompts:
- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"
File v1.0.10:_meta.json
{ "ownerId": "kn70cjr952qdec1nx70zs6wefn7ynq2t", "slug": "self-improving-agent", "version": "1.0.10", "publishedAt": 1771709665365 }
File v1.0.10:references/examples.md
Entry Examples
Concrete examples of well-formatted entries with all fields.
Learning: Correction
## [LRN-20250115-001] correction
**Logged**: 2025-01-15T10:30:00Z
**Priority**: high
**Status**: pending
**Area**: tests
### Summary
Incorrectly assumed pytest fixtures are scoped to function by default
### Details
When writing test fixtures, I assumed all fixtures were function-scoped.
User corrected that while function scope is the default, the codebase
convention uses module-scoped fixtures for database connections to
improve test performance.
### Suggested Action
When creating fixtures that involve expensive setup (DB, network),
check existing fixtures for scope patterns before defaulting to function scope.
### Metadata
- Source: user_feedback
- Related Files: tests/conftest.py
- Tags: pytest, testing, fixtures
---
Learning: Knowledge Gap (Resolved)
## [LRN-20250115-002] knowledge_gap
**Logged**: 2025-01-15T14:22:00Z
**Priority**: medium
**Status**: resolved
**Area**: config
### Summary
Project uses pnpm not npm for package management
### Details
Attempted to run `npm install` but project uses pnpm workspaces.
Lock file is `pnpm-lock.yaml`, not `package-lock.json`.
### Suggested Action
Check for `pnpm-lock.yaml` or `pnpm-workspace.yaml` before assuming npm.
Use `pnpm install` for this project.
### Metadata
- Source: error
- Related Files: pnpm-lock.yaml, pnpm-workspace.yaml
- Tags: package-manager, pnpm, setup
### Resolution
- **Resolved**: 2025-01-15T14:30:00Z
- **Commit/PR**: N/A - knowledge update
- **Notes**: Added to CLAUDE.md for future reference
---
Learning: Promoted to CLAUDE.md
## [LRN-20250115-003] best_practice
**Logged**: 2025-01-15T16:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: CLAUDE.md
**Area**: backend
### Summary
API responses must include correlation ID from request headers
### Details
All API responses should echo back the X-Correlation-ID header from
the request. This is required for distributed tracing. Responses
without this header break the observability pipeline.
### Suggested Action
Always include correlation ID passthrough in API handlers.
### Metadata
- Source: user_feedback
- Related Files: src/middleware/correlation.ts
- Tags: api, observability, tracing
---
Learning: Promoted to AGENTS.md
## [LRN-20250116-001] best_practice
**Logged**: 2025-01-16T09:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: AGENTS.md
**Area**: backend
### Summary
Must regenerate API client after OpenAPI spec changes
### Details
When modifying API endpoints, the TypeScript client must be regenerated.
Forgetting this causes type mismatches that only appear at runtime.
The generate script also runs validation.
### Suggested Action
Add to agent workflow: after any API changes, run `pnpm run generate:api`.
### Metadata
- Source: error
- Related Files: openapi.yaml, src/client/api.ts
- Tags: api, codegen, typescript
---
Error Entry
## [ERR-20250115-A3F] docker_build
**Logged**: 2025-01-15T09:15:00Z
**Priority**: high
**Status**: pending
**Area**: infra
### Summary
Docker build fails on M1 Mac due to platform mismatch
### Error
error: failed to solve: python:3.11-slim: no match for platform linux/arm64
### Context
- Command: `docker build -t myapp .`
- Dockerfile uses `FROM python:3.11-slim`
- Running on Apple Silicon (M1/M2)
### Suggested Fix
Add platform flag: `docker build --platform linux/amd64 -t myapp .`
Or update Dockerfile: `FROM --platform=linux/amd64 python:3.11-slim`
### Metadata
- Reproducible: yes
- Related Files: Dockerfile
---
Error Entry: Recurring Issue
## [ERR-20250120-B2C] api_timeout
**Logged**: 2025-01-20T11:30:00Z
**Priority**: critical
**Status**: pending
**Area**: backend
### Summary
Third-party payment API timeout during checkout
### Error
TimeoutError: Request to payments.example.com timed out after 30000ms
### Context
- Command: POST /api/checkout
- Timeout set to 30s
- Occurs during peak hours (lunch, evening)
### Suggested Fix
Implement retry with exponential backoff. Consider circuit breaker pattern.
### Metadata
- Reproducible: yes (during peak hours)
- Related Files: src/services/payment.ts
- See Also: ERR-20250115-X1Y, ERR-20250118-Z3W
---
Feature Request
## [FEAT-20250115-001] export_to_csv
**Logged**: 2025-01-15T16:45:00Z
**Priority**: medium
**Status**: pending
**Area**: backend
### Requested Capability
Export analysis results to CSV format
### User Context
User runs weekly reports and needs to share results with non-technical
stakeholders in Excel. Currently copies output manually.
### Complexity Estimate
simple
### Suggested Implementation
Add `--output csv` flag to the analyze command. Use standard csv module.
Could extend existing `--output json` pattern.
### Metadata
- Frequency: recurring
- Related Features: analyze command, json output
---
Feature Request: Resolved
## [FEAT-20250110-002] dark_mode
**Logged**: 2025-01-10T14:00:00Z
**Priority**: low
**Status**: resolved
**Area**: frontend
### Requested Capability
Dark mode support for the dashboard
### User Context
User works late hours and finds the bright interface straining.
Several other users have mentioned this informally.
### Complexity Estimate
medium
### Suggested Implementation
Use CSS variables for colors. Add toggle in user settings.
Consider system preference detection.
### Metadata
- Frequency: recurring
- Related Features: user settings, theme system
### Resolution
- **Resolved**: 2025-01-18T16:00:00Z
- **Commit/PR**: #142
- **Notes**: Implemented with system preference detection and manual toggle
---
Learning: Promoted to Skill
## [LRN-20250118-001] best_practice
**Logged**: 2025-01-18T11:00:00Z
**Priority**: high
**Status**: promoted_to_skill
**Skill-Path**: skills/docker-m1-fixes
**Area**: infra
### Summary
Docker build fails on Apple Silicon due to platform mismatch
### Details
When building Docker images on M1/M2 Macs, the build fails because
the base image doesn't have an ARM64 variant. This is a common issue
that affects many developers.
### Suggested Action
Add `--platform linux/amd64` to docker build command, or use
`FROM --platform=linux/amd64` in Dockerfile.
### Metadata
- Source: error
- Related Files: Dockerfile
- Tags: docker, arm64, m1, apple-silicon
- See Also: ERR-20250115-A3F, ERR-20250117-B2D
---
Extracted Skill Example
When the above learning is extracted as a skill, it becomes:
File: skills/docker-m1-fixes/SKILL.md
---
name: docker-m1-fixes
description: "Fixes Docker build failures on Apple Silicon (M1/M2). Use when docker build fails with platform mismatch errors."
---
# Docker M1 Fixes
Solutions for Docker build issues on Apple Silicon Macs.
## Quick Reference
| Error | Fix |
|-------|-----|
| `no match for platform linux/arm64` | Add `--platform linux/amd64` to build |
| Image runs but crashes | Use emulation or find ARM-compatible base |
## The Problem
Many Docker base images don't have ARM64 variants. When building on
Apple Silicon (M1/M2/M3), Docker attempts to pull ARM64 images by
default, causing platform mismatch errors.
## Solutions
### Option 1: Build Flag (Recommended)
Add platform flag to your build command:
\`\`\`bash
docker build --platform linux/amd64 -t myapp .
\`\`\`
### Option 2: Dockerfile Modification
Specify platform in the FROM instruction:
\`\`\`dockerfile
FROM --platform=linux/amd64 python:3.11-slim
\`\`\`
### Option 3: Docker Compose
Add platform to your service:
\`\`\`yaml
services:
app:
platform: linux/amd64
build: .
\`\`\`
## Trade-offs
| Approach | Pros | Cons |
|----------|------|------|
| Build flag | No file changes | Must remember flag |
| Dockerfile | Explicit, versioned | Affects all builds |
| Compose | Convenient for dev | Requires compose |
## Performance Note
Running AMD64 images on ARM64 uses Rosetta 2 emulation. This works
for development but may be slower. For production, find ARM-native
alternatives when possible.
## Source
- Learning ID: LRN-20250118-001
- Category: best_practice
- Extraction Date: 2025-01-18
File v1.0.10:references/hooks-setup.md
Hook Setup Guide
Configure automatic self-improvement triggers for AI coding agents.
Overview
Hooks enable proactive learning capture by injecting reminders at key moments:
- UserPromptSubmit: Reminder after each prompt to evaluate learnings
- PostToolUse (Bash): Error detection when commands fail
Claude Code Setup
Option 1: Project-Level Configuration
Create .claude/settings.json in your project root:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
],
"PostToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}
]
}
]
}
}
Option 2: User-Level Configuration
Add to ~/.claude/settings.json for global activation:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "~/.claude/skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
Minimal Setup (Activator Only)
For lower overhead, use only the UserPromptSubmit hook:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
Codex CLI Setup
Codex uses the same hook system as Claude Code. Create .codex/settings.json:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
GitHub Copilot Setup
Copilot doesn't support hooks directly. Instead, add guidance to .github/copilot-instructions.md:
## Self-Improvement
After completing tasks that involved:
- Debugging non-obvious issues
- Discovering workarounds
- Learning project-specific patterns
- Resolving unexpected errors
Consider logging the learning to `.learnings/` using the format from the self-improvement skill.
For high-value learnings that would benefit other sessions, consider skill extraction.
Verification
Test Activator Hook
- Enable the hook configuration
- Start a new Claude Code session
- Send any prompt
- Verify you see <self-improvement-reminder> in the context
Test Error Detector Hook
- Enable PostToolUse hook for Bash
- Run a command that fails: ls /nonexistent/path
- Verify you see the <error-detected> reminder
Dry Run Extract Script
./skills/self-improvement/scripts/extract-skill.sh test-skill --dry-run
Expected output shows the skill scaffold that would be created.
Troubleshooting
Hook Not Triggering
- Check script permissions: chmod +x scripts/*.sh
- Verify path: Use absolute paths or paths relative to project root
- Check settings location: Project vs user-level settings
- Restart session: Hooks are loaded at session start
Permission Denied
chmod +x ./skills/self-improvement/scripts/activator.sh
chmod +x ./skills/self-improvement/scripts/error-detector.sh
chmod +x ./skills/self-improvement/scripts/extract-skill.sh
Script Not Found
If using relative paths, ensure you're in the correct directory or use absolute paths:
{
"command": "/absolute/path/to/skills/self-improvement/scripts/activator.sh"
}
Too Much Overhead
If the activator feels intrusive:
- Use minimal setup: Only UserPromptSubmit, skip PostToolUse
- Add matcher filter: Only trigger for certain prompts:
{
"matcher": "fix|debug|error|issue",
"hooks": [...]
}
Hook Output Budget
The activator is designed to be lightweight:
- Target: ~50-100 tokens per activation
- Content: Structured reminder, not verbose instructions
- Format: XML tags for easy parsing
If you need to reduce overhead further, you can edit activator.sh to output less text.
Security Considerations
- Hook scripts run with the same permissions as Claude Code
- Scripts only output text; they don't modify files or run commands
- Error detector reads
CLAUDE_TOOL_OUTPUTenvironment variable - All scripts are opt-in (you must configure them explicitly)
Disabling Hooks
To temporarily disable without removing configuration:
- Rename the hook key in settings (JSON does not support comments; unrecognized keys are typically ignored):
{
  "hooks": {
    "UserPromptSubmit_disabled": []
  }
}
- Or delete the settings file: Hooks won't run without configuration
File v1.0.10:references/openclaw-integration.md
OpenClaw Integration
Complete setup and usage guide for integrating the self-improvement skill with OpenClaw.
Overview
OpenClaw uses workspace-based prompt injection combined with event-driven hooks. Context is injected from workspace files at session start, and hooks can trigger on lifecycle events.
Workspace Structure
~/.openclaw/
├── workspace/ # Working directory
│ ├── AGENTS.md # Multi-agent coordination patterns
│ ├── SOUL.md # Behavioral guidelines and personality
│ ├── TOOLS.md # Tool capabilities and gotchas
│ ├── MEMORY.md # Long-term memory (main session only)
│ └── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
├── skills/ # Installed skills
│ └── <skill-name>/
│ └── SKILL.md
└── hooks/ # Custom hooks
└── <hook-name>/
├── HOOK.md
└── handler.ts
Quick Setup
1. Install the Skill
clawdhub install self-improving-agent
Or copy manually:
cp -r self-improving-agent ~/.openclaw/skills/
2. Install the Hook (Optional)
Copy the hook to OpenClaw's hooks directory:
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
Enable the hook:
openclaw hooks enable self-improvement
3. Create Learning Files
Create the .learnings/ directory in your workspace:
mkdir -p ~/.openclaw/workspace/.learnings
Or in the skill directory:
mkdir -p ~/.openclaw/skills/self-improving-agent/.learnings
Injected Prompt Files
AGENTS.md
Purpose: Multi-agent workflows and delegation patterns.
# Agent Coordination
## Delegation Rules
- Use explore agent for open-ended codebase questions
- Spawn sub-agents for long-running tasks
- Use sessions_send for cross-session communication
## Session Handoff
When delegating to another session:
1. Provide full context in the handoff message
2. Include relevant file paths
3. Specify expected output format
SOUL.md
Purpose: Behavioral guidelines and communication style.
# Behavioral Guidelines
## Communication Style
- Be direct and concise
- Avoid unnecessary caveats and disclaimers
- Use technical language appropriate to context
## Error Handling
- Admit mistakes promptly
- Provide corrected information immediately
- Log significant errors to learnings
TOOLS.md
Purpose: Tool capabilities, integration gotchas, local configuration.
# Tool Knowledge
## Self-Improvement Skill
Log learnings to `.learnings/` for continuous improvement.
## Local Tools
- Document tool-specific gotchas here
- Note authentication requirements
- Track integration quirks
Learning Workflow
Capturing Learnings
- In-session: Log to .learnings/ as usual
- Cross-session: Promote to workspace files
Promotion Decision Tree
Is the learning project-specific?
├── Yes → Keep in .learnings/
└── No → Is it behavioral/style-related?
├── Yes → Promote to SOUL.md
└── No → Is it tool-related?
├── Yes → Promote to TOOLS.md
└── No → Promote to AGENTS.md (workflow)
Promotion Format Examples
From learning:
Git push to GitHub fails without auth configured - triggers desktop prompt
To TOOLS.md:
## Git
- Don't push without confirming auth is configured
- Use `gh auth status` to check GitHub CLI auth
Inter-Agent Communication
OpenClaw provides tools for cross-session communication:
sessions_list
View active and recent sessions:
sessions_list(activeMinutes=30, messageLimit=3)
sessions_history
Read transcript from another session:
sessions_history(sessionKey="session-id", limit=50)
sessions_send
Send message to another session:
sessions_send(sessionKey="session-id", message="Learning: API requires X-Custom-Header")
sessions_spawn
Spawn a background sub-agent:
sessions_spawn(task="Research X and report back", label="research")
Available Hook Events
| Event | When It Fires |
|-------|---------------|
| agent:bootstrap | Before workspace files inject |
| command:new | When /new command issued |
| command:reset | When /reset command issued |
| command:stop | When /stop command issued |
| gateway:startup | When gateway starts |
Detection Triggers
Standard Triggers
- User corrections ("No, that's wrong...")
- Command failures (non-zero exit codes)
- API errors
- Knowledge gaps
OpenClaw-Specific Triggers
| Trigger | Action |
|---------|--------|
| Tool call error | Log to TOOLS.md with tool name |
| Session handoff confusion | Log to AGENTS.md with delegation pattern |
| Model behavior surprise | Log to SOUL.md with expected vs actual |
| Skill issue | Log to .learnings/ or report upstream |
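The trigger-to-destination mapping above is simple enough to encode directly. A minimal sketch (the trigger keys and default fallback are assumptions for illustration):

```python
# Hypothetical mapping of OpenClaw-specific triggers to log destinations,
# mirroring the table above.
TRIGGER_DESTINATIONS = {
    "tool_call_error": "TOOLS.md",             # include the tool name
    "session_handoff_confusion": "AGENTS.md",  # include the delegation pattern
    "model_behavior_surprise": "SOUL.md",      # include expected vs actual
    "skill_issue": ".learnings/",              # or report upstream
}

def log_destination(trigger):
    """Default to .learnings/ for triggers the table does not cover."""
    return TRIGGER_DESTINATIONS.get(trigger, ".learnings/")
```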
Verification
Check hook is registered:
openclaw hooks list
Check skill is loaded:
openclaw status
Troubleshooting
Hook not firing
- Ensure hooks enabled in config
- Restart gateway after config changes
- Check gateway logs for errors
Learnings not persisting
- Verify `.learnings/` directory exists
- Check file permissions
- Ensure workspace path is configured correctly
Skill not loading
- Check skill is in skills directory
- Verify SKILL.md has correct frontmatter
- Run `openclaw status` to see loaded skills
File v1.0.10: .learnings/ERRORS.md
Errors Log
Command failures, exceptions, and unexpected behaviors.
File v1.0.10: .learnings/FEATURE_REQUESTS.md
Feature Requests
Capabilities requested by user that don't currently exist.
File v1.0.10: .learnings/LEARNINGS.md
Learnings Log
Captured learnings, corrections, and discoveries. Review before major tasks.
File v1.0.10: assets/LEARNINGS.md
Learnings
Corrections, insights, and knowledge gaps captured during development.
Categories: correction | insight | knowledge_gap | best_practice
Areas: frontend | backend | infra | tests | docs | config
Statuses: pending | in_progress | resolved | wont_fix | promoted | promoted_to_skill
Status Definitions
| Status | Meaning |
|--------|---------|
| pending | Not yet addressed |
| in_progress | Actively being worked on |
| resolved | Issue fixed or knowledge integrated |
| wont_fix | Decided not to address (reason in Resolution) |
| promoted | Elevated to CLAUDE.md, AGENTS.md, or copilot-instructions.md |
| promoted_to_skill | Extracted as a reusable skill |
Skill Extraction Fields
When a learning is promoted to a skill, add these fields:
**Status**: promoted_to_skill
**Skill-Path**: skills/skill-name
Example:
## [LRN-20250115-001] best_practice
**Logged**: 2025-01-15T10:00:00Z
**Priority**: high
**Status**: promoted_to_skill
**Skill-Path**: skills/docker-m1-fixes
**Area**: infra
### Summary
Docker build fails on Apple Silicon due to platform mismatch
...
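A formatter for the entry header shown above can make the ID and timestamp fields consistent. A minimal sketch; the function name is hypothetical, and the per-day sequence number (`seq`) is assumed to be tracked by the caller.

```python
from datetime import datetime, timezone

# Sketch of a learning-entry formatter matching the header format above
# (ID scheme LRN-YYYYMMDD-XXX, UTC timestamp, pending status).
def learning_entry(category, summary, seq, priority="medium",
                   area="infra", now=None):
    now = now or datetime.now(timezone.utc)
    lid = f"LRN-{now:%Y%m%d}-{seq:03d}"
    return (
        f"## [{lid}] {category}\n"
        f"**Logged**: {now:%Y-%m-%dT%H:%M:%SZ}\n"
        f"**Priority**: {priority}\n"
        f"**Status**: pending\n"
        f"**Area**: {area}\n"
        f"### Summary\n{summary}"
    )
```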
File v1.0.10: assets/SKILL-TEMPLATE.md
Skill Template
Template for creating skills extracted from learnings. Copy and customize.
SKILL.md Template
---
name: skill-name-here
description: "Concise description of when and why to use this skill. Include trigger conditions."
---
# Skill Name
Brief introduction explaining the problem this skill solves and its origin.
## Quick Reference
| Situation | Action |
|-----------|--------|
| [Trigger 1] | [Action 1] |
| [Trigger 2] | [Action 2] |
## Background
Why this knowledge matters. What problems it prevents. Context from the original learning.
## Solution
### Step-by-Step
1. First step with code or command
2. Second step
3. Verification step
### Code Example
\`\`\`language
// Example code demonstrating the solution
\`\`\`
## Common Variations
- **Variation A**: Description and how to handle
- **Variation B**: Description and how to handle
## Gotchas
- Warning or common mistake #1
- Warning or common mistake #2
## Related
- Link to related documentation
- Link to related skill
## Source
Extracted from learning entry.
- **Learning ID**: LRN-YYYYMMDD-XXX
- **Original Category**: correction | insight | knowledge_gap | best_practice
- **Extraction Date**: YYYY-MM-DD
Minimal Template
For simple skills that don't need all sections:
---
name: skill-name-here
description: "What this skill does and when to use it."
---
# Skill Name
[Problem statement in one sentence]
## Solution
[Direct solution with code/commands]
## Source
- Learning ID: LRN-YYYYMMDD-XXX
Template with Scripts
For skills that include executable helpers:
---
name: skill-name-here
description: "What this skill does and when to use it."
---
# Skill Name
[Introduction]
## Quick Reference
| Command | Purpose |
|---------|---------|
| `./scripts/helper.sh` | [What it does] |
| `./scripts/validate.sh` | [What it does] |
## Usage
### Automated (Recommended)
\`\`\`bash
./skills/skill-name/scripts/helper.sh [args]
\`\`\`
### Manual Steps
1. Step one
2. Step two
## Scripts
| Script | Description |
|--------|-------------|
| `scripts/helper.sh` | Main utility |
| `scripts/validate.sh` | Validation checker |
## Source
- Learning ID: LRN-YYYYMMDD-XXX
Naming Conventions
- Skill name: lowercase, hyphens for spaces
  - Good: `docker-m1-fixes`, `api-timeout-patterns`
  - Bad: `Docker_M1_Fixes`, `APITimeoutPatterns`
- Description: Start with action verb, mention trigger
  - Good: "Handles Docker build failures on Apple Silicon. Use when builds fail with platform mismatch."
  - Bad: "Docker stuff"
- Files:
  - `SKILL.md` - Required, main documentation
  - `scripts/` - Optional, executable code
  - `references/` - Optional, detailed docs
  - `assets/` - Optional, templates
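The skill-name rule (lowercase, hyphen-separated) reduces to a one-line regex check. A minimal sketch; the function name is hypothetical.

```python
import re

# The naming convention above as a regex: lowercase alphanumeric
# segments joined by single hyphens.
SKILL_NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_skill_name(name):
    return bool(SKILL_NAME_RE.match(name))

is_valid_skill_name("docker-m1-fixes")   # → True
is_valid_skill_name("Docker_M1_Fixes")   # → False
```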
Extraction Checklist
Before creating a skill from a learning:
- [ ] Learning is verified (status: resolved)
- [ ] Solution is broadly applicable (not one-off)
- [ ] Content is complete (has all needed context)
- [ ] Name follows conventions
- [ ] Description is concise but informative
- [ ] Quick Reference table is actionable
- [ ] Code examples are tested
- [ ] Source learning ID is recorded
After creating:
- [ ] Update original learning with `promoted_to_skill` status
- [ ] Add `Skill-Path: skills/skill-name` to learning metadata
- [ ] Test skill by reading it in a fresh session
File v1.0.10: hooks/openclaw/HOOK.md
name: self-improvement
description: "Injects self-improvement reminder during agent bootstrap"
metadata: {"openclaw":{"emoji":"🧠","events":["agent:bootstrap"]}}
Self-Improvement Hook
Injects a reminder to evaluate learnings during agent bootstrap.
What It Does
- Fires on `agent:bootstrap` (before workspace files are injected)
- Adds a reminder block to check `.learnings/` for relevant entries
- Prompts the agent to log corrections, errors, and discoveries
Configuration
No configuration needed. Enable with:
openclaw hooks enable self-improvement
API & Reliability
Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract & API
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/trust"
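When polling these endpoints programmatically, the retry policy published in the machine appendix (3 attempts, 500/1500/3500 ms backoff, retry only on 429, 503, and timeouts) can be applied with a small wrapper. A minimal sketch; `fetch` is a stand-in for your HTTP client, and the function name is hypothetical.

```python
import time

# Retry schedule and retryable conditions from the invocation guide.
BACKOFF_MS = [500, 1500, 3500]
RETRYABLE = {429, 503, "timeout"}

def get_with_retry(fetch, url, max_attempts=3, sleep=time.sleep):
    """Call fetch(url) -> (status, body), retrying on transient failures."""
    last = None
    for attempt in range(max_attempts):
        status, body = fetch(url)
        if status == 200:
            return body
        last = status
        if status not in RETRYABLE or attempt == max_attempts - 1:
            break
        sleep(BACKOFF_MS[attempt] / 1000.0)  # 0.5s, then 1.5s, then 3.5s
    raise RuntimeError(f"request failed after retries: {last}")
```

For example, `get_with_retry(fetch, ".../snapshot")` would return the snapshot body after a transient 503 followed by a 200, and raise immediately on a non-retryable status such as 404.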
Operational fit
Reliability & Benchmarks
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Machine Appendix
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": []
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "CLAWHUB",
"generatedAt": "2026-04-17T02:04:22.433Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [],
"flattenedTokens": ""
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Clawhub",
"href": "https://clawhub.ai/pskoett/self-improving-agent",
"sourceUrl": "https://clawhub.ai/pskoett/self-improving-agent",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "77.6K downloads",
"href": "https://clawhub.ai/pskoett/self-improving-agent",
"sourceUrl": "https://clawhub.ai/pskoett/self-improving-agent",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "latest_release",
"category": "release",
"label": "Latest release",
"value": "1.0.11",
"href": "https://clawhub.ai/pskoett/self-improving-agent",
"sourceUrl": "https://clawhub.ai/pskoett/self-improving-agent",
"sourceType": "release",
"confidence": "medium",
"observedAt": "2026-02-22T16:28:33.337Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-pskoett-self-improving-agent/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "release",
"title": "Release 1.0.11",
"description": "No functional or content changes; OpenClaw-specific environment metadata was removed. - Removed the OpenClaw `requires.env` metadata block from the skill definition. - All usage guidance, logging formats, and workflow instructions remain unchanged. - No new features or bug fixes included in this version. - This update does not require any action from users. - Ensures cleaner skill metadata and wider compatibility.",
"href": "https://clawhub.ai/pskoett/self-improving-agent",
"sourceUrl": "https://clawhub.ai/pskoett/self-improving-agent",
"sourceType": "release",
"confidence": "medium",
"observedAt": "2026-02-22T16:28:33.337Z",
"isPublic": true
}
]