Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Xpersona Agent
usewhisper-autohook (OpenClaw Skill)
---
name: usewhisper-autohook
version: 1.0.0
description: "Auto-hook tools for OpenClaw: query Whisper Context before every generation, ingest after every turn. Built for Telegram agents (stable user_id/session_id)."
author: "usewhisper"
metadata:
  openclaw:
    requires:
      bins: ["node"]
      env: ["WHISPER_CONTEXT_API_KEY", "WHISPER_CONTEXT_PROJECT"]
      optional_env: ["WHISPER_CONTEXT_API_URL"]
    security:
      notes:
        - Makes outbound HTT
clawhub skill install skills:alinxus:usewhisper-autohook
Overall rank
#62
Adoption
No public adoption signal
Trust
Unknown
Freshness
Last checked Feb 25, 2026
Best For
usewhisper-autohook is best for workflows that rely on its declared set, store, and override capabilities where OpenClaw compatibility matters.
Not Ideal For
Workflows that need deterministic execution: contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, CLAWHUB, runtime-metrics, public facts pack
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Overview
usewhisper-autohook is an OpenClaw skill: auto-hook tools that query Whisper Context before every generation and ingest after every turn, built for Telegram agents (stable user_id/session_id). Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Openclaw
Artifacts
0
Benchmarks
0
Last release
Unpublished
Install & run
clawhub skill install skills:alinxus:usewhisper-autohook
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
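That final validation step can be scripted. Below is a minimal sketch, assuming a Node >= 18 sandbox where the global fetch can be wrapped; nothing here is part of the skill itself, and the logged hostname is only an example of what the trace might capture:

```javascript
// Sketch only: a minimal egress tracer for sandbox validation.
// Wrap global fetch so any outbound call is recorded (and blocked) while
// you replay a mock request payload against the agent.
const egressLog = [];
const realFetch = globalThis.fetch;

globalThis.fetch = async (url, opts = {}) => {
  const host = new URL(typeof url === "string" ? url : url.url).host;
  egressLog.push({ host, method: opts.method ?? "GET" });
  // Block the call in the sandbox instead of forwarding it upstream.
  throw new Error(`egress blocked in sandbox: ${host}`);
};

// Example: whatever the skill does internally, its network calls surface here.
async function runMockTurn() {
  try {
    await fetch("https://context.usewhisper.dev/context", { method: "POST" });
  } catch {
    /* expected: blocked by the sandbox wrapper */
  }
}

await runMockTurn();
console.log(egressLog); // e.g. [{ host: "context.usewhisper.dev", method: "POST" }]
globalThis.fetch = realFetch; // restore after the audit
```

Only after the logged hosts match what you expect should the agent see real customer data.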
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Public facts
Vendor
Openclaw
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Captured outputs
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
bash
npx clawhub@latest install usewhisper-autohook
bash
WHISPER_CONTEXT_API_URL=https://context.usewhisper.dev
WHISPER_CONTEXT_API_KEY=YOUR_KEY
WHISPER_CONTEXT_PROJECT=openclaw-yourname
text
Before you think or respond to any message:
1) Call get_whisper_context with:
user_id = "telegram:{from_id}"
session_id = "telegram:{chat_id}"
current_query = the user's message text
2) If the returned context is not empty, prepend it to your prompt as:
"Relevant long-term memory:\n{context}\n\nNow respond to:\n{user_message}"
After you generate your final response:
1) Call ingest_whisper_turn with the same user_id and session_id and:
user_msg = the full user message
assistant_msg = your full final reply
Always do this. Never skip.
bash
export OPENAI_API_KEY="YOUR_UPSTREAM_KEY"
node usewhisper-autohook.mjs serve_openai_proxy --port 8787
bash
export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_KEY"
node usewhisper-autohook.mjs serve_anthropic_proxy --port 8788
bash
node usewhisper-autohook.mjs get_whisper_context \
  --current_query "What did we decide last time?" \
  --user_id "telegram:123" \
  --session_id "telegram:456"
Editorial read
Docs source
CLAWHUB
Editorial quality
ready
usewhisper-autohook (OpenClaw Skill)
This skill is a thin wrapper designed to make "automatic memory" easy:
- get_whisper_context(user_id, session_id, current_query) for pre-response context injection
- ingest_whisper_turn(user_id, session_id, user_msg, assistant_msg) for post-response ingestion

It defaults to the token-saving settings you almost always want:

- compress: true
- compression_strategy: "delta"
- use_cache: true
- include_memories: true

It also persists the last context_hash locally (per api_url + project + user_id + session_id) so delta compression works by default without you needing to pass previous_context_hash.
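The per-scope persistence just described can be sketched roughly as follows. The key scheme, helper names, and in-memory store are illustrative assumptions; the skill's actual storage layout is not published:

```javascript
// Sketch: persist the last context_hash per (api_url, project, user_id,
// session_id) so delta compression can supply previous_context_hash
// automatically. Hypothetical key scheme, not the skill's real one.
import { createHash } from "node:crypto";

function hashStoreKey({ apiUrl, project, userId, sessionId }) {
  // A stable, filesystem-safe key derived from the four scope fields.
  const raw = [apiUrl, project, userId, sessionId].join("\u0000");
  return createHash("sha256").update(raw).digest("hex").slice(0, 16);
}

const store = new Map(); // in practice this could be a small JSON file on disk

function rememberContextHash(scope, contextHash) {
  store.set(hashStoreKey(scope), contextHash);
}

function previousContextHash(scope) {
  return store.get(hashStoreKey(scope)) ?? null;
}

const scope = {
  apiUrl: "https://context.usewhisper.dev",
  project: "openclaw-yourname",
  userId: "telegram:123",
  sessionId: "telegram:456",
};
rememberContextHash(scope, "abc123");
console.log(previousContextHash(scope)); // "abc123"
```

Scoping the key to all four fields keeps memory from leaking between users, sessions, or projects that share a machine.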
npx clawhub@latest install usewhisper-autohook
Set env vars wherever OpenClaw runs your agent:
WHISPER_CONTEXT_API_URL=https://context.usewhisper.dev
WHISPER_CONTEXT_API_KEY=YOUR_KEY
WHISPER_CONTEXT_PROJECT=openclaw-yourname
Notes:
WHISPER_CONTEXT_API_URL is optional (defaults to https://context.usewhisper.dev).

Add this to your agent's system instruction (or equivalent):
Before you think or respond to any message:
1) Call get_whisper_context with:
user_id = "telegram:{from_id}"
session_id = "telegram:{chat_id}"
current_query = the user's message text
2) If the returned context is not empty, prepend it to your prompt as:
"Relevant long-term memory:\n{context}\n\nNow respond to:\n{user_message}"
After you generate your final response:
1) Call ingest_whisper_turn with the same user_id and session_id and:
user_msg = the full user message
assistant_msg = your full final reply
Always do this. Never skip.
If you are not on Telegram, keep the same structure: the important part is that user_id and session_id are stable.
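That "same structure" can be kept on any surface by namespacing whatever stable identifiers the platform provides. The helper below is hypothetical, not part of the skill:

```javascript
// Sketch: build stable user_id / session_id values by namespacing the
// platform's own stable identifiers, mirroring the telegram:{id} convention.
function stableIds(platform, userKey, conversationKey) {
  if (!userKey || !conversationKey) {
    throw new Error("need stable user and conversation identifiers");
  }
  return {
    user_id: `${platform}:${userKey}`,
    session_id: `${platform}:${conversationKey}`,
  };
}

// Telegram-style, matching the system-instruction template above:
console.log(stableIds("telegram", 123, 456));
// { user_id: "telegram:123", session_id: "telegram:456" }

// A hypothetical Discord agent would keep the same shape:
console.log(stableIds("discord", "86751234", "channel-998"));
```

The only hard requirement is that the same conversation always maps to the same pair, so memory accumulates in one place.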
If you cannot control how your agent/framework constructs prompts (it always sends the full conversation history), a system prompt cannot reduce token spend: the tokens are already sent to the model.
In that case, run the built-in OpenAI-compatible proxy so the network payload is actually reduced. The proxy:
- exposes POST /v1/chat/completions
- prepends the "Relevant long-term memory: ..." context to the payload before forwarding upstream

Start the proxy:
export OPENAI_API_KEY="YOUR_UPSTREAM_KEY"
node usewhisper-autohook.mjs serve_openai_proxy --port 8787
Then point your agent’s OpenAI base URL to http://127.0.0.1:8787 (exact env/config depends on your agent).
If your agent supports overriding the upstream base URL, you can set:
- OPENAI_BASE_URL (for OpenAI-compatible upstreams)
- ANTHROPIC_BASE_URL (for Anthropic upstreams)

Or pass --upstream_base_url when starting the proxy.
For correct per-user/session memory, pass headers on each request:
- x-whisper-user-id: telegram:{from_id}
- x-whisper-session-id: telegram:{chat_id}

If your agent uses Anthropic's native API (/v1/messages, not OpenAI-compatible), run the Anthropic proxy instead:
export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_KEY"
node usewhisper-autohook.mjs serve_anthropic_proxy --port 8788
Then point your agent’s Anthropic base URL to http://127.0.0.1:8788.
Pass IDs via headers (recommended):
- x-whisper-user-id: telegram:{from_id}
- x-whisper-session-id: telegram:{chat_id}

If you do not pass headers, the proxies will attempt to infer stable IDs from OpenClaw's system prompt / session key if present. This is best-effort; headers are still the most reliable.
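The header-first, best-effort-fallback behaviour might look roughly like this sketch. The fallback step is stubbed out because the proxies' inference logic is not published; the header names match the documented x-whisper-* headers, everything else is illustrative:

```javascript
// Sketch: resolve per-request memory IDs, preferring explicit headers and
// falling back to a caller-supplied inference function.
function resolveIds(headers, inferFromPrompt = () => null) {
  // HTTP header names are case-insensitive, so normalize before lookup.
  const norm = Object.fromEntries(
    Object.entries(headers).map(([k, v]) => [k.toLowerCase(), v]),
  );
  const userId = norm["x-whisper-user-id"];
  const sessionId = norm["x-whisper-session-id"];
  if (userId && sessionId) {
    return { userId, sessionId, source: "headers" };
  }
  // Best-effort fallback, e.g. parsing OpenClaw's system prompt / session key.
  const inferred = inferFromPrompt();
  if (inferred) return { ...inferred, source: "inferred" };
  return null; // no stable IDs: memory cannot be scoped for this request
}

console.log(resolveIds({
  "X-Whisper-User-Id": "telegram:123",
  "X-Whisper-Session-Id": "telegram:456",
}));
// { userId: "telegram:123", sessionId: "telegram:456", source: "headers" }
```

Returning an explicit `source` makes it easy to log how each request's memory scope was decided.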
All commands print JSON to stdout.
node usewhisper-autohook.mjs get_whisper_context \
--current_query "What did we decide last time?" \
--user_id "telegram:123" \
--session_id "telegram:456"
node usewhisper-autohook.mjs ingest_whisper_turn \
--user_id "telegram:123" \
--session_id "telegram:456" \
--user_msg "..." \
--assistant_msg "..."
For large content, pass JSON via stdin:
echo '{ "user_msg": "....", "assistant_msg": "...." }' | node usewhisper-autohook.mjs ingest_whisper_turn --session_id "telegram:456" --user_id "telegram:123" --turn_json -
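Rather than hand-escaping that JSON in the shell, a caller can build the stdin payload programmatically; a sketch (the actual spawning of the CLI is omitted):

```javascript
// Sketch: construct the --turn_json stdin payload with JSON.stringify so
// quotes, newlines, and unicode in real chat turns survive intact.
function turnPayload(userMsg, assistantMsg) {
  return JSON.stringify({ user_msg: userMsg, assistant_msg: assistantMsg });
}

const payload = turnPayload(
  'He said: "ship it"\nthen left',
  "Noted, shipping 🚀",
);
// This string is what you would pipe to:
//   node usewhisper-autohook.mjs ingest_whisper_turn ... --turn_json -
console.log(JSON.parse(payload).user_msg.includes('"ship it"')); // true
```

JSON.stringify handles the quoting that an inline echo string cannot, which matters once messages contain quotes or newlines.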
get_whisper_context returns:
- context: the packed context string to prepend
- context_hash: a short hash you can store and pass back as previous_context_hash next time (optional)
- meta: cache hit and compression info (useful for debugging)

Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/trust"
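When calling these endpoints from an agent, the retry policy published in the Invocation Guide below (3 attempts, 500/1500/3500 ms backoff on 429/503/timeouts) can be applied with a small wrapper. This is a sketch, not an official client; the fetch function is injected so the logic runs without network access:

```javascript
// Sketch: retry a fetch-like call per the published retryPolicy.
// `doFetch` is injected so the logic is testable without network access.
async function withRetry(doFetch, {
  maxAttempts = 3,
  backoffMs = [500, 1500, 3500],
  retryableStatus = [429, 503],
} = {}) {
  let lastErr;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await doFetch();
      if (!retryableStatus.includes(res.status)) return res;
      lastErr = new Error(`HTTP_${res.status}`);
    } catch (err) {
      lastErr = err; // a network timeout is also retryable
    }
    if (attempt < maxAttempts - 1) {
      await new Promise((r) => setTimeout(r, backoffMs[attempt]));
    }
  }
  throw lastErr;
}

// Example with a stub that fails once, then succeeds:
let calls = 0;
const res = await withRetry(
  async () => (++calls === 1 ? { status: 503 } : { status: 200 }),
  { backoffMs: [1, 1, 1] }, // shrink delays for the demo
);
console.log(res.status, calls); // 200 2
```

In production you would pass `() => fetch(snapshotUrl)` as `doFetch` and keep the default backoff schedule.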
Operational fit
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "CLAWHUB",
"generatedAt": "2026-04-17T04:13:38.189Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "set",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "store",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "overriding",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:set|supported|profile capability:store|supported|profile capability:overriding|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Openclaw",
"href": "https://github.com/openclaw/skills/tree/main/skills/alinxus/usewhisper-autohook",
"sourceUrl": "https://github.com/openclaw/skills/tree/main/skills/alinxus/usewhisper-autohook",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alinxus-usewhisper-autohook/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]