Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Query local LM Studio or any OpenAI-compatible LLM server (like Ollama, llama.cpp, vLLM) for coding tasks, explanations, or text generation without using paid API tokens. Use when the user explicitly asks to use a local model, or for low-stakes tasks like code examples, documentation, simple scripts, or exploratory work where perfect accuracy isn't critical. Published capability contract available. No trust telemetry is available yet. Last updated 4/14/2026.
Freshness
Last checked 4/14/2026
Best For
Contract is available with explicit auth and schema references.
Not Ideal For
local-llm is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.
Evidence Sources Checked
editorial-content, capability-contract, runtime-metrics, public facts pack
Public facts
6
Change events
1
Artifacts
0
Freshness
Apr 14, 2026
Published capability contract available. No trust telemetry is available yet. Last updated 4/14/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 14, 2026
Vendor
Honkimon
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Published capability contract available. No trust telemetry is available yet. Last updated 4/14/2026.
Setup snapshot
git clone https://github.com/honkimon/openclaw-local-llm.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Honkimon
Protocol compatibility
OpenClaw
Auth modes
api_key
Machine-readable schemas
OpenAPI or schema references published
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
bash
export LM_STUDIO_URL="http://localhost:1234/v1/chat/completions"
export LM_STUDIO_MODEL="qwen/qwen2.5-coder-14b" # Optional, auto-detects if omitted
json
{
"skills": {
"entries": {
"local-llm": {
"env": {
"LM_STUDIO_URL": "http://192.168.1.100:1234/v1/chat/completions",
"LM_STUDIO_MODEL": "qwen/qwen2.5-coder-14b"
}
}
}
}
}
bash
{baseDir}/scripts/query_llm.py "Your prompt here"
bash
{baseDir}/scripts/query_llm.py "Write a function to parse JSON" \
--system "You are a Python expert focused on clean, readable code"
bash
{baseDir}/scripts/query_llm.py "Explain asyncio" \
--endpoint "http://192.168.1.100:1234/v1/chat/completions" \
--model "qwen/qwen2.5-coder-14b"
bash
{baseDir}/scripts/query_llm.py --list-models
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
---
name: local-llm
description: Query local LM Studio or any OpenAI-compatible LLM server (like Ollama, llama.cpp, vLLM) for coding tasks, explanations, or text generation without using paid API tokens. Use when the user explicitly asks to use a local model, or for low-stakes tasks like code examples, documentation, simple scripts, or exploratory work where perfect accuracy isn't critical.
---
Local LLM (LM Studio /
Query any local LLM server for free, token-less inference.
Works with:
Set via environment variables or OpenClaw skill config:
Option 1: Environment variables
export LM_STUDIO_URL="http://localhost:1234/v1/chat/completions"
export LM_STUDIO_MODEL="qwen/qwen2.5-coder-14b" # Optional, auto-detects if omitted
Option 2: OpenClaw config (~/.openclaw/openclaw.json)
{
"skills": {
"entries": {
"local-llm": {
"env": {
"LM_STUDIO_URL": "http://192.168.1.100:1234/v1/chat/completions",
"LM_STUDIO_MODEL": "qwen/qwen2.5-coder-14b"
}
}
}
}
}
Default: http://localhost:1234/v1/chat/completions (LM Studio default)
✅ Good for:
❌ Not good for:
Basic query:
{baseDir}/scripts/query_llm.py "Your prompt here"
With system prompt:
{baseDir}/scripts/query_llm.py "Write a function to parse JSON" \
--system "You are a Python expert focused on clean, readable code"
Custom endpoint and model:
{baseDir}/scripts/query_llm.py "Explain asyncio" \
--endpoint "http://192.168.1.100:1234/v1/chat/completions" \
--model "qwen/qwen2.5-coder-14b"
List available models:
{baseDir}/scripts/query_llm.py --list-models
With custom parameters:
{baseDir}/scripts/query_llm.py "Explain recursion" \
--max-tokens 1000 \
--temperature 0.3
prompt (required): The question or task
--endpoint: Server URL (default: $LM_STUDIO_URL or http://localhost:1234/v1/chat/completions)
--model: Model ID (default: auto-detect from /models endpoint)
--system: System prompt to set context/role
--max-tokens: Max response length (default: 2000)
--temperature: Randomness 0.0-1.0 (default: 0.7, lower = more deterministic)
--list-models: Show available models
LM Studio:
http://localhost:1234/v1/chat/completions
Ollama:
http://localhost:11434/v1/chat/completions
llama.cpp server:
http://localhost:8080/v1/chat/completions
Over network (replace with your server IP):
http://192.168.1.100:1234/v1/chat/completions
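All of these endpoints speak the same OpenAI-style chat-completions wire format, so a request can be sketched directly in Python without any SDK. The helper names below are illustrative, not part of this skill; the network call assumes a server is running at the given endpoint:

```python
import json
import urllib.request

def build_chat_payload(prompt, system=None, model="qwen/qwen2.5-coder-14b",
                       max_tokens=2000, temperature=0.7):
    """Build an OpenAI-compatible chat/completions request body."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def query(endpoint, payload):
    """POST the payload and return the first choice's text (needs a live server)."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (against a running LM Studio instance):
#   query("http://localhost:1234/v1/chat/completions",
#         build_chat_payload("Explain recursion", temperature=0.3))
```

The same payload works unchanged against Ollama, llama.cpp, or vLLM; only the endpoint URL differs.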
Do:
Don't:
"Connection Error: Is LM Studio running?"
curl http://localhost:1234/v1/models"No models found"
--list-models to debugSlow responses:
--max-tokens for faster responsesPoor output quality:
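The curl check above can also be scripted. This is a minimal sketch (helper names are illustrative) that hits the standard OpenAI-style `/v1/models` endpoint and pulls out the model IDs; only `list_models` requires a running server:

```python
import json
import urllib.request

def model_ids(models_response):
    """Extract model IDs from an OpenAI-style /v1/models response body."""
    return [m["id"] for m in models_response.get("data", [])]

def list_models(base_url="http://localhost:1234"):
    """Query a running local server for its available models."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return model_ids(json.load(resp))
```

An empty list from `list_models` is the scripted equivalent of the "No models found" error: the server is up but no model is loaded.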
For coding:
For general use:
Install models via LM Studio's search/download feature.
Code generation:
{baseDir}/scripts/query_llm.py "Write a Python function to validate email addresses" \
--system "You are an expert Python developer"
Explain a concept:
{baseDir}/scripts/query_llm.py "Explain how async/await works in Python" \
--temperature 0.8
Generate tests:
{baseDir}/scripts/query_llm.py "Write pytest tests for this function: [paste code]" \
--max-tokens 1500
Document code:
{baseDir}/scripts/query_llm.py "Add docstrings to this Python class: [paste code]" \
--temperature 0.3
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
ready
Auth
api_key
Streaming
No
Data region
global
Protocol support
Requires: openclew, lang:typescript
Forbidden: none
Guardrails
Operational confidence: medium
curl -s "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/snapshot"
curl -s "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract"
curl -s "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust"
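The agent's published contract specifies a retry policy for these endpoints (max 3 attempts, backoff of 500/1500/3500 ms, retrying on HTTP 429, HTTP 503, and network timeouts). A caller-side wrapper honoring that schedule might look like this sketch; the function name is illustrative:

```python
import time
import urllib.error
import urllib.request

BACKOFF_MS = [500, 1500, 3500]   # schedule from the published retryPolicy
RETRYABLE_HTTP = {429, 503}      # retryableConditions, plus timeouts below

def fetch_with_retry(url, max_attempts=3, sleep=time.sleep):
    """GET a URL, retrying on 429/503/timeouts with the contract's backoff."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            # Non-retryable status, or retries exhausted: propagate.
            if e.code not in RETRYABLE_HTTP or attempt == max_attempts - 1:
                raise
        except (urllib.error.URLError, TimeoutError):
            if attempt == max_attempts - 1:
                raise
        sleep(BACKOFF_MS[attempt] / 1000)
```

Passing `sleep` as a parameter keeps the backoff schedule testable without real delays.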
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "ready",
"authModes": [
"api_key"
],
"requires": [
"openclew",
"lang:typescript"
],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": "https://github.com/honkimon/openclaw-local-llm#input",
"outputSchemaRef": "https://github.com/honkimon/openclaw-local-llm#output",
"dataRegion": "global",
"contractUpdatedAt": "2026-02-24T19:44:18.248Z",
"sourceUpdatedAt": "2026-02-24T19:44:18.248Z",
"freshnessSeconds": 4420923
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-16T23:46:21.912Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Honkimon",
"href": "https://github.com/honkimon/openclaw-local-llm",
"sourceUrl": "https://github.com/honkimon/openclaw-local-llm",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-14T22:23:36.838Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-24T19:44:18.248Z",
"isPublic": true
},
{
"factKey": "auth_modes",
"category": "compatibility",
"label": "Auth modes",
"value": "api_key",
"href": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:18.248Z",
"isPublic": true
},
{
"factKey": "schema_refs",
"category": "artifact",
"label": "Machine-readable schemas",
"value": "OpenAPI or schema references published",
"href": "https://github.com/honkimon/openclaw-local-llm#input",
"sourceUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:18.248Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]