Crawler Summary

local-llm answer-first brief

Query local LM Studio or any OpenAI-compatible LLM server (like Ollama, llama.cpp, vLLM) for coding tasks, explanations, or text generation without using paid API tokens. Use when the user explicitly asks to use a local model, or for low-stakes tasks like code examples, documentation, simple scripts, or exploratory work where perfect accuracy isn't critical.

Published capability contract available. No trust telemetry is available yet. Last updated 4/14/2026.

Freshness

Last checked 4/14/2026

Best For

Contract is available with explicit auth and schema references.

Not Ideal For

local-llm is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.

Evidence Sources Checked

editorial-content, capability-contract, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 94/100

local-llm

Query local LM Studio or any OpenAI-compatible LLM server (like Ollama, llama.cpp, vLLM) for coding tasks, explanations, or text generation without using paid API tokens. Use when the user explicitly asks to use a local model, or for low-stakes tasks like code examples, documentation, simple scripts, or exploratory work where perfect accuracy isn't critical.

OpenClaw · self-declared

Public facts

6

Change events

1

Artifacts

0

Freshness

Apr 14, 2026

Verified · editorial-content · No verified compatibility signals

Published capability contract available. No trust telemetry is available yet. Last updated 4/14/2026.

Schema refs published · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 14, 2026

Vendor

Honkimon

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Published capability contract available. No trust telemetry is available yet. Last updated 4/14/2026.

Setup snapshot

git clone https://github.com/honkimon/openclaw-local-llm.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data (a minimal sketch follows below).
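
As a minimal sketch of that final validation step (assuming Python and the stock LM Studio endpoint; the private-host allow-list and the mock prompt are illustrative choices, not part of the published skill):

python

# Sketch only: send a harmless mock payload and confirm the sole network egress
# is the configured local server before the agent ever touches real data.
import json
import os
import urllib.request
from urllib.parse import urlparse

endpoint = os.environ.get("LM_STUDIO_URL", "http://localhost:1234/v1/chat/completions")

# Assumed sandbox rule: only loopback or private-range hosts are acceptable here.
host = urlparse(endpoint).hostname or ""
if host not in ("localhost", "127.0.0.1") and not host.startswith("192.168."):
    raise SystemExit(f"Unexpected egress target: {host}")

payload = {"messages": [{"role": "user", "content": "Reply with the word OK."}],
           "max_tokens": 8}
model = os.environ.get("LM_STUDIO_MODEL")
if model:  # some servers want an explicit model id; LM Studio can infer it
    payload["model"] = model

req = urllib.request.Request(endpoint, data=json.dumps(payload).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req, timeout=60) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])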

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Honkimon

profile · medium
Observed Apr 14, 2026
Compatibility (2)

Protocol compatibility

OpenClaw

contract · medium
Observed Feb 24, 2026

Auth modes

api_key

contract · high
Observed Feb 24, 2026
Artifact (1)

Machine-readable schemas

OpenAPI or schema references published

contract · high
Observed Feb 24, 2026
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed: unknown
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

export LM_STUDIO_URL="http://localhost:1234/v1/chat/completions"
export LM_STUDIO_MODEL="qwen/qwen2.5-coder-14b"  # Optional, auto-detects if omitted

json

{
  "skills": {
    "entries": {
      "local-llm": {
        "env": {
          "LM_STUDIO_URL": "http://192.168.1.100:1234/v1/chat/completions",
          "LM_STUDIO_MODEL": "qwen/qwen2.5-coder-14b"
        }
      }
    }
  }
}

bash

{baseDir}/scripts/query_llm.py "Your prompt here"

bash

{baseDir}/scripts/query_llm.py "Write a function to parse JSON" \
  --system "You are a Python expert focused on clean, readable code"

bash

{baseDir}/scripts/query_llm.py "Explain asyncio" \
  --endpoint "http://192.168.1.100:1234/v1/chat/completions" \
  --model "qwen/qwen2.5-coder-14b"

bash

{baseDir}/scripts/query_llm.py --list-models

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: local-llm
description: Query local LM Studio or any OpenAI-compatible LLM server (like Ollama, llama.cpp, vLLM) for coding tasks, explanations, or text generation without using paid API tokens. Use when the user explicitly asks to use a local model, or for low-stakes tasks like code examples, documentation, simple scripts, or exploratory work where perfect accuracy isn't critical.

Local LLM (LM Studio / OpenAI-compatible)

Query any local LLM server for free, token-less inference.

Works with:

  • LM Studio (most common)
  • Ollama
  • llama.cpp server
  • vLLM
  • LocalAI
  • Any OpenAI-compatible endpoint

Configuration

Set via environment variables or OpenClaw skill config:

Option 1: Environment variables

export LM_STUDIO_URL="http://localhost:1234/v1/chat/completions"
export LM_STUDIO_MODEL="qwen/qwen2.5-coder-14b"  # Optional, auto-detects if omitted

Option 2: OpenClaw config (~/.openclaw/openclaw.json)

{
  "skills": {
    "entries": {
      "local-llm": {
        "env": {
          "LM_STUDIO_URL": "http://192.168.1.100:1234/v1/chat/completions",
          "LM_STUDIO_MODEL": "qwen/qwen2.5-coder-14b"
        }
      }
    }
  }
}

Default: http://localhost:1234/v1/chat/completions (LM Studio default)

When to Use

Good for:

  • Code generation and examples
  • Documentation/comments
  • Explanations of concepts
  • Simple scripts or utilities
  • Exploratory/draft work
  • Tasks where GPT-4/Claude might be overkill

Not good for:

  • Critical production code
  • Complex multi-step reasoning
  • Tasks requiring latest knowledge (model-dependent)
  • High-stakes decisions
  • Anything requiring guaranteed accuracy

Usage

Basic query:

{baseDir}/scripts/query_llm.py "Your prompt here"

With system prompt:

{baseDir}/scripts/query_llm.py "Write a function to parse JSON" \
  --system "You are a Python expert focused on clean, readable code"

Custom endpoint and model:

{baseDir}/scripts/query_llm.py "Explain asyncio" \
  --endpoint "http://192.168.1.100:1234/v1/chat/completions" \
  --model "qwen/qwen2.5-coder-14b"

List available models:

{baseDir}/scripts/query_llm.py --list-models

With custom parameters:

{baseDir}/scripts/query_llm.py "Explain recursion" \
  --max-tokens 1000 \
  --temperature 0.3

Parameters

  • prompt (required): The question or task
  • --endpoint: Server URL (default: $LM_STUDIO_URL or http://localhost:1234/v1/chat/completions)
  • --model: Model ID (default: auto-detect from /models endpoint)
  • --system: System prompt to set context/role
  • --max-tokens: Max response length (default: 2000)
  • --temperature: Randomness 0.0-1.0 (default: 0.7, lower = more deterministic)
  • --list-models: Show available models
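
These flags map directly onto a standard OpenAI-compatible /v1/chat/completions request. The sketch below shows one plausible version of that mapping; it is not the shipped query_llm.py, and the model auto-detect assumes the server also exposes a /models endpoint next to /chat/completions, as LM Studio and Ollama do.

python

# Illustrative parameter-to-payload mapping; not the actual query_llm.py script.
import json
import os
import urllib.request

ENDPOINT = os.environ.get("LM_STUDIO_URL", "http://localhost:1234/v1/chat/completions")


def detect_model(endpoint):
    """Pick the first model reported by the server's /models endpoint."""
    models_url = endpoint.rsplit("/chat/completions", 1)[0] + "/models"
    with urllib.request.urlopen(models_url, timeout=10) as resp:
        return json.load(resp)["data"][0]["id"]


def query(prompt, system=None, max_tokens=2000, temperature=0.7):
    messages = [{"role": "system", "content": system}] if system else []  # --system
    messages.append({"role": "user", "content": prompt})                  # prompt
    payload = {
        "model": os.environ.get("LM_STUDIO_MODEL") or detect_model(ENDPOINT),
        "messages": messages,
        "max_tokens": max_tokens,    # --max-tokens
        "temperature": temperature,  # --temperature
    }
    req = urllib.request.Request(ENDPOINT, data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


print(query("Explain recursion", temperature=0.3))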

Common Endpoints

LM Studio:

http://localhost:1234/v1/chat/completions

Ollama:

http://localhost:11434/v1/chat/completions

llama.cpp server:

http://localhost:8080/v1/chat/completions

Over network (replace with your server IP):

http://192.168.1.100:1234/v1/chat/completions
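
If you are unsure which server is actually running, one quick check is to probe each candidate's /models endpoint and keep the first one that answers. A minimal sketch under that assumption, using the default ports listed above:

python

# Sketch: probe the common local endpoints above and report the first live one.
import urllib.request
from urllib.error import URLError

CANDIDATES = [
    "http://localhost:1234/v1/chat/completions",   # LM Studio
    "http://localhost:11434/v1/chat/completions",  # Ollama
    "http://localhost:8080/v1/chat/completions",   # llama.cpp server
]

for chat_url in CANDIDATES:
    models_url = chat_url.rsplit("/chat/completions", 1)[0] + "/models"
    try:
        with urllib.request.urlopen(models_url, timeout=2):
            print(f"Live endpoint: {chat_url}")
            break
    except (URLError, OSError):
        continue
else:
    print("No local OpenAI-compatible server found on the common ports.")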

Best Practices

Do:

  • Use for code snippets and examples
  • Use for "how do I...?" questions
  • Try local first for simple tasks
  • Check output for accuracy (models vary in quality)

Don't:

  • Use for anything that could break production systems
  • Trust it blindly (always review output)
  • Use for tasks requiring GPT-4/Claude-level reasoning
  • Expect it to know about very recent frameworks/APIs

Troubleshooting

"Connection Error: Is LM Studio running?"

  • Start LM Studio and load a model
  • Verify the server is enabled (Server tab in LM Studio)
  • Check the correct port (default: 1234)
  • Test endpoint: curl http://localhost:1234/v1/models

"No models found"

  • Load a model in LM Studio (Models tab)
  • Verify the model is actually loaded (should show in Server tab)
  • Try --list-models to debug

Slow responses:

  • Normal for larger models or CPU-only inference
  • Consider smaller models (7B-14B params work well)
  • Reduce --max-tokens for faster responses
  • Check if GPU acceleration is enabled

Poor output quality:

  • Try different models (coding-specific models work best for code)
  • Adjust temperature (0.3-0.5 for code, 0.7-0.9 for creative)
  • Add a system prompt to guide the model
  • Be more specific in your prompt
  • Consider using Claude/GPT-4 for complex tasks instead

Recommended Models

For coding:

  • Qwen 2.5 Coder (7B/14B/32B)
  • DeepSeek Coder (6B/33B)
  • CodeLlama (7B/13B/34B)

For general use:

  • Llama 3.1 (8B/70B)
  • Mistral (7B)
  • Phi-3 (mini/small/medium)

Install models via LM Studio's search/download feature.

Examples

Code generation:

{baseDir}/scripts/query_llm.py "Write a Python function to validate email addresses" \
  --system "You are an expert Python developer"

Explain a concept:

{baseDir}/scripts/query_llm.py "Explain how async/await works in Python" \
  --temperature 0.8

Generate tests:

{baseDir}/scripts/query_llm.py "Write pytest tests for this function: [paste code]" \
  --max-tokens 1500

Document code:

{baseDir}/scripts/query_llm.py "Add docstrings to this Python class: [paste code]" \
  --temperature 0.3

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Verified · capability-contract

Contract coverage

Status

ready

Auth

api_key

Streaming

No

Data region

global

Protocol support

OpenClaw: self-declared

Requires: openclew, lang:typescript

Forbidden: none

Guardrails

Operational confidence: medium

Contract is available with explicit auth and schema references.
Trust confidence is not low and verification freshness is acceptable.
Invocation examples

bash

curl -s "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/snapshot"
curl -s "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract"
curl -s "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "ready",
  "authModes": [
    "api_key"
  ],
  "requires": [
    "openclew",
    "lang:typescript"
  ],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": "https://github.com/honkimon/openclaw-local-llm#input",
  "outputSchemaRef": "https://github.com/honkimon/openclaw-local-llm#output",
  "dataRegion": "global",
  "contractUpdatedAt": "2026-02-24T19:44:18.248Z",
  "sourceUpdatedAt": "2026-02-24T19:44:18.248Z",
  "freshnessSeconds": 4420923
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-16T23:46:21.912Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
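
One plausible way a caller might honor the retryPolicy above (the mapping of HTTP_429, HTTP_503, and NETWORK_TIMEOUT to concrete exceptions is an interpretation, not something the guide specifies):

python

# Sketch: 3 attempts with 500/1500/3500 ms backoff, retrying only on 429, 503,
# and network timeouts, as declared in the invocation guide.
import json
import time
import urllib.request
from urllib.error import HTTPError, URLError

SNAPSHOT_URL = "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/snapshot"
MAX_ATTEMPTS = 3
BACKOFF_MS = [500, 1500, 3500]
RETRYABLE_STATUSES = {429, 503}  # HTTP_429, HTTP_503


def fetch_with_retry(url):
    last_error = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)
        except HTTPError as err:
            if err.code not in RETRYABLE_STATUSES:
                raise  # non-retryable HTTP error
            last_error = err
        except (URLError, TimeoutError) as err:  # NETWORK_TIMEOUT and kin
            last_error = err
        if attempt < MAX_ATTEMPTS:
            time.sleep(BACKOFF_MS[attempt - 1] / 1000)
    raise RuntimeError(f"Giving up after {MAX_ATTEMPTS} attempts") from last_error


print(fetch_with_retry(SNAPSHOT_URL))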

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Honkimon",
    "href": "https://github.com/honkimon/openclaw-local-llm",
    "sourceUrl": "https://github.com/honkimon/openclaw-local-llm",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-14T22:23:36.838Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-24T19:44:18.248Z",
    "isPublic": true
  },
  {
    "factKey": "auth_modes",
    "category": "compatibility",
    "label": "Auth modes",
    "value": "api_key",
    "href": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
    "sourceType": "contract",
    "confidence": "high",
    "observedAt": "2026-02-24T19:44:18.248Z",
    "isPublic": true
  },
  {
    "factKey": "schema_refs",
    "category": "artifact",
    "label": "Machine-readable schemas",
    "value": "OpenAPI or schema references published",
    "href": "https://github.com/honkimon/openclaw-local-llm#input",
    "sourceUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/contract",
    "sourceType": "contract",
    "confidence": "high",
    "observedAt": "2026-02-24T19:44:18.248Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/honkimon-openclaw-local-llm/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
