Crawler Summary
MCP server for local LLMs — connects to LM Studio or any OpenAI-compatible endpoint. @houtini/lm is an MCP server that connects Claude to any OpenAI-compatible LLM endpoint — LM Studio, Ollama, vLLM, llama.cpp, or any remote API. Offload routine work to a local model and keep your Claude context window for the hard stuff. Claude is great at orchestration and reasoning; local models are great at bulk analysis, classification, extraction, and summarisation. This server lets Claude delegate to a local model on the fly. Capability contract not published. No trust telemetry is available yet. 9 GitHub stars reported by the source. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
@houtini/lm is best for mcp, model-context-protocol, and mcp-server workflows where MCP compatibility matters.
Not Ideal For
Workflows that require deterministic execution, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB MCP, runtime-metrics, public facts pack
Public facts
4
Change events
0
Artifacts
0
Freshness
Feb 25, 2026
Trust score
Unknown
Compatibility
MCP
Freshness
Feb 25, 2026
Vendor
Houtini
Artifacts
0
Benchmarks
0
Last release
2.1.0
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 9 GitHub stars reported by the source. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/houtini-ai/lm.git
Setup complexity is MEDIUM. Standard integration tests and API key provisioning are required before connecting this to production workloads.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
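One way to do that final validation is a throwaway smoke test against the local endpoint, keeping everything inside the sandbox. This is a sketch of ours, not part of the repo; it assumes Node 18+ (built-in fetch) and the LM Studio default URL documented below, and the payload is a mock rather than customer data.
typescript
// Sandbox smoke test (illustrative): verify the OpenAI-compatible endpoint
// answers before wiring the MCP server into Claude.
const BASE = process.env.LM_STUDIO_URL ?? "http://localhost:1234";

async function smokeTest(): Promise<void> {
  // 1. Confirm a model is loaded.
  const models = await fetch(`${BASE}/v1/models`);
  console.log("models:", models.status, await models.text());

  // 2. Send a mock request payload and trace the egress with your sandbox tooling.
  const chat = await fetch(`${BASE}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: "Reply with the single word: ok" }],
      max_tokens: 5,
    }),
  });
  console.log("chat:", chat.status);
}

smokeTest().catch((err) => console.error("endpoint unreachable:", err));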
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Houtini
Protocol compatibility
MCP
Adoption signal
9 GitHub stars
Handshake status
UNKNOWN
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
bash
claude mcp add houtini-lm -e LM_STUDIO_URL=http://localhost:1234 -- npx -y @houtini/lm
json
{
"mcpServers": {
"houtini-lm": {
"command": "npx",
"args": ["-y", "@houtini/lm"],
"env": {
"LM_STUDIO_URL": "http://localhost:1234"
}
}
}
}
bash
npx @houtini/lm
text
message (required) — the task, with explicit output format instructions
system — persona (be specific: "Senior TypeScript dev", not "helpful assistant")
temperature — 0.1 for code, 0.3 for analysis (default), 0.5 for suggestions
max_tokens — match to expected output: 150 for quick answers, 300 for explanations, 500 for code gen
text
instruction (required) — what to produce (under 50 words works best)
system — persona, specific and under 30 words
context — COMPLETE data to analyse (never truncated)
temperature — 0.1 for review, 0.3 for analysis (default)
max_tokens — 200 for bullets, 400 for detailed review, 600 for code gen
text
code (required) — complete source code (never truncate)
task (required) — what to do: "Find bugs", "Explain this function", "Add error handling"
language — "typescript", "python", "rust", etc.
max_tokens — default 500 (200 for quick answers, 800 for code generation)
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB MCP
Editorial quality
ready
MCP server that connects Claude to any OpenAI-compatible LLM endpoint — LM Studio, Ollama, vLLM, llama.cpp, or any remote API.
Offload routine work to a local model. Keep your Claude context window for the hard stuff.
Claude is great at orchestration and reasoning. Local models are great at bulk analysis, classification, extraction, and summarisation. This server lets Claude delegate to a local model on the fly — no API keys, no cloud round-trips, no context wasted.
Common use cases:
code_task tool — purpose-built for code analysis with an optimised system prompt and sensible defaults (temp 0.2, 500 token cap)
claude mcp add houtini-lm -e LM_STUDIO_URL=http://localhost:1234 -- npx -y @houtini/lm
Add to claude_desktop_config.json:
{
"mcpServers": {
"houtini-lm": {
"command": "npx",
"args": ["-y", "@houtini/lm"],
"env": {
"LM_STUDIO_URL": "http://localhost:1234"
}
}
}
}
npx @houtini/lm
Set via environment variables or in your MCP client config:
| Variable | Default | Description |
|----------|---------|-------------|
| LM_STUDIO_URL | http://localhost:1234 | Base URL of the OpenAI-compatible API |
| LM_STUDIO_MODEL | (auto-detect) | Model identifier — leave blank to use whatever's loaded |
| LM_STUDIO_PASSWORD | (none) | Bearer token for authenticated endpoints |
chat — Delegate a bounded task to the local LLM. The workhorse for quick questions, code explanation, and pattern recognition.
message (required) — the task, with explicit output format instructions
system — persona (be specific: "Senior TypeScript dev", not "helpful assistant")
temperature — 0.1 for code, 0.3 for analysis (default), 0.5 for suggestions
max_tokens — match to expected output: 150 for quick answers, 300 for explanations, 500 for code gen
Tip: Always send complete code — local models hallucinate details for truncated input.
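For orientation, here is what a chat delegation could look like assembled from the parameter list above. The request shape follows the generic MCP tools/call convention; the persona, task text, and values are illustrative choices, not taken from the project docs.
typescript
// Hypothetical MCP tools/call payload for the chat tool (values are illustrative).
const chatCall = {
  method: "tools/call",
  params: {
    name: "chat",
    arguments: {
      // Explicit output format instructions, as the docs recommend.
      message:
        "Classify each line of the pasted log as ERROR, WARN, or INFO. Return one label per line.",
      system: "Senior site-reliability engineer", // specific persona, not "helpful assistant"
      temperature: 0.3, // analysis default
      max_tokens: 150, // quick-answer budget
    },
  },
};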
custom_prompt — Structured 3-part prompt with separate system, context, and instruction fields. The separation prevents context bleed in local models — better results than stuffing everything into a single message.
instruction (required) — what to produce (under 50 words works best)
system — persona, specific and under 30 words
context — COMPLETE data to analyse (never truncated)
temperature — 0.1 for review, 0.3 for analysis (default)
max_tokens — 200 for bullets, 400 for detailed review, 600 for code gen
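A minimal sketch of the three-part separation described above, with system, context, and instruction kept in distinct fields. The diff file path and the exact wording are assumptions for illustration.
typescript
import { readFileSync } from "node:fs";

// Hypothetical custom_prompt payload ("changes.diff" is a placeholder path).
const customPromptCall = {
  method: "tools/call",
  params: {
    name: "custom_prompt",
    arguments: {
      system: "Senior code reviewer. Terse, severity-first.", // under 30 words
      context: readFileSync("changes.diff", "utf8"), // COMPLETE data, never truncated
      instruction: "List the three riskiest changes as bullet points.", // under 50 words
      temperature: 0.1, // review setting
      max_tokens: 200, // bullet budget
    },
  },
};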
code_task — Purpose-built for code analysis. Wraps the local LLM with an optimised code-review system prompt and low temperature (0.2).
code (required) — complete source code (never truncate)
task (required) — what to do: "Find bugs", "Explain this function", "Add error handling"
language — "typescript", "python", "rust", etc.
max_tokens — default 500 (200 for quick answers, 800 for code generation)
The local LLM excels at: explaining code, finding common bugs, suggesting improvements, comparing patterns, generating boilerplate.
It struggles with: subtle/adversarial bugs, multi-file reasoning, design tasks requiring integration.
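Under the same illustrative assumptions, a code_task call built from the parameters above; the source path is a placeholder.
typescript
import { readFileSync } from "node:fs";

// Hypothetical code_task payload ("src/parse.ts" is a placeholder path).
const codeTaskCall = {
  method: "tools/call",
  params: {
    name: "code_task",
    arguments: {
      code: readFileSync("src/parse.ts", "utf8"), // complete source, never truncated
      task: "Find bugs",
      language: "typescript",
      max_tokens: 500, // default; raise toward 800 for code generation
    },
  },
};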
list_models — Returns the models currently loaded on the LLM server.
health_check — Checks connectivity. Returns response time, auth status, and loaded model count.
At typical local LLM speeds (~3-4 tokens/second on consumer hardware):
| max_tokens | Response time | Best for |
|------------|---------------|----------|
| 150 | ~45 seconds | Quick questions, classifications |
| 300 | ~100 seconds | Code explanations, summaries |
| 500 | ~170 seconds | Code review, generation |
Set max_tokens to match your expected output — lower values mean faster responses.
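The table's numbers reduce to simple division: time ≈ max_tokens / tokens-per-second. A tiny helper (ours, not the project's) makes the budgeting explicit, assuming the roughly 3 tokens/second implied by the table.
typescript
// Rough response-time estimate at local-LLM speeds (assumption: ~3 tok/s).
function estimateSeconds(maxTokens: number, tokensPerSec = 3): number {
  return Math.round(maxTokens / tokensPerSec);
}

console.log(estimateSeconds(150)); // 50  (table: ~45 seconds)
console.log(estimateSeconds(300)); // 100 (table: ~100 seconds)
console.log(estimateSeconds(500)); // 167 (table: ~170 seconds)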
| Provider | URL | Notes |
|----------|-----|-------|
| LM Studio | http://localhost:1234 | Default, zero config |
| Ollama | http://localhost:11434 | Use OpenAI-compatible mode |
| vLLM | http://localhost:8000 | Native OpenAI API |
| llama.cpp | http://localhost:8080 | Server mode |
| Remote / cloud APIs | Any URL | Set LM_STUDIO_URL + LM_STUDIO_PASSWORD |
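To check which of these providers is actually listening before pointing LM_STUDIO_URL at it, a quick probe of each /v1/models path can help. This is a sketch of ours; it assumes each server is running in its OpenAI-compatible mode on the default port from the table.
typescript
// Probe the default local endpoints from the table above (illustrative).
const providers: Record<string, string> = {
  "LM Studio": "http://localhost:1234",
  "Ollama": "http://localhost:11434",
  "vLLM": "http://localhost:8000",
  "llama.cpp": "http://localhost:8080",
};

for (const [name, base] of Object.entries(providers)) {
  fetch(`${base}/v1/models`)
    .then((r) => console.log(`${name}: HTTP ${r.status}`))
    .catch(() => console.log(`${name}: not reachable`));
}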
git clone https://github.com/houtini-ai/lm.git
cd lm
npm install
npm run build
Run the test suite against a live LLM server:
node test.mjs
MIT
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/snapshot"
curl -s "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/contract"
curl -s "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_MCP",
"generatedAt": "2026-04-17T02:17:41.721Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "mcp",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "model-context-protocol",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "mcp-server",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "lm-studio",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "ollama",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "vllm",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "openai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "openai-compatible",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "local-llm",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "claude",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "ai-tools",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "llama-cpp",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "ai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "llm",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "cli",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:MCP|unknown|profile capability:mcp|supported|profile capability:model-context-protocol|supported|profile capability:mcp-server|supported|profile capability:lm-studio|supported|profile capability:ollama|supported|profile capability:vllm|supported|profile capability:openai|supported|profile capability:openai-compatible|supported|profile capability:local-llm|supported|profile capability:claude|supported|profile capability:ai-tools|supported|profile capability:llama-cpp|supported|profile capability:ai|supported|profile capability:llm|supported|profile capability:cli|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Houtini",
"href": "https://houtini.ai",
"sourceUrl": "https://houtini.ai",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T03:07:28.982Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP",
"href": "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T03:07:28.982Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "9 GitHub stars",
"href": "https://github.com/houtini-ai/lm",
"sourceUrl": "https://github.com/houtini-ai/lm",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T03:07:28.982Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-houtini-ai-lm/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[]