Crawler Summary

mcp-local-rag answer-first brief

Local RAG MCP Server: easy-to-set-up document search with minimal configuration. Local RAG for developers using MCP: semantic search with keyword boost for exact technical terms, fully private, zero setup. Vector search runs first, then keyword matching boosts exact matches, so terms like useEffect, error codes, and class names rank higher rather than being only semantically guessed. Documents are chunked by meaning rather than character count. Capability contract not published. No trust telemetry is available yet. 132 GitHub stars reported by the source. Last updated 2/25/2026.

Freshness

Last checked 2/25/2026

Best For

mcp-local-rag is best for typescript, mcp, mcp-server workflows where MCP compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB MCP, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 100/100

mcp-local-rag

Local RAG MCP Server: easy-to-set-up document search with minimal configuration. Semantic search with keyword boost for exact technical terms, fully private, zero setup. Vector search runs first, then keyword matching boosts exact matches, so terms like useEffect, error codes, and class names rank higher. Documents are chunked by meaning rather than character count.

MCP · self-declared

Public facts

5

Change events

1

Artifacts

0

Freshness

Feb 25, 2026

Verified · editorial-content · No verified compatibility signals · 132 GitHub stars

Capability contract not published. No trust telemetry is available yet. 132 GitHub stars reported by the source. Last updated 2/25/2026.

132 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

MCP

Freshness

Feb 25, 2026

Vendor

Shinpr

Artifacts

0

Benchmarks

0

Last release

0.7.0

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 132 GitHub stars reported by the source. Last updated 2/25/2026.

Setup snapshot

git clone https://github.com/shinpr/mcp-local-rag.git
  1. Setup complexity is MEDIUM. Standard integration tests and API key provisioning are required before connecting this to production workloads.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Shinpr

profile · medium
Observed Feb 25, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

MCP

contract · medium
Observed Feb 25, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

132 GitHub stars

profile · medium
Observed Feb 25, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB MCP

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Executable Examples

json

{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}

toml

[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"

bash

claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag

text

You: "Ingest api-spec.pdf"
Assistant: Successfully ingested api-spec.pdf (47 chunks created)

You: "What does the API documentation say about authentication?"
Assistant: Based on the documentation, authentication uses OAuth 2.0 with JWT tokens.
          The flow is described in section 3.2...

text

"Ingest the document at /Users/me/docs/api-spec.pdf"

text

"Fetch https://example.com/docs and ingest the HTML"

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB MCP

Docs source

GITHUB MCP

Editorial quality

ready

Local RAG MCP Server: easy-to-set-up document search with minimal configuration. Semantic search with keyword boost for exact technical terms, fully private, zero setup. Vector search runs first, then keyword matching boosts exact matches, so terms like useEffect, error codes, and class names rank higher. Documents are chunked by meaning rather than character count.

Full README

MCP Local RAG


Local RAG for developers using MCP. Semantic search with keyword boost for exact technical terms — fully private, zero setup.

Features

  • Semantic search with keyword boost Vector search first, then keyword matching boosts exact matches. Terms like useEffect, error codes, and class names rank higher—not just semantically guessed.

  • Smart semantic chunking Chunks documents by meaning, not character count. Uses embedding similarity to find natural topic boundaries—keeping related content together and splitting where topics change.

  • Quality-first result filtering Groups results by relevance gaps instead of arbitrary top-K cutoffs. Get fewer but more trustworthy chunks.

  • Runs entirely locally No API keys, no cloud, no data leaving your machine. Works fully offline after the first model download.

  • Zero-friction setup One npx command. No Docker, no Python, no servers to manage. Designed for Cursor, Codex, and Claude Code via MCP.

Quick Start

Set BASE_DIR to the folder you want to search. Documents must live under it.

Add the MCP server to your AI coding tool:

For Cursor — Add to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}

For Codex — Add to ~/.codex/config.toml:

[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"

For Claude Code — Run this command:

claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag

Restart your tool, then start using it:

You: "Ingest api-spec.pdf"
Assistant: Successfully ingested api-spec.pdf (47 chunks created)

You: "What does the API documentation say about authentication?"
Assistant: Based on the documentation, authentication uses OAuth 2.0 with JWT tokens.
          The flow is described in section 3.2...

That's it. No installation, no Docker, no complex setup.

Why This Exists

You want AI to search your documents—technical specs, research papers, internal docs. But most solutions send your files to external APIs.

Privacy. Your documents might contain sensitive data. This runs entirely locally.

Cost. External embedding APIs charge per use. This is free after the initial model download.

Offline. Works without internet after setup.

Code search. Pure semantic search misses exact terms like useEffect or ERR_CONNECTION_REFUSED. Keyword boost catches both meaning and exact matches.

Usage

The server provides six MCP tools: ingest_file, ingest_data, query_documents, list_files, delete_file, and status (file ingestion, raw-data ingestion, search, listing, deletion, and health reporting).

Ingesting Documents

"Ingest the document at /Users/me/docs/api-spec.pdf"

Supports PDF, DOCX, TXT, and Markdown. The server extracts text, splits it into chunks, generates embeddings locally, and stores everything in a local vector database.

Re-ingesting the same file replaces the old version automatically.

Ingesting HTML Content

Use ingest_data to ingest HTML content retrieved by your AI assistant (via web fetch, curl, browser tools, etc.):

"Fetch https://example.com/docs and ingest the HTML"

The server extracts main content using Readability (removes navigation, ads, etc.), converts to Markdown, and indexes it. Perfect for:

  • Web documentation
  • HTML retrieved by the AI assistant
  • Clipboard content

HTML is automatically cleaned—you get the article content, not the boilerplate.

Note: The RAG server itself doesn't fetch web content—your AI assistant retrieves it and passes the HTML to ingest_data. This keeps the server fully local while letting you index any content your assistant can access. Please respect website terms of service and copyright when ingesting external content.

Searching Documents

"What does the API documentation say about authentication?"
"Find information about rate limiting"
"Search for error handling best practices"

Search uses semantic similarity with keyword boost. This means useEffect finds documents containing that exact term, not just semantically similar React concepts.

Results include text content, source file, document title, and relevance score. The document title provides context for each chunk, helping identify which document a result belongs to. Adjust result count with limit (1-20, default 10).
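The result fields and the limit clamp described above can be sketched as a type; the field names here are illustrative assumptions, not the server's actual response schema:

```typescript
// Illustrative shape of one search result, per the description above.
// Field names are assumptions, not the server's actual schema.
interface SearchResult {
  text: string;          // chunk content
  sourceFile: string;    // originating file
  documentTitle: string; // context for identifying the source document
  score: number;         // relevance score
}

// limit is clamped to 1-20, defaulting to 10 when unset.
function clampLimit(limit?: number): number {
  if (limit === undefined) return 10;
  return Math.min(20, Math.max(1, limit));
}

console.log(clampLimit());   // 10
console.log(clampLimit(0));  // 1
console.log(clampLimit(50)); // 20
```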

Managing Files

"List all files in BASE_DIR and their ingested status"   # See what's indexed
"Delete old-spec.pdf from RAG"     # Remove a file
"Show RAG server status"           # Check system health

Search Tuning

Adjust these for your use case:

| Variable | Default | Description |
|----------|---------|-------------|
| RAG_HYBRID_WEIGHT | 0.6 | Keyword boost factor. 0 = semantic only; higher = stronger keyword boost. |
| RAG_GROUPING | (not set) | `similar` for top group only, `related` for top 2 groups. |
| RAG_MAX_DISTANCE | (not set) | Filter out low-relevance results (e.g., 0.5). |
| RAG_MAX_FILES | (not set) | Limit results to top N files (e.g., 1 for single best file). |

Code-focused tuning

For codebases and API specs, increase keyword boost so exact identifiers (useEffect, ERR_*, class names) dominate ranking:

"env": {
  "RAG_HYBRID_WEIGHT": "0.7",
  "RAG_GROUPING": "similar"
}
  • 0.7 — balanced semantic + keyword
  • 1.0 — aggressive; exact matches strongly rerank results

Keyword boost is applied after semantic filtering, so it improves precision without surfacing unrelated matches.

How It Works

TL;DR:

  • Documents are chunked by semantic similarity, not fixed character counts
  • Each chunk is embedded locally using Transformers.js
  • Search uses semantic similarity with keyword boost for exact matches
  • Results are filtered based on relevance gaps, not raw scores

Details

When you ingest a document, the parser extracts text based on file type (PDF via pdfjs-dist, DOCX via mammoth, text files directly).

The semantic chunker splits text into sentences, then groups them using embedding similarity. It finds natural topic boundaries where the meaning shifts—keeping related content together instead of cutting at arbitrary character limits. This produces chunks that are coherent units of meaning, typically 500-1000 characters. Markdown code blocks are kept intact—never split mid-block—preserving copy-pastable code in search results.
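The boundary-finding idea can be illustrated with a minimal sketch: embed each sentence, compare adjacent embeddings with cosine similarity, and start a new chunk where similarity drops. This shows the general technique only; the embeddings and threshold below are toy values, not the project's actual chunker.

```typescript
// Minimal sketch of similarity-based chunk boundaries.
// Toy values; not the project's actual chunker implementation.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Split where adjacent-sentence similarity falls below the threshold.
function chunkBySimilarity(embeddings: number[][], threshold = 0.5): number[][] {
  const chunks: number[][] = [[0]];
  for (let i = 1; i < embeddings.length; i++) {
    if (cosine(embeddings[i - 1], embeddings[i]) < threshold) {
      chunks.push([i]);                   // topic shift: start a new chunk
    } else {
      chunks[chunks.length - 1].push(i);  // same topic: extend current chunk
    }
  }
  return chunks; // each inner array holds sentence indices for one chunk
}

// Sentences 0 and 1 point the same way; sentence 2 shifts topic.
const toy = [[1, 0.1], [0.9, 0.2], [-0.2, 1]];
console.log(chunkBySimilarity(toy)); // [[0,1],[2]]
```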

Each chunk goes through a Transformers.js embedding model (default: all-MiniLM-L6-v2, configurable via MODEL_NAME), converting text into vectors. Vectors are stored in LanceDB, a file-based vector database requiring no server process.

When you search:

  1. Your query becomes a vector using the same model
  2. Semantic (vector) search finds the most relevant chunks
  3. Quality filters apply (distance threshold, grouping)
  4. Keyword matches boost rankings for exact term matching

The keyword boost ensures exact terms like useEffect or error codes rank higher when they match.
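The four-step flow can be sketched as a reranking pass over semantic candidates, assuming a simple additive scoring rule; this is an illustration of semantic-then-keyword reranking, not the server's actual formula, and `hybridWeight` merely mirrors the role of RAG_HYBRID_WEIGHT:

```typescript
// Illustrative hybrid reranking: take semantic candidates, then boost
// exact keyword matches. Not the project's actual scoring formula.
interface Candidate {
  text: string;
  semanticScore: number; // higher = more semantically relevant
}

function rerank(candidates: Candidate[], query: string, hybridWeight = 0.6): Candidate[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return [...candidates].sort((a, b) => score(b) - score(a));

  function score(c: Candidate): number {
    const text = c.text.toLowerCase();
    const hits = terms.filter((t) => text.includes(t)).length;
    // Keyword boost is added on top of the semantic score.
    return c.semanticScore + hybridWeight * hits;
  }
}

const ranked = rerank(
  [
    { text: "React hooks manage component state", semanticScore: 0.8 },
    { text: "useEffect runs after every render", semanticScore: 0.75 },
  ],
  "useEffect"
);
console.log(ranked[0].text); // the chunk containing the exact term ranks first
```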

Agent Skills

Agent Skills provide optimized prompts that help AI assistants use RAG tools more effectively. Install skills for better query formulation, result interpretation, and ingestion workflows:

# Claude Code (project-level)
npx mcp-local-rag skills install --claude-code

# Claude Code (user-level)
npx mcp-local-rag skills install --claude-code --global

# Codex
npx mcp-local-rag skills install --codex

Skills include:

  • Query optimization: Better search query formulation
  • Result interpretation: Score thresholds and filtering guidelines
  • HTML ingestion: Format selection and source naming

Ensuring Skill Activation

Skills are loaded automatically in most cases—AI assistants scan skill metadata and load relevant instructions when needed. For consistent behavior:

Option 1: Explicit request (natural language) Before RAG operations, request in natural language:

  • "Use the mcp-local-rag skill for this search"
  • "Apply RAG best practices from skills"

Option 2: Add to agent instruction file Add to your AGENTS.md, CLAUDE.md, or other agent instruction file:

When using query_documents, ingest_file, or ingest_data tools,
apply the mcp-local-rag skill for optimal query formulation and result interpretation.
<details> <summary><strong>Configuration</strong></summary>

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| BASE_DIR | Current directory | Document root directory (security boundary) |
| DB_PATH | ./lancedb/ | Vector database location |
| CACHE_DIR | ./models/ | Model cache directory |
| MODEL_NAME | Xenova/all-MiniLM-L6-v2 | HuggingFace model ID (available models) |
| MAX_FILE_SIZE | 104857600 (100MB) | Maximum file size in bytes |

Model choice tips:

  • Multilingual docs → e.g., onnx-community/embeddinggemma-300m-ONNX (100+ languages)
  • Scientific papers → e.g., sentence-transformers/allenai-specter (citation-aware)
  • Code repositories → default often suffices; keyword boost matters more (or jinaai/jina-embeddings-v2-base-code)

⚠️ Changing MODEL_NAME changes embedding dimensions. Delete DB_PATH and re-ingest after switching models.

Client-Specific Setup

Cursor — Global: ~/.cursor/mcp.json, Project: .cursor/mcp.json

{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}

Codex — ~/.codex/config.toml (note: must use mcp_servers with underscore)

[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"

Claude Code:

claude mcp add local-rag --scope user \
  --env BASE_DIR=/path/to/your/documents \
  -- npx -y mcp-local-rag

First Run

The embedding model (~90MB) downloads on first use. Takes 1-2 minutes, then works offline.

Security

  • Path restriction: Only files within BASE_DIR are accessible
  • Local only: No network requests after model download
  • Model source: Official HuggingFace repository (verify here)
</details> <details> <summary><strong>Performance</strong></summary>

Tested on MacBook Pro M1 (16GB RAM), Node.js 22:

Query Speed: ~1.2 seconds for 10,000 chunks (p90 < 3s)

Ingestion (10MB PDF):

  • PDF parsing: ~8s
  • Chunking: ~2s
  • Embedding: ~30s
  • DB insertion: ~5s

Memory: ~200MB idle, ~800MB peak (50MB file ingestion)

Concurrency: Handles 5 parallel queries without degradation.

</details> <details> <summary><strong>Troubleshooting</strong></summary>

"No results found"

Documents must be ingested first. Run "List all ingested files" to verify.

Model download failed

Check internet connection. If behind a proxy, configure network settings. The model can also be downloaded manually.

"File too large"

Default limit is 100MB. Split large files or increase MAX_FILE_SIZE.

Slow queries

Check chunk count with status. Large documents with many chunks may slow queries. Consider splitting very large files.

"Path outside BASE_DIR"

Ensure file paths are within BASE_DIR. Use absolute paths.

MCP client doesn't see tools

  1. Verify config file syntax
  2. Restart client completely (Cmd+Q on Mac for Cursor)
  3. Test directly: npx mcp-local-rag should run without errors
</details> <details> <summary><strong>FAQ</strong></summary>

Is this really private? Yes. After model download, nothing leaves your machine. Verify with network monitoring.

Can I use this offline? Yes, after the first model download (~90MB).

How does this compare to cloud RAG? Cloud services offer better accuracy at scale but require sending data externally. This trades some accuracy for complete privacy and zero runtime cost.

What file formats are supported? PDF, DOCX, TXT, Markdown, and HTML (via ingest_data). Not yet: Excel, PowerPoint, images.

Can I change the embedding model? Yes, but you must delete your database and re-ingest all documents. Different models produce incompatible vector dimensions.

GPU acceleration? Transformers.js runs on CPU. GPU support is experimental. CPU performance is adequate for most use cases.

Multi-user support? No. Designed for single-user, local access. Multi-user would require authentication/access control.

How to backup? Copy DB_PATH directory (default: ./lancedb/).

</details> <details> <summary><strong>Development</strong></summary>

Building from Source

git clone https://github.com/shinpr/mcp-local-rag.git
cd mcp-local-rag
pnpm install

Testing

pnpm test              # Run all tests
pnpm run test:watch    # Watch mode

Code Quality

pnpm run type-check    # TypeScript check
pnpm run check:fix     # Lint and format
pnpm run check:deps    # Circular dependency check
pnpm run check:all     # Full quality check

Project Structure

src/
  index.ts      # Entry point
  server/       # MCP tool handlers
  parser/       # PDF, DOCX, TXT, MD parsing
  chunker/      # Text splitting
  embedder/     # Transformers.js embeddings
  vectordb/     # LanceDB operations
  __tests__/    # Test suites
</details>

Contributing

Contributions welcome! See CONTRIBUTING.md for setup and guidelines.

License

MIT License. Free for personal and commercial use.

Blog Posts

Acknowledgments

Built with Model Context Protocol by Anthropic, LanceDB, and Transformers.js.

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB MCP

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

MCP: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/snapshot"
curl -s "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/contract"
curl -s "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITLAB_AI_CATALOG · gitlab-mcp

Rank

83

A Model Context Protocol (MCP) server for GitLab

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_PUBLIC_PROJECTS · gitlab-mcp

Rank

80

A Model Context Protocol (MCP) server for GitLab

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_AI_CATALOG · rmcp-openapi

Rank

74

Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_AI_CATALOG · rmcp-actix-web

Rank

72

An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)

Traction

No public download signal

Freshness

Updated 2d ago

MCP
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "MCP"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_MCP",
      "generatedAt": "2026-04-17T00:03:21.420Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
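The retryPolicy above can be applied mechanically. A minimal sketch follows; only the attempt count, backoff values, and retryable condition codes come from the JSON, while the operation wrapper and error-to-code mapping are assumptions:

```typescript
// Minimal retry loop matching the retryPolicy block above:
// up to 3 attempts, backoff of 500, 1500, then 3500 ms,
// retrying only HTTP_429, HTTP_503, and NETWORK_TIMEOUT.
const RETRYABLE = new Set(["HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"]);
const BACKOFF_MS = [500, 1500, 3500];
const MAX_ATTEMPTS = 3;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(op: () => Promise<T>): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastErr = err;
      // Assumed mapping: the error message carries the condition code.
      const code = err instanceof Error ? err.message : String(err);
      if (!RETRYABLE.has(code) || attempt === MAX_ATTEMPTS - 1) throw err;
      await sleep(BACKOFF_MS[attempt]);
    }
  }
  throw lastErr;
}
```

For example, a snapshot fetch could be wrapped as `withRetry(() => fetchSnapshot())`, where `fetchSnapshot` is any caller-supplied operation.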

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "MCP",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "typescript",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "mcp",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "mcp-server",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "model-context-protocol",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "rag",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "vector-search",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "semantic-search",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "embeddings",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "lancedb",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "transformers",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "huggingface",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "local-ai",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "offline",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "privacy",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "document-search",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "pdf",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "claude-code",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "cursor",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "codex",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "skills",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "cli",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:MCP|unknown|profile capability:typescript|supported|profile capability:mcp|supported|profile capability:mcp-server|supported|profile capability:model-context-protocol|supported|profile capability:rag|supported|profile capability:vector-search|supported|profile capability:semantic-search|supported|profile capability:embeddings|supported|profile capability:lancedb|supported|profile capability:transformers|supported|profile capability:huggingface|supported|profile capability:local-ai|supported|profile capability:offline|supported|profile capability:privacy|supported|profile capability:document-search|supported|profile capability:pdf|supported|profile capability:claude-code|supported|profile capability:cursor|supported|profile capability:codex|supported|profile capability:skills|supported|profile capability:cli|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Shinpr",
    "href": "https://github.com/shinpr/mcp-local-rag",
    "sourceUrl": "https://github.com/shinpr/mcp-local-rag",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T03:05:21.042Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "MCP",
    "href": "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-25T03:05:21.042Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "132 GitHub stars",
    "href": "https://github.com/shinpr/mcp-local-rag",
    "sourceUrl": "https://github.com/shinpr/mcp-local-rag",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T03:05:21.042Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/mcp-shinpr-mcp-local-rag/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
