Crawler Summary

langchain-playground answer-first brief

A playground for LangChain.js, LangGraph, Slack, Model Context Protocol (MCP) and other LLM-related tools, written in TypeScript. The project provides both REST API endpoints and Slack bot integration for interacting with different language models and with LangChain and LangGraph workflows. Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 2/25/2026.

Freshness

Last checked 2/25/2026

Best For

langchain-playground is best for LangChain and TypeScript workflows where MCP compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB MCP, runtime-metrics, public facts pack

Agent Dossier
GitHub
Safety: 89/100

langchain-playground

A playground for LangChain.js, LangGraph, Slack, Model Context Protocol (MCP) and other LLM-related tools, written in TypeScript. The project provides both REST API endpoints and Slack bot integration for interacting with different language models and with LangChain and LangGraph workflows.

MCP (self-declared)

Public facts

5

Change events

1

Artifacts

0

Freshness

Feb 25, 2026

Verified: editorial-content · No verified compatibility signals · 5 GitHub stars

Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 2/25/2026.

5 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

MCP

Freshness

Feb 25, 2026

Vendor

Chrisleekr

Artifacts

0

Benchmarks

0

Last release

0.0.1

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified: editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 2/25/2026.

Setup snapshot

git clone https://github.com/chrisleekr/langchain-playground.git
  1. Setup complexity is MEDIUM. Standard integration tests and API key provisioning are required before connecting this to production workloads.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified: editorial-content
Vendor (1)

Vendor

Chrisleekr

profile · medium confidence
Observed Feb 25, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

MCP

contract · medium confidence
Observed Feb 25, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

5 GitHub stars

profile · medium confidence
Observed Feb 25, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared: agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared: GITHUB MCP

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Executable Examples

mermaid

flowchart TB
    subgraph top [" "]
        direction TB
        LC[LangChain.js] --> Supervisor["Investigate<br/>(Supervisor)"]
    end
    
    Supervisor --> SupervisorFlow
    
    subgraph SupervisorFlow ["Supervisor Prompt - Investigation flow"]
        direction TB
        
        subgraph agents [" "]
            direction LR
            NR["NewRelic Expert<br/>(ReAct Agent)"]
            SE["Sentry Expert<br/>(ReAct Agent)"]
            RE["Research Expert<br/>(ReAct Agent)"]
            AWS["AWS ECS Expert<br/>(ReAct Agent)"]
        end
        
        subgraph NRTools ["Tools"]
            NR1["Get Issue/Incident/Alert from NewRelic<br/>(get_investigation_context)"]
            NR2["Use LLM to generate trace NRQL for<br/>violated logs based on alert title<br/>and alert NRQL<br/>(generate_log_nrql_query)"]
            NR3["Use LLM to generate NRQL to get<br/>trace logs based on trace id<br/>(generate_trace_logs_query)"]
            NR4["Fetch logs and use LLM to summarise<br/>investigation information<br/>(fetch_and_analyze_logs)"]
            NR1 --> NR2 --> NR3 --> NR4
        end
        
        subgraph SETools ["Tools"]
            SE1["Get issues from Sentry<br/>(investigate_and_analyze_sentry_issue)"]
        end
        
        subgraph RETools ["Tools"]
            RE1["Brave Search MCP"]
            RE2["Context7 MCP"]
            RE3["More MCPs"]
        end
        
        subgraph AWSTools ["Tools"]
            AWS1["Analyses ECS task status, CloudWatch<br/>metrics and service events<br/>(investigate_and_analyze_ecs_tasks)"]
        end
        
        NR --> NRTools
        SE --> SETools
        RE --> RETools
        AWS --> AWSTools
    end
    
    SupervisorFlow --> Final["Return final summarised investigation"]

mermaid

flowchart TB
    subgraph header [" "]
        direction LR
        LC[LangChain.js]
        Slack[Slack]
        MCP[MCP Tool]
    end
    
    Investigate((Investigate)) -.-> Sentry[Sentry]
    
    Investigate --> GetIssue["Get issue from Sentry"]
    GetIssue --> NormalizeIssue["Normalize Sentry issue<br/>- Remove unnecessary data from issue"]
    
    NormalizeIssue --> GetEvent["Get latest issue event from Sentry"]
    GetEvent --> NormalizeEvent["Normalize Sentry issue event<br/>- Extract only necessary event data<br/>including stack trace"]
    
    NormalizeEvent --> HasStackTrace{"Retrieved stack trace?"}
    
    HasStackTrace -->|No| Summarize["Use LLM to summarise<br/>investigation information"]
    
    HasStackTrace -->|Yes| LoopStackTrace
    
    subgraph LoopStackTrace ["Loop stack trace"]
        direction TB
        CheckNodeModules{"filename contains<br/>node_modules?"}
        CheckNodeModules -->|"If yes, skip"| CheckNodeModules
        CheckNodeModules -->|"No, then fetch the file"| CheckAvailable{"Are filename and function<br/>available?<br/>- missing for anonymous frames"}
        CheckAvailable -->|"If no, skip"| CheckNodeModules
        CheckAvailable -->|"Yes, available"| FetchFile["Fetch file content from<br/>source code repository"]
        FetchFile --> ExtractBody["Extract function body"]
        ExtractBody --> Override["Override stack trace with original<br/>source code function body"]
        Override --> CheckNodeModules
    end
    
    FetchFile -.-> GitHub[GitHub]
    FetchFile -.-> GitLab[GitLab]
    FetchFile -.-> Bitbucket[Bitbucket]
    
    LoopStackTrace --> Summarize

mermaid

flowchart TB
    subgraph header [" "]
        direction LR
        LC[LangChain.js]
        Slack[Slack]
        MCP[MCP Tool]
    end
    
    Investigate((Investigate))
    
    Investigate --> GetIssue["Get Issue from NewRelic"]
    GetIssue --> GetIncident["Get Incident from NewRelic"]
    GetIncident --> GetAlert["Get alert from NewRelic"]
    GetAlert --> GenerateNRQL["Use LLM to generate trace NRQL for<br/>violated logs based on alert title<br/>and alert NRQL"]
    
    GenerateNRQL --> ExtractTrace["Execute NRQL to extract trace<br/>logs based on trace id"]
    ExtractTrace --> GenerateTraceNRQL["Use LLM to generate NRQL to<br/>get trace logs based on trace id"]
    GenerateTraceNRQL --> GetFullLogs["Get full logs from NewRelic"]
    
    GetFullLogs --> FilterEnvoy["Filter envoy logs"]
    GetFullLogs --> FilterService["Filter service logs"]
    GetFullLogs --> FilterURLs["Filter logs for retrieving<br/>relevant URLs"]
    
    FilterEnvoy --> TimelineEnvoy["Use LLM to generate timeline<br/>from envoy logs"]
    FilterService --> IdentifyErrors["Use LLM to identify errors<br/>from service logs"]
    FilterURLs --> ConstructURLs["Use LLM to construct any<br/>relevant URLs"]
    
    TimelineEnvoy --> Summarize["Use LLM to summarise<br/>investigation information"]
    IdentifyErrors --> Summarize
    ConstructURLs --> Summarize

mermaid

flowchart TB
    LC[LangChain.js] --> SlackThread[Slack Thread]
    SlackThread --> GetReplies["Get all replies from Slack thread"]
    GetReplies --> EnrichReplies["Enrich replies such as Images,<br/>NewRelic query"]
    EnrichReplies --> DetermineSolution["Use LLM to determine whether there is<br/>a solution to the problem in the replies"]
    
    DetermineSolution -->|"No, then do not process"| End1((End))
    
    DetermineSolution --> GenerateRunbook["Use LLM to generate Alert runbook"]
    GenerateRunbook --> SendDM["Send Alert Runbook to the<br/>requester's DM"]
    SendDM --> ReviewRunbook["Requester reviews Alert Runbook"]
    
    ReviewRunbook -->|"Not correct, then do not process"| End2((End))
    
    ReviewRunbook --> RequestSave["Requester requests to save<br/>Alert Runbook"]
    RequestSave --> SaveConfluence["Save the Alert runbook<br/>into Confluence"]
    SaveConfluence --> TriggerSync["Trigger Knowledge Base Sync"]
    
    TriggerSync --> OpenSearch
    
    subgraph KnowledgeBaseSync [" "]
        direction LR
        Confluence[Confluence] -.->|"Data source: Confluence"| Bedrock["AWS Bedrock<br/>Embedding Model"]
        Bedrock --> OpenSearch["Knowledge Base<br/>AWS OpenSearch Serverless<br/>(Vector Store)"]
    end

mermaid

flowchart LR
    Github>Github]
    Confluence>Confluence]
    PDFTextImage>"PDF/Text/Image"]
    
    PDFTextImage --> UnstructuredAPI["Unstructured<br/>API"]
    
    Github --> Chunking
    Confluence --> Chunking
    UnstructuredAPI --> Chunking
    
    Chunking["Chunking<br/>ParentDocumentRetriever<br/>RecursiveCharacterTextSplitter"] --> Embedding[Embedding]
    
    Embedding --> VectorDB[(Vector<br/>database)]

mermaid

flowchart TB
    subgraph QueryFlow ["Query Flow"]
        direction TB
        LC[LangChain.js] --> Query((Query))
        Query --> GenerateVariations["Use Bedrock Converse to generate<br/>query variations"]
        GenerateVariations --> KBRetriever["Use Amazon Knowledge Base retriever<br/>to get relevant documents"]
        KBRetriever --> GetFullDocs["Get full documents from OpenSearch<br/>for relevant documents"]
        GetFullDocs --> VerifyDocs["Verify each document whether it's relevant<br/>to the query variations<br/>If not, exclude from documents"]
        VerifyDocs --> GenerateAnswer["Generate answer based on filtered<br/>documents and query variations"]
    end
    
    subgraph DataIngestion ["Data Ingestion"]
        direction TB
        Upload["Upload markdown to AWS S3"] --> S3[AWS S3]
        S3 -->|"Data source: S3"| BedrockEmbed["AWS Bedrock<br/>Embedding Model"]
        Confluence[Confluence] -->|"Data source: Confluence"| BedrockEmbed
        BedrockEmbed --> KnowledgeBase["Knowledge Base<br/>AWS OpenSearch Serverless<br/>(Vector Store)"]
        KnowledgeBase --> VectorIndex["Vector Index"]
        BedrockEmbed2["AWS Bedrock<br/>Embedding Model"] --> KnowledgeBase
    end

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared: GITHUB MCP

Docs source

GITHUB MCP

Editorial quality

ready

A playground for LangChain.js, LangGraph, Slack, Model Context Protocol (MCP) and other LLM-related tools, written in TypeScript. The project provides both REST API endpoints and Slack bot integration for interacting with different language models and with LangChain and LangGraph workflows.

Full README

A LangChain playground using TypeScript

A playground for LangChain.js, LangGraph, Slack, Model Context Protocol (MCP) and other LLM-related tools.

This project provides both REST API endpoints and Slack bot integration for interacting with different language models and with LangChain and LangGraph workflows.

Architecture

Core components

  • langchain.js: Framework for building applications with LLMs.
  • langgraph: Framework for building applications with advanced workflow orchestration for multi-step processes.
  • slack/bolt: Integration with Slack for building Slack apps.
  • Model Context Protocol (MCP): MCP is a protocol for building LLM-powered tools.

LLM providers

Document Loaders

Services

  • ollama: Ollama enables the execution of LLM models locally.
  • openweb-ui: OpenWeb UI is a self-hosted WebUI that interacts with Ollama.
  • unstructured-api: The Unstructured API is designed to ingest/digest files of various types and sizes.
  • qdrant: Qdrant serves as a vector database.
  • chroma: Chroma serves as an embedding database (no longer used).
  • redis: Redis is an open-source in-memory data structure store.
  • chunkhound: ChunkHound provides semantic code search and architecture analysis via MCP.

Server mode

  • fastify: serves as a web server in src/api
  • slack: serves as a Slack app in src/slack

Multi-Agent Investigation System

In this project, I used LangGraph Supervisor to build a multi-agent investigation system.

Refer to Multi-agent for more details.

Supervisor coordinates six specialized domain agents:

| Agent | Purpose | Tools |
|-------|---------|-------|
| New Relic Expert | Alerts, logs, APM data | NRQL queries, log analysis, trace correlation |
| Sentry Expert | Error tracking, crashes | Issue lookup, event analysis, stack traces |
| Research Expert | External documentation | Brave Search, Context7, Kubernetes (MCP) |
| AWS ECS Expert | AWS ECS | ECS task status, container health, CloudWatch Container Insights metrics, service deployment, task placement, historical task event lookup, container exit codes, performance bottleneck analysis |
| AWS RDS Expert | AWS RDS monitoring | RDS instance status, Performance Insights, CloudWatch metrics, top SQL queries |
| Code Research Expert | Codebase analysis | ChunkHound semantic search, regex patterns, architecture analysis |

Workflow:

  1. Analyze - Supervisor determines relevant domain(s) from the query
  2. Delegate - Routes to appropriate domain agent(s) in parallel or sequence
  3. Synthesize - Combines findings into a unified InvestigationSummary
  4. Return - Structured response with root cause, impact, and recommendations

Key Features:

  • Recursion limit protection - Prevents infinite agent loops
  • Timeout protection - Configurable per-request and per-step timeouts
  • Cost tracking - Token usage and cost calculation via callbacks
flowchart TB
    subgraph top [" "]
        direction TB
        LC[LangChain.js] --> Supervisor["Investigate<br/>(Supervisor)"]
    end
    
    Supervisor --> SupervisorFlow
    
    subgraph SupervisorFlow ["Supervisor Prompt - Investigation flow"]
        direction TB
        
        subgraph agents [" "]
            direction LR
            NR["NewRelic Expert<br/>(ReAct Agent)"]
            SE["Sentry Expert<br/>(ReAct Agent)"]
            RE["Research Expert<br/>(ReAct Agent)"]
            AWS["AWS ECS Expert<br/>(ReAct Agent)"]
        end
        
        subgraph NRTools ["Tools"]
            NR1["Get Issue/Incident/Alert from NewRelic<br/>(get_investigation_context)"]
            NR2["Use LLM to generate trace NRQL for<br/>violated logs based on alert title<br/>and alert NRQL<br/>(generate_log_nrql_query)"]
            NR3["Use LLM to generate NRQL to get<br/>trace logs based on trace id<br/>(generate_trace_logs_query)"]
            NR4["Fetch logs and use LLM to summarise<br/>investigation information<br/>(fetch_and_analyze_logs)"]
            NR1 --> NR2 --> NR3 --> NR4
        end
        
        subgraph SETools ["Tools"]
            SE1["Get issues from Sentry<br/>(investigate_and_analyze_sentry_issue)"]
        end
        
        subgraph RETools ["Tools"]
            RE1["Brave Search MCP"]
            RE2["Context7 MCP"]
            RE3["More MCPs"]
        end
        
        subgraph AWSTools ["Tools"]
            AWS1["Analyses ECS task status, CloudWatch<br/>metrics and service events<br/>(investigate_and_analyze_ecs_tasks)"]
        end
        
        NR --> NRTools
        SE --> SETools
        RE --> RETools
        AWS --> AWSTools
    end
    
    SupervisorFlow --> Final["Return final summarised investigation"]
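
The supervisor's Analyze and Delegate steps amount to routing a query to one or more domain agents. A minimal sketch of that idea in plain TypeScript; the keyword rules below are purely illustrative (the project itself uses LangGraph Supervisor with LLM-driven routing), and the agent names are taken from the diagram above.

```typescript
// Illustrative keyword → agent map; the real supervisor delegates via an LLM.
const domainAgents: Record<string, string[]> = {
  "NewRelic Expert": ["nrql", "newrelic", "alert"],
  "Sentry Expert": ["sentry", "stack trace", "crash"],
  "Research Expert": ["documentation", "docs", "search"],
  "AWS ECS Expert": ["ecs", "cloudwatch", "container"],
};

// Analyze + Delegate: select every agent whose domain keywords match the query.
function routeQuery(query: string): string[] {
  const q = query.toLowerCase();
  const matched = Object.entries(domainAgents)
    .filter(([, keywords]) => keywords.some((k) => q.includes(k)))
    .map(([agent]) => agent);
  // Fall back to the Research Expert when no domain matches.
  return matched.length > 0 ? matched : ["Research Expert"];
}
```

A query can match several domains at once, which is what lets the supervisor fan out to agents in parallel before synthesizing a single InvestigationSummary.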

Sentry alert analysis

In this project, I used LangGraph to build a workflow to analyze Sentry alerts.

The workflow, at a high level, is as follows:

  1. Get Sentry issue and first event
  2. Normalize the issue and event and extend the stacktrace to source code fetching from GitHub
  3. Generate a summary of the investigation using the normalized issue and event
flowchart TB
    subgraph header [" "]
        direction LR
        LC[LangChain.js]
        Slack[Slack]
        MCP[MCP Tool]
    end
    
    Investigate((Investigate)) -.-> Sentry[Sentry]
    
    Investigate --> GetIssue["Get issue from Sentry"]
    GetIssue --> NormalizeIssue["Normalize Sentry issue<br/>- Remove unnecessary data from issue"]
    
    NormalizeIssue --> GetEvent["Get latest issue event from Sentry"]
    GetEvent --> NormalizeEvent["Normalize Sentry issue event<br/>- Extract only necessary event data<br/>including stack trace"]
    
    NormalizeEvent --> HasStackTrace{"Retrieved stack trace?"}
    
    HasStackTrace -->|No| Summarize["Use LLM to summarise<br/>investigation information"]
    
    HasStackTrace -->|Yes| LoopStackTrace
    
    subgraph LoopStackTrace ["Loop stack trace"]
        direction TB
        CheckNodeModules{"filename contains<br/>node_modules?"}
        CheckNodeModules -->|"If yes, skip"| CheckNodeModules
        CheckNodeModules -->|"No, then fetch the file"| CheckAvailable{"Are filename and function<br/>available?<br/>- missing for anonymous frames"}
        CheckAvailable -->|"If no, skip"| CheckNodeModules
        CheckAvailable -->|"Yes, available"| FetchFile["Fetch file content from<br/>source code repository"]
        FetchFile --> ExtractBody["Extract function body"]
        ExtractBody --> Override["Override stack trace with original<br/>source code function body"]
        Override --> CheckNodeModules
    end
    
    FetchFile -.-> GitHub[GitHub]
    FetchFile -.-> GitLab[GitLab]
    FetchFile -.-> Bitbucket[Bitbucket]
    
    LoopStackTrace --> Summarize
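
The "Loop stack trace" subgraph above makes two checks per frame: skip node_modules frames, and skip frames whose filename or function is missing (e.g. anonymous functions). A small sketch of that filter in TypeScript; the frame shape is an assumption for illustration, not the project's actual type.

```typescript
// Hypothetical frame shape for illustration.
interface StackFrame {
  filename: string | null;
  function: string | null;
}

// Keep only frames worth enriching with original source code:
// drop node_modules frames and anonymous frames, mirroring the
// two decision nodes in the diagram above.
function framesToEnrich(frames: StackFrame[]): StackFrame[] {
  return frames.filter(
    (f) =>
      f.filename !== null &&
      f.function !== null &&
      !f.filename.includes("node_modules"),
  );
}
```

Only the surviving frames are fetched from the source repository and have their stack-trace entries overridden with the original function bodies.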

New Relic log analysis

In this project, I used LangGraph to build a workflow to analyze New Relic logs.

The workflow, at a high level, is as follows:

  1. Get New Relic logs
  2. Analyze New Relic logs to get the request timeline, service error logs and relevant URLs
  3. Generate a summary of the investigation by analyzing the request timeline, service error logs and relevant URLs
flowchart TB
    subgraph header [" "]
        direction LR
        LC[LangChain.js]
        Slack[Slack]
        MCP[MCP Tool]
    end
    
    Investigate((Investigate))
    
    Investigate --> GetIssue["Get Issue from NewRelic"]
    GetIssue --> GetIncident["Get Incident from NewRelic"]
    GetIncident --> GetAlert["Get alert from NewRelic"]
    GetAlert --> GenerateNRQL["Use LLM to generate trace NRQL for<br/>violated logs based on alert title<br/>and alert NRQL"]
    
    GenerateNRQL --> ExtractTrace["Execute NRQL to extract trace<br/>logs based on trace id"]
    ExtractTrace --> GenerateTraceNRQL["Use LLM to generate NRQL to<br/>get trace logs based on trace id"]
    GenerateTraceNRQL --> GetFullLogs["Get full logs from NewRelic"]
    
    GetFullLogs --> FilterEnvoy["Filter envoy logs"]
    GetFullLogs --> FilterService["Filter service logs"]
    GetFullLogs --> FilterURLs["Filter logs for retrieving<br/>relevant URLs"]
    
    FilterEnvoy --> TimelineEnvoy["Use LLM to generate timeline<br/>from envoy logs"]
    FilterService --> IdentifyErrors["Use LLM to identify errors<br/>from service logs"]
    FilterURLs --> ConstructURLs["Use LLM to construct any<br/>relevant URLs"]
    
    TimelineEnvoy --> Summarize["Use LLM to summarise<br/>investigation information"]
    IdentifyErrors --> Summarize
    ConstructURLs --> Summarize
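
The three filter branches above fan the fetched logs into envoy, service, and URL-bearing subsets before the per-branch LLM analyses. A plain-TypeScript sketch of that partitioning; the log-line shape and the envoy test are assumptions for illustration.

```typescript
// Hypothetical log-line shape for illustration.
interface LogLine {
  service: string;
  message: string;
}

// One pass over the fetched logs, feeding the three analysis branches.
function partitionLogs(lines: LogLine[]) {
  return {
    envoy: lines.filter((l) => l.service === "envoy"),        // → request timeline
    service: lines.filter((l) => l.service !== "envoy"),      // → error identification
    urls: lines.filter((l) => /https?:\/\//.test(l.message)), // → relevant URLs
  };
}
```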

Generate Alert Runbook from Slack thread

The idea of this workflow is to generate an Alert Runbook from a Slack thread and send it to the user. Once the user approves the Alert Runbook, the RCA is added to it.

The workflow, at a high level, is as follows:

  1. Get all replies from the Slack thread
  2. Enrich replies with images, NewRelic queries, etc.
  3. Use an LLM to determine whether the replies contain a solution to the problem
  4. Use an LLM to generate the Alert Runbook from the replies and solution
  5. Send the Alert Runbook to the user for approval
  6. If the user approves the Alert Runbook, add the RCA to it
flowchart TB
    LC[LangChain.js] --> SlackThread[Slack Thread]
    SlackThread --> GetReplies["Get all replies from Slack thread"]
    GetReplies --> EnrichReplies["Enrich replies such as Images,<br/>NewRelic query"]
    EnrichReplies --> DetermineSolution["Use LLM to determine whether there is<br/>a solution to the problem in the replies"]
    
    DetermineSolution -->|"No, then do not process"| End1((End))
    
    DetermineSolution --> GenerateRunbook["Use LLM to generate Alert runbook"]
    GenerateRunbook --> SendDM["Send Alert Runbook to the<br/>requester's DM"]
    SendDM --> ReviewRunbook["Requester reviews Alert Runbook"]
    
    ReviewRunbook -->|"Not correct, then do not process"| End2((End))
    
    ReviewRunbook --> RequestSave["Requester requests to save<br/>Alert Runbook"]
    RequestSave --> SaveConfluence["Save the Alert runbook<br/>into Confluence"]
    SaveConfluence --> TriggerSync["Trigger Knowledge Base Sync"]
    
    TriggerSync --> OpenSearch
    
    subgraph KnowledgeBaseSync [" "]
        direction LR
        Confluence[Confluence] -.->|"Data source: Confluence"| Bedrock["AWS Bedrock<br/>Embedding Model"]
        Bedrock --> OpenSearch["Knowledge Base<br/>AWS OpenSearch Serverless<br/>(Vector Store)"]
    end

Answer from Retriever-Augmented Generation (RAG)

In this project, the following routes answer the user's question from the document RAG retrieval.

Routes:

  • DELETE /document/reset: Reset the document RAG retrieval.
  • PUT /document/load/directory: Load documents from a directory using Unstructured API + Parent document retriever.
  • PUT /document/load/confluence: Load documents from Confluence + Parent document retriever.
  • POST /document/query: Answer the user's question from the document RAG retrieval.

Document loader process

flowchart LR
    Github>Github]
    Confluence>Confluence]
    PDFTextImage>"PDF/Text/Image"]
    
    PDFTextImage --> UnstructuredAPI["Unstructured<br/>API"]
    
    Github --> Chunking
    Confluence --> Chunking
    UnstructuredAPI --> Chunking
    
    Chunking["Chunking<br/>ParentDocumentRetriever<br/>RecursiveCharacterTextSplitter"] --> Embedding[Embedding]
    
    Embedding --> VectorDB[(Vector<br/>database)]
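
The Chunking node names RecursiveCharacterTextSplitter; the core idea is to split on the coarsest separator that keeps chunks under a size limit, recursing to finer separators when a piece is still too large. A hedged miniature of that idea in TypeScript; this is not LangChain's implementation, just a sketch of the recursion.

```typescript
// Miniature recursive splitter: try separators from coarse to fine.
function splitText(
  text: string,
  chunkSize: number,
  separators: string[] = ["\n\n", "\n", " "],
): string[] {
  if (text.length <= chunkSize) return [text];
  const [sep, ...finer] = separators;
  if (sep === undefined) {
    // No separators left: hard-split by size.
    const out: string[] = [];
    for (let i = 0; i < text.length; i += chunkSize) out.push(text.slice(i, i + chunkSize));
    return out;
  }
  const chunks: string[] = [];
  let current = "";
  for (const part of text.split(sep)) {
    const candidate = current ? current + sep + part : part;
    if (candidate.length <= chunkSize) {
      current = candidate; // greedily pack parts into the current chunk
      continue;
    }
    if (current) {
      chunks.push(current);
      current = "";
    }
    if (part.length > chunkSize) {
      chunks.push(...splitText(part, chunkSize, finer)); // recurse with finer separators
    } else {
      current = part;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

In the real pipeline these small chunks are embedded into the vector database, while ParentDocumentRetriever keeps a mapping back to the full parent documents.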

Document query process

AWS Bedrock Knowledge Base

flowchart TB
    subgraph QueryFlow ["Query Flow"]
        direction TB
        LC[LangChain.js] --> Query((Query))
        Query --> GenerateVariations["Use Bedrock Converse to generate<br/>query variations"]
        GenerateVariations --> KBRetriever["Use Amazon Knowledge Base retriever<br/>to get relevant documents"]
        KBRetriever --> GetFullDocs["Get full documents from OpenSearch<br/>for relevant documents"]
        GetFullDocs --> VerifyDocs["Verify each document whether it's relevant<br/>to the query variations<br/>If not, exclude from documents"]
        VerifyDocs --> GenerateAnswer["Generate answer based on filtered<br/>documents and query variations"]
    end
    
    subgraph DataIngestion ["Data Ingestion"]
        direction TB
        Upload["Upload markdown to AWS S3"] --> S3[AWS S3]
        S3 -->|"Data source: S3"| BedrockEmbed["AWS Bedrock<br/>Embedding Model"]
        Confluence[Confluence] -->|"Data source: Confluence"| BedrockEmbed
        BedrockEmbed --> KnowledgeBase["Knowledge Base<br/>AWS OpenSearch Serverless<br/>(Vector Store)"]
        KnowledgeBase --> VectorIndex["Vector Index"]
        BedrockEmbed2["AWS Bedrock<br/>Embedding Model"] --> KnowledgeBase
    end

Parent document retriever

flowchart LR
    Query["Query<br/>(User)"] --> LLM1["LLM<br/>Create query variation"]
    LLM1 --> Retriever["Retriever<br/>Invoke with query"]
    Retriever --> VectorStore["Vector Store<br/>Get full documents from<br/>Vector database"]
    VectorStore <--> VectorDB[(Vector<br/>database)]
    
    VectorStore --> LLM2["LLM<br/>Verify documents relevancy<br/>and exclude irrelevant<br/>documents"]
    LLM2 --> LLM3["LLM<br/>Generate answer based on<br/>query variation + relevant<br/>documents"]
    LLM3 --> ReturnAnswer["Return answer"]
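
The parent-document pattern above matches on small chunks but hands full documents to the LLM. A minimal sketch of that lookup step in TypeScript; the in-memory store and the shapes are illustrative stand-ins (the project uses a vector database and a real docstore).

```typescript
// Hypothetical shape of a vector-search hit on a child chunk.
interface ChunkHit {
  parentId: string;
  text: string;
}

// Stand-in for the docstore that maps chunk hits back to full documents.
const parentStore = new Map<string, string>([
  ["doc-1", "Full text of document one."],
  ["doc-2", "Full text of document two."],
]);

// Dedupe chunk hits by parent and return each full document once.
function fetchParents(hits: ChunkHit[]): string[] {
  const seen = new Set<string>();
  const parents: string[] = [];
  for (const hit of hits) {
    if (seen.has(hit.parentId)) continue;
    seen.add(hit.parentId);
    const full = parentStore.get(hit.parentId);
    if (full !== undefined) parents.push(full);
  }
  return parents;
}
```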

Slack integration

In this project, I used slack/bolt and LangGraph to build a Slack app.

  • When a user mentions the bot in a channel, the bot will respond with a message.
  • It will execute the following steps:
    • Intent classifier: Classify the intent of the user's message.
    • Intent router: Route the user's message to the appropriate node.
    • Get message history: Get the message history of the channel.
    • MCP tools: Use MCP tools to fetch information via the Model Context Protocol.
    • Summarise thread: Summarise the thread.
    • Translate message: Translate the message to the user's language.
    • Find information: Find information from the RAG database.
    • General response: Generate a general response.
    • Final response: Respond to the user's message.
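
The intent classifier and router pair in the steps above can be pictured as a function from message to handler name. A toy rule-based version in TypeScript; the project uses an LLM classifier, so these string rules and intent names are only illustrative.

```typescript
// Illustrative intent names, loosely matching the steps listed above.
type Intent =
  | "summarise_thread"
  | "translate_message"
  | "find_information"
  | "general_response";

// Toy stand-in for the LLM intent classifier.
function classifyIntent(message: string): Intent {
  const m = message.toLowerCase();
  if (m.includes("summarise") || m.includes("summarize")) return "summarise_thread";
  if (m.includes("translate")) return "translate_message";
  if (m.includes("find") || m.includes("look up")) return "find_information";
  return "general_response";
}
```

The router then dispatches the classified intent to the matching node (message history, MCP tools, RAG lookup, etc.) before the final response is posted back to Slack.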

How to start

docker-compose up -d --build

Prerequisites for ChunkHound (Code Research)

If using the Code Research agent, ensure Ollama has the required models:

# Required for ChunkHound embeddings and LLM
ollama pull mxbai-embed-large:latest
ollama pull llama3.1:8b

Then enable ChunkHound in your .env:

CHUNKHOUND_ENABLED=true
GITHUB_REPOSITORIES_ENABLED=true

Endpoints

Multi-Agent Investigation

  • POST /agent/investigate - Unified investigation using domain agents

Document Management (RAG)

  • DELETE /document/reset - Reset document store
  • PUT /document/load/directory - Load documents from directory
  • PUT /document/load/confluence - Load from Confluence
  • POST /document/query - Query documents with RAG

LLM Provider Threads

  • POST /{provider}/thread - Create conversation thread (openai, groq, ollama)
  • GET|POST /{provider}/thread/:id - Get/continue specific thread

LangGraph Workflows

  • POST /langgraph/thread - Create LangGraph workflow thread
  • POST /langgraph/newrelic/investigate - New Relic log analysis
  • POST /langgraph/sentry/investigate - Sentry issue investigation

Health

  • GET /health - Health check

Todo

  • [ ] Add more examples
  • [ ] Add tests
  • [ ] Improve documentation

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing: GITHUB MCP

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

MCP: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/snapshot"
curl -s "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/contract"
curl -s "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing: runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing: no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared: protocol-neighbors
GITLAB_AI_CATALOG · gitlab-mcp

Rank

83

A Model Context Protocol (MCP) server for GitLab

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_PUBLIC_PROJECTS · gitlab-mcp

Rank

80

A Model Context Protocol (MCP) server for GitLab

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_AI_CATALOG · rmcp-openapi

Rank

74

Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_AI_CATALOG · rmcp-actix-web

Rank

72

An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)

Traction

No public download signal

Freshness

Updated 2d ago

MCP
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "MCP"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_MCP",
      "generatedAt": "2026-04-17T02:42:46.546Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
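
The retryPolicy above (3 attempts, 500/1500/3500 ms backoff, retry only on HTTP_429, HTTP_503, or NETWORK_TIMEOUT) can be sketched as a small helper. One assumption here: the failure condition is carried in the error's message string; a real client would inspect status codes instead.

```typescript
interface RetryPolicy {
  maxAttempts: number;
  backoffMs: number[];
  retryableConditions: string[];
}

// Values copied from the Invocation Guide's retryPolicy.
const policy: RetryPolicy = {
  maxAttempts: 3,
  backoffMs: [500, 1500, 3500],
  retryableConditions: ["HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"],
};

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Retry fn up to maxAttempts times, waiting backoffMs[attempt] between
// tries, but only when the failure condition is listed as retryable.
async function withRetry<T>(
  fn: () => Promise<T>,
  p: RetryPolicy = policy,
  wait: (ms: number) => Promise<void> = sleep, // injectable for tests
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < p.maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      const condition = err instanceof Error ? err.message : String(err);
      const isLast = attempt === p.maxAttempts - 1;
      if (!p.retryableConditions.includes(condition) || isLast) throw err;
      await wait(p.backoffMs[attempt] ?? p.backoffMs[p.backoffMs.length - 1]);
    }
  }
  throw lastErr;
}
```

For example, `withRetry(() => fetch(snapshotUrl).then(r => r.json()))` would retry a rate-limited snapshot call twice before giving up, while a 404 would fail immediately.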

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "MCP",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "LangChain",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "TypeScript",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:MCP|unknown|profile capability:LangChain|supported|profile capability:TypeScript|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Chrisleekr",
    "href": "https://github.com/chrisleekr/langchain-playground#readme",
    "sourceUrl": "https://github.com/chrisleekr/langchain-playground#readme",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T03:15:48.067Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "MCP",
    "href": "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-25T03:15:48.067Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "5 GitHub stars",
    "href": "https://github.com/chrisleekr/langchain-playground",
    "sourceUrl": "https://github.com/chrisleekr/langchain-playground",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T03:15:48.067Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/mcp-chrisleekr-langchain-playground/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
