Agent Dossier · GITHUB OPENCLEW · Safety 80/100

Xpersona Agent

developing-llamaindex-systems

Production-grade agentic system development with LlamaIndex in Python. Covers semantic ingestion (SemanticSplitterNodeParser, CodeSplitter, IngestionPipeline), retrieval strategies (BM25Retriever, hybrid search, alpha weighting), PropertyGraphIndex with graph stores (Neo4j), context RAG (RouterQueryEngine, SubQuestionQueryEngine, LLMRerank), agentic orchestration (ReAct, Workflows, FunctionTool), and observability (Arize Phoenix). Use when asked to "build a LlamaIndex agent", "set up semantic chunking", "index source code", "implement hybrid search", "create a knowledge graph with LlamaIndex", "implement query routing", "debug RAG pipeline", "add Phoenix observability", or "create an event-driven workflow". Triggers on "PropertyGraphIndex", "SemanticSplitterNodeParser", "CodeSplitter", "BM25Retriever", "hybrid search", "ReAct agent", "Workflow pattern", "LLMRerank", "Text-to-Cypher".

OpenClaw · self-declared
3 GitHub stars · Trust evidence available
git clone https://github.com/SpillwaveSolutions/developing-llamaindex-systems.git

Overall rank

#26

Adoption

3 GitHub stars

Trust

Unknown

Freshness

Last checked Apr 15, 2026

Best For

developing-llamaindex-systems is best for general automation workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Overview

Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.

Verified · editorial-content

Executive Summary

Capability contract not published. No trust telemetry is available yet. 3 GitHub stars reported by the source. Last updated Apr 15, 2026. The full capability description appears at the top of this dossier.

No verified compatibility signals · 3 GitHub stars

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Spillwavesolutions

Artifacts

0

Benchmarks

0

Last release

Unpublished

Install & run

Setup Snapshot

git clone https://github.com/SpillwaveSolutions/developing-llamaindex-systems.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
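That final validation step can be sketched in plain Python: a deny-by-default shim that records every outbound connection the agent attempts instead of letting it through. This is an illustrative sandbox helper, not part of the package.

```python
import socket
from contextlib import contextmanager

@contextmanager
def trace_egress(log):
    """Deny-by-default egress tracing: record every (host, port) the
    process tries to reach and block the connection."""
    original = socket.create_connection

    def blocked(address, *args, **kwargs):
        log.append(address)
        raise RuntimeError(f"egress blocked: {address}")

    socket.create_connection = blocked
    try:
        yield log
    finally:
        socket.create_connection = original
```

Run the mock payload inside the context manager, then inspect `log` before granting real credentials.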

Evidence & Timeline

Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.

Verified · editorial-content

Public facts

Evidence Ledger

Vendor (1)

Vendor

Spillwavesolutions

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Apr 15, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

3 GitHub stars

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Artifacts & Docs

Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.

Self-declared · GITHUB OPENCLEW

Captured outputs

Artifacts Archive

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

pip install "llama-index-core>=0.10.0" llama-index-llms-openai llama-index-embeddings-openai arize-phoenix

python

from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import SemanticSplitterNodeParser
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(model_name="text-embedding-3-small")
splitter = SemanticSplitterNodeParser(
    buffer_size=1,
    breakpoint_percentile_threshold=95,
    embed_model=embed_model
)

docs = SimpleDirectoryReader(input_files=["data.pdf"]).load_data()
nodes = splitter.get_nodes_from_documents(docs)

python

from llama_index.core import VectorStoreIndex

index = VectorStoreIndex(nodes, embed_model=embed_model)
index.storage_context.persist(persist_dir="./storage")

python

# Confirm index built correctly
print(f"Indexed {len(index.docstore.docs)} document chunks")

# Preview a sample node
sample = list(index.docstore.docs.values())[0]
print(f"Sample chunk: {sample.text[:200]}...")

python

query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What are the key concepts?")
print(response)

python

import phoenix as px
import llama_index.core

px.launch_app()
llama_index.core.set_global_handler("arize_phoenix")
# All subsequent queries are now traced

Editorial read

Docs & README

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: developing-llamaindex-systems
description: >-
  Production-grade agentic system development with LlamaIndex in Python. Covers semantic ingestion (SemanticSplitterNodeParser, CodeSplitter, IngestionPipeline), retrieval strategies (BM25Retriever, hybrid search, alpha weighting), PropertyGraphIndex with graph stores (Neo4j), context RAG (RouterQueryEngine, SubQuestionQueryEngine, LLMRerank), agentic orchestration (ReAct, Workflows, FunctionTool), and observability (Arize Phoenix). Use when asked to "build a LlamaIndex agent", "set up semantic chunking", "index source code", "implement hybrid search", "create a knowledge graph with LlamaIndex", "implement query routing", "debug RAG pipeline", "add Phoenix observability", or "create an event-driven workflow". Triggers on "PropertyGraphIndex", "SemanticSplitterNodeParser", "CodeSplitter", "BM25Retriever", "hybrid search", "ReAct agent", "Workflow pattern", "LLMRerank", "Text-to-Cypher".
allowed-tools:
  • Read
  • Write
  • Bash
  • WebFetch
  • Grep
  • Glob
metadata:
  version: 1.2.0
  last-updated: 2025-12-28
  category: frameworks
  python-version: ">=3.9"

LlamaIndex Agentic Systems

Build production-grade agentic RAG systems with semantic ingestion, knowledge graphs, dynamic routing, and observability.

Quick Start

Build a working agent in 6 steps:

Step 1: Install Dependencies

pip install "llama-index-core>=0.10.0" llama-index-llms-openai llama-index-embeddings-openai arize-phoenix

See scripts/requirements.txt for full pinned dependencies.

Step 2: Ingest with Semantic Chunking

from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import SemanticSplitterNodeParser
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(model_name="text-embedding-3-small")
splitter = SemanticSplitterNodeParser(
    buffer_size=1,
    breakpoint_percentile_threshold=95,
    embed_model=embed_model
)

docs = SimpleDirectoryReader(input_files=["data.pdf"]).load_data()
nodes = splitter.get_nodes_from_documents(docs)

Step 3: Build Index

from llama_index.core import VectorStoreIndex

index = VectorStoreIndex(nodes, embed_model=embed_model)
index.storage_context.persist(persist_dir="./storage")

Step 4: Verify Index

# Confirm index built correctly
print(f"Indexed {len(index.docstore.docs)} document chunks")

# Preview a sample node
sample = list(index.docstore.docs.values())[0]
print(f"Sample chunk: {sample.text[:200]}...")

Step 5: Create Query Engine

query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What are the key concepts?")
print(response)

Step 6: Enable Observability

import phoenix as px
import llama_index.core

px.launch_app()
llama_index.core.set_global_handler("arize_phoenix")
# All subsequent queries are now traced

For production script, run: python scripts/ingest_semantic.py


Architecture Overview

Six pillars for agentic systems:

| Pillar | Purpose | Reference |
|--------|---------|-----------|
| Ingestion | Semantic chunking, code splitting, metadata | references/ingestion.md |
| Retrieval | BM25 keyword search, hybrid fusion | references/retrieval-strategies.md |
| Property Graphs | Knowledge graphs + vector hybrid | references/property-graphs.md |
| Context RAG | Query routing, decomposition, reranking | references/context-rag.md |
| Orchestration | ReAct agents, event-driven Workflows | references/orchestration.md |
| Observability | Tracing, debugging, evaluation | references/observability.md |


Decision Trees

Which Node Parser?

Is the content source code?
├─ Yes → CodeSplitter
│        language="python" (or typescript, javascript, java, go)
│        chunk_lines=40, chunk_lines_overlap=15
│        → See: references/ingestion.md#codesplitter
│
└─ No, it's documents:
    ├─ Need semantic coherence (legal, technical docs)?
    │   └─ Yes → SemanticSplitterNodeParser
    │            buffer_size=1 (sensitive), 3 (stable)
    │            breakpoint_percentile_threshold=95 (fewer), 70 (more)
    │            → See: references/ingestion.md#semanticsplitternodeparser
    │
    ├─ Prioritize speed → SentenceSplitter
    │        chunk_size=1024, chunk_overlap=20
    │        → See: references/ingestion.md#sentencesplitter
    │
    └─ Need fine-grained retrieval → SentenceWindowNodeParser
             window_size=3 (surrounding sentences in metadata)
             → See: references/ingestion.md#sentencewindownodeparser

Trade-off: Semantic chunking requires embedding calls during ingestion (cost + latency).
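The breakpoint rule above can be made concrete with a toy version: measure cosine distance between consecutive sentence embeddings and split wherever the distance exceeds the chosen percentile. A pure-Python sketch of the mechanic (toy 2-D embeddings; not the real parser implementation):

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def semantic_breakpoints(embeddings, percentile=95):
    """Return indices where a new chunk starts: split wherever the distance
    between consecutive embeddings exceeds the given percentile of all
    neighbor distances (the idea behind breakpoint_percentile_threshold)."""
    dists = [cosine_distance(a, b) for a, b in zip(embeddings, embeddings[1:])]
    cutoff_idx = min(len(dists) - 1, int(len(dists) * percentile / 100))
    cutoff = sorted(dists)[cutoff_idx]
    return [i + 1 for i, d in enumerate(dists) if d > cutoff]
```

Lowering the percentile raises the number of breakpoints, which is why 95 yields fewer, larger chunks than 70.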

Which Retrieval Mode?

Query contains exact terms (function names, error codes, IDs)?
├─ Yes, exact match critical → BM25
│        retriever = BM25Retriever.from_defaults(nodes=nodes)
│        → See: references/retrieval-strategies.md#bm25retriever
│
├─ Conceptual/semantic query → Vector
│        retriever = index.as_retriever(similarity_top_k=5)
│        → See: references/context-rag.md
│
└─ Mixed or unknown query type → Hybrid (recommended default)
         alpha=0.5 (equal weight), 0.3 (favor BM25), 0.7 (favor vector)
         → See: references/retrieval-strategies.md#hybrid-search

Trade-off: Hybrid adds BM25 index overhead but provides most robust retrieval.
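The alpha values above control a weighted blend of the two score distributions. A minimal sketch of the fusion arithmetic, min-max normalizing each retriever's scores before mixing (illustrative only, not LlamaIndex's internal code):

```python
def min_max_normalize(scores):
    """Rescale scores to [0, 1] so BM25 and vector scores are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {k: 1.0 for k in scores}
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

def hybrid_fuse(bm25_scores, vector_scores, alpha=0.5):
    """alpha=0 -> pure BM25, alpha=1 -> pure vector, 0.5 -> equal weight.
    Returns node ids ranked by the blended score."""
    bm25 = min_max_normalize(bm25_scores)
    vec = min_max_normalize(vector_scores)
    ids = set(bm25) | set(vec)
    fused = {i: (1 - alpha) * bm25.get(i, 0.0) + alpha * vec.get(i, 0.0)
             for i in ids}
    return sorted(fused, key=fused.get, reverse=True)
```

With alpha=0.3 the keyword ranking dominates; with alpha=0.7 the vector ranking does, matching the guidance above.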

Which Graph Extractor?

Need document navigation only (prev/next/parent)?
├─ Yes → ImplicitPathExtractor (no LLM, zero cost)
│        → See: references/property-graphs.md#implicitpathextractor
│
└─ No, need semantic relationships:
    ├─ Fixed ontology required (regulated domain)?
    │   └─ Yes → SchemaLLMPathExtractor
    │            Pass schema: {"PERSON": ["WORKS_AT"], "COMPANY": ["LOCATED_IN"]}
    │            → See: references/property-graphs.md#schemallmpathextractor
    │
    └─ No, discovery/exploration:
        └─ SimpleLLMPathExtractor
           max_paths_per_chunk=10 (control noise)
           → See: references/property-graphs.md#simplellmpathextractor

Which Graph Retriever?

Need SQL-like aggregations (COUNT, SUM)?
├─ Yes, trusted environment → TextToCypherRetriever
│        Risk: LLM syntax errors, injection
│        → See: references/property-graphs.md#texttocypherretriever
│
├─ Yes, need safety → CypherTemplateRetriever
│        Pre-define: MATCH (p:Person {name: $name}) RETURN p
│        LLM only extracts parameters
│        → See: references/property-graphs.md#cyphertemplateretriever
│
└─ No, robustness priority → VectorContextRetriever
         Vector search → graph traversal (path_depth=2)
         Most reliable, no code generation
         → See: references/property-graphs.md#vectorcontextretriever
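The safety trade-off in this tree comes down to who writes the query. A framework-free sketch of the CypherTemplateRetriever idea: the LLM only supplies parameter values for a pre-approved template, and query text and parameters stay separate the way graph drivers expect (the template key and whitelist here are hypothetical):

```python
# Pre-approved Cypher templates; the LLM never writes query text.
APPROVED_TEMPLATES = {
    "person_by_name": ("MATCH (p:Person {name: $name}) RETURN p", {"name"}),
}

def build_query(template_key, params):
    """Return (cypher, params) for a driver to execute with bound
    parameters; reject unknown templates or unlisted parameters."""
    if template_key not in APPROVED_TEMPLATES:
        raise KeyError(f"unknown template: {template_key}")
    cypher, allowed = APPROVED_TEMPLATES[template_key]
    extra = set(params) - allowed
    if extra:
        raise ValueError(f"parameters not in whitelist: {extra}")
    return cypher, dict(params)
```

Because parameters are bound by the driver rather than spliced into the string, a malicious value cannot change the query's shape.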

Which Agent Pattern?

Simple tool loop sufficient?
├─ Yes → ReAct Agent (FunctionCallingAgent)
│        Tools via FunctionTool or ToolSpec
│        → See: references/orchestration.md#react-agent-pattern
│
└─ No, need:
    ├─ Branching/cycles → Workflow
    │   → See: references/orchestration.md#branching
    ├─ Human-in-the-loop → Workflow (suspend/resume)
    │   → See: references/orchestration.md#human-in-the-loop
    ├─ Multi-agent handoff → Workflow + Concierge pattern
    │   → See: references/orchestration.md#concierge-multi-agent
    └─ Parallel execution → Workflow with multiple event emissions
        → See: references/orchestration.md#workflows
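The "simple tool loop" branch can be sketched without the framework: a loop in which a policy (the LLM in the real agent, a stub here) either calls a named tool or returns a final answer. FunctionCallingAgent wraps essentially this pattern; the names below are illustrative.

```python
def tool_loop(decide, tools, max_steps=5):
    """ReAct-style loop. `decide(history)` returns either
    ("final", answer) or ("call", tool_name, kwargs)."""
    history = []
    for _ in range(max_steps):
        action = decide(history)
        if action[0] == "final":
            return action[1]
        _, name, kwargs = action
        observation = tools[name](**kwargs)  # tool result fed back as context
        history.append((name, kwargs, observation))
    raise RuntimeError("max steps exceeded without a final answer")
```

When you need branching, cycles, suspension, or parallel fan-out on top of this, that is the point to reach for a Workflow instead.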

Common Patterns

Pattern 1: Metadata-Enriched Ingestion

from llama_index.core.extractors import TitleExtractor, SummaryExtractor, KeywordExtractor
from llama_index.core.ingestion import IngestionPipeline

pipeline = IngestionPipeline(
    transformations=[
        splitter,
        TitleExtractor(),
        SummaryExtractor(),
        KeywordExtractor(keywords=5),
        embed_model,
    ]
)
nodes = pipeline.run(documents=docs)

Pattern 2: PropertyGraphIndex with Hybrid Retrieval

from llama_index.core import PropertyGraphIndex
from llama_index.core.indices.property_graph import SimpleLLMPathExtractor

index = PropertyGraphIndex.from_documents(
    docs,
    embed_model=embed_model,
    kg_extractors=[SimpleLLMPathExtractor(max_paths_per_chunk=10)],
)

# Hybrid: vector search + graph traversal
retriever = index.as_retriever(include_text=True)

Pattern 3: Router with Multiple Engines

from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool

tools = [
    QueryEngineTool.from_defaults(
        query_engine=summary_engine,
        description="High-level summaries and overviews"
    ),
    QueryEngineTool.from_defaults(
        query_engine=detail_engine,
        description="Specific facts, numbers, and details"
    ),
]

router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=tools,
)

Pattern 4: Event-Driven Workflow

from llama_index.core.workflow import Workflow, step, StartEvent, StopEvent, Event

class QueryEvent(Event):
    query: str

class MyAgent(Workflow):
    @step
    async def classify(self, ev: StartEvent) -> QueryEvent:
        return QueryEvent(query=ev.get("query"))

    @step
    async def respond(self, ev: QueryEvent) -> StopEvent:
        result = self.query_engine.query(ev.query)
        return StopEvent(result=str(result))

# Run
agent = MyAgent(timeout=60)
result = await agent.run(query="What is X?")

Pattern 5: Reranking Pipeline

from llama_index.core.postprocessor import SimilarityPostprocessor, LLMRerank

query_engine = index.as_query_engine(
    similarity_top_k=10,  # Retrieve more
    node_postprocessors=[
        SimilarityPostprocessor(similarity_cutoff=0.7),
        LLMRerank(top_n=3),  # Rerank to top 3
    ]
)

Script Reference

| Script | Purpose | Usage |
|--------|---------|-------|
| scripts/ingest_semantic.py | Build index with semantic chunking + graph | python scripts/ingest_semantic.py --doc path/to/file.pdf |
| scripts/agent_workflow.py | Event-driven agent template | python scripts/agent_workflow.py |
| scripts/requirements.txt | Pinned dependencies | pip install -r scripts/requirements.txt |

Adapt scripts by modifying configuration variables at the top of each file.


Reference Index

Load references based on task:

| Task | Load Reference |
|------|----------------|
| Configure chunking strategy | references/ingestion.md |
| Add metadata extractors | references/ingestion.md |
| Build knowledge graph | references/property-graphs.md |
| Choose graph store (Neo4j, etc.) | references/property-graphs.md |
| Implement query routing | references/context-rag.md |
| Decompose complex queries | references/context-rag.md |
| Add reranking | references/context-rag.md |
| Build ReAct agent | references/orchestration.md |
| Create Workflow | references/orchestration.md |
| Multi-agent system | references/orchestration.md |
| Setup Phoenix tracing | references/observability.md |
| Debug retrieval failures | references/observability.md |
| Evaluate agent quality | references/observability.md |


Troubleshooting

Agent says "I don't know" with relevant data

Diagnose:

# Open Phoenix UI at http://localhost:6006
# Navigate to Traces → Select query → Retrieval span → Retrieved Nodes

Fix:

# 1. Increase retrieval candidates
query_engine = index.as_query_engine(similarity_top_k=10)  # was 5

# 2. Add reranking to improve precision
from llama_index.core.postprocessor import LLMRerank
query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[LLMRerank(top_n=3)]
)

Verify: Re-run query, check Phoenix shows improved relevance scores (>0.7).

Semantic chunking too slow

Diagnose:

# Time the ingestion
import time
start = time.time()
nodes = splitter.get_nodes_from_documents(docs)
print(f"Chunking took {time.time() - start:.1f}s for {len(docs)} docs")

Fix:

# Option 1: Use local embeddings (no API calls)
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Option 2: Hybrid strategy for large corpora
bulk_nodes = SentenceSplitter().get_nodes_from_documents(bulk_docs)
critical_nodes = SemanticSplitterNodeParser(...).get_nodes_from_documents(critical_docs)

Verify: Re-run with show_progress=True, confirm <1s per document.

Graph extraction producing noise

Diagnose:

# Check extracted triples
for node in index.property_graph_store.get_triplets():
    print(node)  # Look for irrelevant or duplicate relationships

Fix:

# Option 1: Reduce paths per chunk
SimpleLLMPathExtractor(max_paths_per_chunk=5)  # was 10

# Option 2: Use strict schema
SchemaLLMPathExtractor(
    possible_entities=["PERSON", "COMPANY"],
    possible_relations=["WORKS_AT", "FOUNDED"],
    strict=True
)

Verify: Re-index, confirm triplet count reduced and relationships are relevant.

Workflow step not triggering

Diagnose:

# Enable verbose mode
agent = MyWorkflow(timeout=60, verbose=True)
result = await agent.run(query="test")
# Check console for: [Step Name] Received event: EventType

Fix:

# Verify type hints match exactly
class MyEvent(Event):
    query: str

@step
async def my_step(self, ev: MyEvent) -> StopEvent:  # Type hint must be MyEvent
    ...

Verify: Verbose output shows [my_step] Received event: MyEvent.

Phoenix not showing traces

Diagnose:

import phoenix as px
session = px.launch_app()
print(f"Phoenix URL: {session.url}")  # Should print http://localhost:6006

Fix:

# MUST call BEFORE any LlamaIndex imports/operations
import phoenix as px
px.launch_app()

import llama_index.core
llama_index.core.set_global_handler("arize_phoenix")

# Now import and use LlamaIndex
from llama_index.core import VectorStoreIndex

Verify: Make a query, refresh Phoenix UI, trace appears within 5 seconds.


When Not to Use This Skill

This skill is specific to LlamaIndex in Python. Do not use for:

  • LangChain projects — Different framework, different APIs
  • Pure vector search without agents — Simpler solutions exist
  • Non-Python environments — All examples are Python 3.9+
  • Local-only / offline setups — Scripts default to OpenAI APIs; modification required for local models
  • Simple Q&A bots — Overkill if you don't need graphs, routing, or workflows

If unsure: Check if your use case involves semantic chunking, knowledge graphs, query routing, or multi-step agents. If yes, this skill applies.


Glossary

| Term | Definition |
|------|------------|
| Node | Chunk of text with metadata, the atomic unit of retrieval |
| PropertyGraphIndex | Index combining vector embeddings with labeled property graph |
| Extractor | Component that generates graph triples from text |
| Retriever | Component that fetches relevant nodes/context |
| Postprocessor | Filters or reranks nodes after retrieval |
| Workflow | Event-driven state machine for agent orchestration |
| Span | Duration-tracked operation in observability |

API & Reliability

Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.

MissingGITHUB OPENCLEW

Machine interfaces

Contract & API

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples

curl -s "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/snapshot"
curl -s "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract"
curl -s "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust"

Operational fit

Reliability & Benchmarks

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Machine Appendix

Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.

MissingGITHUB OPENCLEW

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T05:53:18.862Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
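The retryPolicy above is straightforward to implement client-side: retry only the listed conditions, sleeping the matching backoff between attempts. A sketch of that policy (signaling the condition via the exception message is an assumption of this example, not part of the API):

```python
import time

RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}
BACKOFF_MS = [500, 1500, 3500]

def call_with_retries(fn, max_attempts=3, sleep=time.sleep):
    """Retry fn() per the policy above; non-retryable errors and the
    final failed attempt surface immediately."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError as err:
            if str(err) not in RETRYABLE or attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000.0)
```

The `sleep` parameter is injectable so the backoff schedule can be tested without waiting.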

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Spillwavesolutions",
    "href": "https://github.com/SpillwaveSolutions/developing-llamaindex-systems",
    "sourceUrl": "https://github.com/SpillwaveSolutions/developing-llamaindex-systems",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "3 GitHub stars",
    "href": "https://github.com/SpillwaveSolutions/developing-llamaindex-systems",
    "sourceUrl": "https://github.com/SpillwaveSolutions/developing-llamaindex-systems",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
