Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Xpersona Agent
Production-grade agentic system development with LlamaIndex in Python. Covers semantic ingestion (SemanticSplitterNodeParser, CodeSplitter, IngestionPipeline), retrieval strategies (BM25Retriever, hybrid search, alpha weighting), PropertyGraphIndex with graph stores (Neo4j), context RAG (RouterQueryEngine, SubQuestionQueryEngine, LLMRerank), agentic orchestration (ReAct, Workflows, FunctionTool), and observability (Arize Phoenix). Use when asked to "build a LlamaIndex agent", "set up semantic chunking", "index source code", "implement hybrid search", "create a knowledge graph with LlamaIndex", "implement query routing", "debug RAG pipeline", "add Phoenix observability", or "create an event-driven workflow". Triggers on "PropertyGraphIndex", "SemanticSplitterNodeParser", "CodeSplitter", "BM25Retriever", "hybrid search", "ReAct agent", "Workflow pattern", "LLMRerank", "Text-to-Cypher".
git clone https://github.com/SpillwaveSolutions/developing-llamaindex-systems.git
Overall rank
#26
Adoption
3 GitHub stars
Trust
Unknown
Freshness
Apr 15, 2026
Freshness
Last checked Apr 15, 2026
Best For
developing-llamaindex-systems is best for general automation workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require deterministic execution: contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GitHub (OpenClaw), runtime-metrics, public facts pack
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Overview
Production-grade agentic system development with LlamaIndex in Python. Covers semantic ingestion (SemanticSplitterNodeParser, CodeSplitter, IngestionPipeline), retrieval strategies (BM25Retriever, hybrid search, alpha weighting), PropertyGraphIndex with graph stores (Neo4j), context RAG (RouterQueryEngine, SubQuestionQueryEngine, LLMRerank), agentic orchestration (ReAct, Workflows, FunctionTool), and observability (Arize Phoenix). Use when asked to "build a LlamaIndex agent", "set up semantic chunking", "index source code", "implement hybrid search", "create a knowledge graph with LlamaIndex", "implement query routing", "debug RAG pipeline", "add Phoenix observability", or "create an event-driven workflow". Triggers on "PropertyGraphIndex", "SemanticSplitterNodeParser", "CodeSplitter", "BM25Retriever", "hybrid search", "ReAct agent", "Workflow pattern", "LLMRerank", "Text-to-Cypher". Capability contract not published. No trust telemetry is available yet. 3 GitHub stars reported by the source. Last updated Apr 15, 2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
SpillwaveSolutions
Artifacts
0
Benchmarks
0
Last release
Unpublished
Install & run
git clone https://github.com/SpillwaveSolutions/developing-llamaindex-systems.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Public facts
Vendor
SpillwaveSolutions
Protocol compatibility
OpenClaw
Adoption signal
3 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Captured outputs
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
bash
pip install "llama-index-core>=0.10.0" llama-index-llms-openai llama-index-embeddings-openai arize-phoenix
python
from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import SemanticSplitterNodeParser
from llama_index.embeddings.openai import OpenAIEmbedding
embed_model = OpenAIEmbedding(model_name="text-embedding-3-small")
splitter = SemanticSplitterNodeParser(
buffer_size=1,
breakpoint_percentile_threshold=95,
embed_model=embed_model
)
docs = SimpleDirectoryReader(input_files=["data.pdf"]).load_data()
nodes = splitter.get_nodes_from_documents(docs)
python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex(nodes, embed_model=embed_model)
index.storage_context.persist(persist_dir="./storage")
python
# Confirm index built correctly
print(f"Indexed {len(index.docstore.docs)} document chunks")
# Preview a sample node
sample = list(index.docstore.docs.values())[0]
print(f"Sample chunk: {sample.text[:200]}...")
python
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What are the key concepts?")
print(response)
python
import phoenix as px
import llama_index.core
px.launch_app()
llama_index.core.set_global_handler("arize_phoenix")
# All subsequent queries are now traced
Editorial read
Docs source
GitHub (OpenClaw)
Editorial quality
ready
Build production-grade agentic RAG systems with semantic ingestion, knowledge graphs, dynamic routing, and observability.
Build a working agent in 6 steps:
pip install "llama-index-core>=0.10.0" llama-index-llms-openai llama-index-embeddings-openai arize-phoenix
See scripts/requirements.txt for full pinned dependencies.
from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import SemanticSplitterNodeParser
from llama_index.embeddings.openai import OpenAIEmbedding
embed_model = OpenAIEmbedding(model_name="text-embedding-3-small")
splitter = SemanticSplitterNodeParser(
buffer_size=1,
breakpoint_percentile_threshold=95,
embed_model=embed_model
)
docs = SimpleDirectoryReader(input_files=["data.pdf"]).load_data()
nodes = splitter.get_nodes_from_documents(docs)
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex(nodes, embed_model=embed_model)
index.storage_context.persist(persist_dir="./storage")
# Confirm index built correctly
print(f"Indexed {len(index.docstore.docs)} document chunks")
# Preview a sample node
sample = list(index.docstore.docs.values())[0]
print(f"Sample chunk: {sample.text[:200]}...")
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What are the key concepts?")
print(response)
import phoenix as px
import llama_index.core
px.launch_app()
llama_index.core.set_global_handler("arize_phoenix")
# All subsequent queries are now traced
For production script, run: python scripts/ingest_semantic.py
Six pillars for agentic systems:
| Pillar | Purpose | Reference |
|--------|---------|-----------|
| Ingestion | Semantic chunking, code splitting, metadata | references/ingestion.md |
| Retrieval | BM25 keyword search, hybrid fusion | references/retrieval-strategies.md |
| Property Graphs | Knowledge graphs + vector hybrid | references/property-graphs.md |
| Context RAG | Query routing, decomposition, reranking | references/context-rag.md |
| Orchestration | ReAct agents, event-driven Workflows | references/orchestration.md |
| Observability | Tracing, debugging, evaluation | references/observability.md |
Is the content source code?
├─ Yes → CodeSplitter
│ language="python" (or typescript, javascript, java, go)
│ chunk_lines=40, chunk_lines_overlap=15
│ → See: references/ingestion.md#codesplitter
│
└─ No, it's documents:
├─ Need semantic coherence (legal, technical docs)?
│ └─ Yes → SemanticSplitterNodeParser
│ buffer_size=1 (sensitive), 3 (stable)
│ breakpoint_percentile_threshold=95 (fewer), 70 (more)
│ → See: references/ingestion.md#semanticsplitternodeparser
│
├─ Prioritize speed → SentenceSplitter
│ chunk_size=1024, chunk_overlap=20
│ → See: references/ingestion.md#sentencesplitter
│
└─ Need fine-grained retrieval → SentenceWindowNodeParser
window_size=3 (surrounding sentences in metadata)
→ See: references/ingestion.md#sentencewindownodeparser
Trade-off: Semantic chunking requires embedding calls during ingestion (cost + latency).
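To make the percentile knob concrete: SemanticSplitterNodeParser breaks wherever the distance between adjacent sentence embeddings exceeds the chosen percentile of all adjacent distances, so a higher `breakpoint_percentile_threshold` yields fewer, larger chunks. A toy pure-Python sketch of that arithmetic (1-D stand-in "embeddings", not the real model or API):

```python
# Toy illustration of breakpoint_percentile_threshold: break wherever the
# distance between adjacent "embeddings" exceeds the Nth percentile of all
# adjacent distances. The real splitter uses cosine distance over model
# embeddings; scalar values are used here only to show the mechanism.

def breakpoints(values, percentile):
    dists = [abs(b - a) for a, b in zip(values, values[1:])]
    # nearest-rank percentile over the observed distances
    cutoff = sorted(dists)[int((len(dists) - 1) * percentile / 100)]
    return [i for i, d in enumerate(dists) if d > cutoff]

emb = [0.1, 0.12, 0.11, 0.9, 0.92, 0.2]  # topic shifts after index 2 and index 4
print(breakpoints(emb, 95))  # high threshold -> fewer breaks: [2]
print(breakpoints(emb, 70))  # lower threshold -> more breaks: [2, 4]
```

The same data produces one breakpoint at the 95th percentile and two at the 70th, matching the "fewer/more" annotations above.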
Query contains exact terms (function names, error codes, IDs)?
├─ Yes, exact match critical → BM25
│ retriever = BM25Retriever.from_defaults(nodes=nodes)
│ → See: references/retrieval-strategies.md#bm25retriever
│
├─ Conceptual/semantic query → Vector
│ retriever = index.as_retriever(similarity_top_k=5)
│ → See: references/context-rag.md
│
└─ Mixed or unknown query type → Hybrid (recommended default)
alpha=0.5 (equal weight), 0.3 (favor BM25), 0.7 (favor vector)
→ See: references/retrieval-strategies.md#hybrid-search
Trade-off: Hybrid adds BM25 index overhead but provides most robust retrieval.
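The alpha weighting in the tree above is simple arithmetic: after normalizing both score sets, the fused score is `alpha * vector + (1 - alpha) * bm25`. A minimal pure-Python sketch of that fusion (hypothetical scores; not the LlamaIndex retriever API):

```python
# Alpha-weighted hybrid fusion over min-max normalized scores.
# alpha=1.0 is pure vector search, alpha=0.0 is pure BM25.

def normalize(scores):
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_rank(bm25, vector, alpha=0.5):
    bm25, vector = normalize(bm25), normalize(vector)
    docs = bm25.keys() | vector.keys()
    fused = {d: alpha * vector.get(d, 0.0) + (1 - alpha) * bm25.get(d, 0.0)
             for d in docs}
    return sorted(fused, key=fused.get, reverse=True)

bm25_scores = {"a": 12.0, "b": 3.0, "c": 0.5}    # exact-term matching favors "a"
vector_scores = {"a": 0.2, "b": 0.9, "c": 0.85}  # semantic similarity favors "b"
print(hybrid_rank(bm25_scores, vector_scores, alpha=0.7))  # leans vector
print(hybrid_rank(bm25_scores, vector_scores, alpha=0.3))  # leans BM25
```

With alpha=0.7 the semantically similar document wins; with alpha=0.3 the exact-term match wins, which is exactly the dial described above.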
Need document navigation only (prev/next/parent)?
├─ Yes → ImplicitPathExtractor (no LLM, zero cost)
│ → See: references/property-graphs.md#implicitpathextractor
│
└─ No, need semantic relationships:
├─ Fixed ontology required (regulated domain)?
│ └─ Yes → SchemaLLMPathExtractor
│ Pass schema: {"PERSON": ["WORKS_AT"], "COMPANY": ["LOCATED_IN"]}
│ → See: references/property-graphs.md#schemallmpathextractor
│
└─ No, discovery/exploration:
└─ SimpleLLMPathExtractor
max_paths_per_chunk=10 (control noise)
→ See: references/property-graphs.md#simplellmpathextractor
Need SQL-like aggregations (COUNT, SUM)?
├─ Yes, trusted environment → TextToCypherRetriever
│ Risk: LLM syntax errors, injection
│ → See: references/property-graphs.md#texttocypherretriever
│
├─ Yes, need safety → CypherTemplateRetriever
│ Pre-define: MATCH (p:Person {name: $name}) RETURN p
│ LLM only extracts parameters
│ → See: references/property-graphs.md#cyphertemplateretriever
│
└─ No, robustness priority → VectorContextRetriever
Vector search → graph traversal (path_depth=2)
Most reliable, no code generation
→ See: references/property-graphs.md#vectorcontextretriever
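The safety argument for the template path is that the model never emits query syntax: the Cypher text is fixed ahead of time and the LLM only supplies parameter values. A minimal sketch of that idea (the `extract_params` helper is a hypothetical stand-in for the LLM call, not the CypherTemplateRetriever API):

```python
# Template-based graph querying: the Cypher string with a $name placeholder
# is sent to the graph driver unchanged, and parameters travel separately,
# so model output is treated as data, never as query syntax.
FIXED_CYPHER = "MATCH (p:Person {name: $name}) RETURN p"

def extract_params(question: str) -> dict:
    # Hypothetical stand-in for the LLM parameter-extraction step.
    return {"name": question.split("about ")[-1].rstrip("?")}

params = extract_params("What do we know about Ada Lovelace?")
# driver.execute_query(FIXED_CYPHER, params)  # query text never changes
print(FIXED_CYPHER, params)
```

Compare with Text-to-Cypher, where the model writes the whole query and can therefore produce invalid syntax or injected clauses.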
Simple tool loop sufficient?
├─ Yes → ReAct Agent (FunctionCallingAgent)
│ Tools via FunctionTool or ToolSpec
│ → See: references/orchestration.md#react-agent-pattern
│
└─ No, need:
├─ Branching/cycles → Workflow
│ → See: references/orchestration.md#branching
├─ Human-in-the-loop → Workflow (suspend/resume)
│ → See: references/orchestration.md#human-in-the-loop
├─ Multi-agent handoff → Workflow + Concierge pattern
│ → See: references/orchestration.md#concierge-multi-agent
└─ Parallel execution → Workflow with multiple event emissions
→ See: references/orchestration.md#workflows
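Workflows route events by the type hints on each step, which is why the troubleshooting advice later in this page stresses exact hint matches. A toy dispatcher showing just that mechanism (plain Python with hypothetical event classes, not the llama_index Workflow runtime):

```python
# Toy event routing by type hint: each step function annotates the event
# class it accepts; the dispatcher inspects annotations to pick the next
# step until a StopEvent is produced.
import inspect

class Event: ...
class StartEvent(Event): ...
class QueryEvent(Event):
    def __init__(self, query): self.query = query
class StopEvent(Event):
    def __init__(self, result): self.result = result

def classify(ev: StartEvent) -> QueryEvent:
    return QueryEvent(query="What is X?")

def respond(ev: QueryEvent) -> StopEvent:
    return StopEvent(result=f"answer to {ev.query!r}")

def run(steps, ev):
    while not isinstance(ev, StopEvent):
        # pick the step whose parameter annotation matches the event type
        step = next(s for s in steps
                    if inspect.signature(s).parameters["ev"].annotation is type(ev))
        ev = step(ev)
    return ev.result

print(run([classify, respond], StartEvent()))
```

If a step's annotation does not match the emitted event type exactly, `next(...)` finds nothing and the loop stalls, which mirrors the "Workflow hangs" failure mode debugged below.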
from llama_index.core.extractors import TitleExtractor, SummaryExtractor, KeywordExtractor
from llama_index.core.ingestion import IngestionPipeline
pipeline = IngestionPipeline(
transformations=[
splitter,
TitleExtractor(),
SummaryExtractor(),
KeywordExtractor(keywords=5),
embed_model,
]
)
nodes = pipeline.run(documents=docs)
from llama_index.core import PropertyGraphIndex
from llama_index.core.indices.property_graph import SimpleLLMPathExtractor
index = PropertyGraphIndex.from_documents(
docs,
embed_model=embed_model,
kg_extractors=[SimpleLLMPathExtractor(max_paths_per_chunk=10)],
)
# Hybrid: vector search + graph traversal
retriever = index.as_retriever(include_text=True)
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool
tools = [
QueryEngineTool.from_defaults(
query_engine=summary_engine,
description="High-level summaries and overviews"
),
QueryEngineTool.from_defaults(
query_engine=detail_engine,
description="Specific facts, numbers, and details"
),
]
router = RouterQueryEngine(
selector=LLMSingleSelector.from_defaults(),
query_engine_tools=tools,
)
from llama_index.core.workflow import Workflow, step, StartEvent, StopEvent, Event
class QueryEvent(Event):
    query: str

class MyAgent(Workflow):
    @step
    async def classify(self, ev: StartEvent) -> QueryEvent:
        return QueryEvent(query=ev.get("query"))

    @step
    async def respond(self, ev: QueryEvent) -> StopEvent:
        result = self.query_engine.query(ev.query)  # assumes query_engine is set on the instance
        return StopEvent(result=str(result))

# Run
agent = MyAgent(timeout=60)
result = await agent.run(query="What is X?")
from llama_index.core.postprocessor import SimilarityPostprocessor, LLMRerank
query_engine = index.as_query_engine(
similarity_top_k=10, # Retrieve more
node_postprocessors=[
SimilarityPostprocessor(similarity_cutoff=0.7),
LLMRerank(top_n=3), # Rerank to top 3
]
)
| Script | Purpose | Usage |
|--------|---------|-------|
| scripts/ingest_semantic.py | Build index with semantic chunking + graph | python scripts/ingest_semantic.py --doc path/to/file.pdf |
| scripts/agent_workflow.py | Event-driven agent template | python scripts/agent_workflow.py |
| scripts/requirements.txt | Pinned dependencies | pip install -r scripts/requirements.txt |
Adapt scripts by modifying configuration variables at the top of each file.
Load references based on task:
| Task | Load Reference |
|------|----------------|
| Configure chunking strategy | references/ingestion.md |
| Add metadata extractors | references/ingestion.md |
| Build knowledge graph | references/property-graphs.md |
| Choose graph store (Neo4j, etc.) | references/property-graphs.md |
| Implement query routing | references/context-rag.md |
| Decompose complex queries | references/context-rag.md |
| Add reranking | references/context-rag.md |
| Build ReAct agent | references/orchestration.md |
| Create Workflow | references/orchestration.md |
| Multi-agent system | references/orchestration.md |
| Setup Phoenix tracing | references/observability.md |
| Debug retrieval failures | references/observability.md |
| Evaluate agent quality | references/observability.md |
Diagnose:
# Open Phoenix UI at http://localhost:6006
# Navigate to Traces → Select query → Retrieval span → Retrieved Nodes
Fix:
# 1. Increase retrieval candidates
query_engine = index.as_query_engine(similarity_top_k=10) # was 5
# 2. Add reranking to improve precision
from llama_index.core.postprocessor import LLMRerank
query_engine = index.as_query_engine(
similarity_top_k=10,
node_postprocessors=[LLMRerank(top_n=3)]
)
Verify: Re-run query, check Phoenix shows improved relevance scores (>0.7).
Diagnose:
# Time the ingestion
import time
start = time.time()
nodes = splitter.get_nodes_from_documents(docs)
print(f"Chunking took {time.time() - start:.1f}s for {len(docs)} docs")
Fix:
# Option 1: Use local embeddings (no API calls)
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
# Option 2: Hybrid strategy for large corpora
bulk_nodes = SentenceSplitter().get_nodes_from_documents(bulk_docs)
critical_nodes = SemanticSplitterNodeParser(...).get_nodes_from_documents(critical_docs)
Verify: Re-run with show_progress=True, confirm <1s per document.
Diagnose:
# Check extracted triples
for node in index.property_graph_store.get_triplets():
print(node) # Look for irrelevant or duplicate relationships
Fix:
# Option 1: Reduce paths per chunk
SimpleLLMPathExtractor(max_paths_per_chunk=5) # was 10
# Option 2: Use strict schema
SchemaLLMPathExtractor(
possible_entities=["PERSON", "COMPANY"],
possible_relations=["WORKS_AT", "FOUNDED"],
strict=True
)
Verify: Re-index, confirm triplet count reduced and relationships are relevant.
Diagnose:
# Enable verbose mode
agent = MyWorkflow(timeout=60, verbose=True)
result = await agent.run(query="test")
# Check console for: [Step Name] Received event: EventType
Fix:
# Verify type hints match exactly
class MyEvent(Event):
    query: str

class MyWorkflow(Workflow):
    @step
    async def my_step(self, ev: MyEvent) -> StopEvent:  # parameter hint must be MyEvent
        ...
Verify: Verbose output shows [my_step] Received event: MyEvent.
Diagnose:
import phoenix as px
session = px.launch_app()
print(f"Phoenix URL: {session.url}") # Should print http://localhost:6006
Fix:
# MUST call BEFORE any LlamaIndex imports/operations
import phoenix as px
px.launch_app()
import llama_index.core
llama_index.core.set_global_handler("arize_phoenix")
# Now import and use LlamaIndex
from llama_index.core import VectorStoreIndex
Verify: Make a query, refresh Phoenix UI, trace appears within 5 seconds.
This skill is specific to LlamaIndex in Python. Do not use it for other RAG frameworks or non-Python codebases.
If unsure: Check if your use case involves semantic chunking, knowledge graphs, query routing, or multi-step agents. If yes, this skill applies.
| Term | Definition |
|------|------------|
| Node | Chunk of text with metadata, the atomic unit of retrieval |
| PropertyGraphIndex | Index combining vector embeddings with labeled property graph |
| Extractor | Component that generates graph triples from text |
| Retriever | Component that fetches relevant nodes/context |
| Postprocessor | Filters or reranks nodes after retrieval |
| Workflow | Event-driven state machine for agent orchestration |
| Span | Duration-tracked operation in observability |
Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/snapshot"
curl -s "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract"
curl -s "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust"
Operational fit
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T05:53:18.862Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Spillwavesolutions",
"href": "https://github.com/SpillwaveSolutions/developing-llamaindex-systems",
"sourceUrl": "https://github.com/SpillwaveSolutions/developing-llamaindex-systems",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T05:21:22.124Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T05:21:22.124Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "3 GitHub stars",
"href": "https://github.com/SpillwaveSolutions/developing-llamaindex-systems",
"sourceUrl": "https://github.com/SpillwaveSolutions/developing-llamaindex-systems",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T05:21:22.124Z",
"isPublic": true
},
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/spillwavesolutions-developing-llamaindex-systems/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]