Crawler Summary

openclaw-vector-memory answer-first brief

Vector Memory Skill: semantic memory system using ChromaDB + Sentence Transformers to reduce token usage by 75-88%. OpenClaw's current memory system sends the full conversation history to the LLM on every request, which consumes tokens unnecessarily and costs money. Vector memory solves this by (1) indexing conversations in a vector database (ChromaDB) and (2) retrieving only relevant context via semantic search. Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.

Freshness

Last checked 2/25/2026

Best For

openclaw-vector-memory is best suited to "organize" workflows (its declared capability) where OpenClaw compatibility matters.

Not Ideal For

Workflows that require deterministic execution, since contract metadata is missing or unavailable.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 89/100

openclaw-vector-memory

Vector Memory Skill: semantic memory system using ChromaDB + Sentence Transformers that reduces token usage by 75-88% by indexing conversations in a vector database and retrieving only relevant context via semantic search.

OpenClaw (self-declared)

Public facts

4

Change events

1

Artifacts

0

Freshness

Feb 25, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.

Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Feb 25, 2026

Vendor

Zanderh Code

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.

Setup snapshot

git clone https://github.com/ZanderH-code/openclaw-vector-memory.git

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Zanderh Code

profile · medium confidence
Observed Feb 25, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Feb 25, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

text

┌─────────────────┐    ┌──────────────────┐    ┌────────────────────┐
│   Conversation  │───▶│  Embedding Model │───▶│   ChromaDB Vector  │
│     History     │    │ (sentence-bert)  │    │     Database       │
└─────────────────┘    └──────────────────┘    └────────────────────┘
                                                                │
┌─────────────────┐    ┌──────────────────┐    ┌────────────────────┐
│     Query       │───▶│  Semantic Search │◀───│  Relevant Context  │
│ (User question) │    │     Engine       │    │     Retrieval      │
└─────────────────┘    └──────────────────┘    └────────────────────┘
                                                                │
┌─────────────────┐    ┌──────────────────┐    ┌────────────────────┐
│   Final Prompt  │◀───│   LLM Response   │◀───│    Reduced Tokens  │
│ (Concise input) │    │  (Full context)  │    │    (75-88% less)   │
└─────────────────┘    └──────────────────┘    └────────────────────┘

bash

# In your OpenClaw project directory
pip install chromadb sentence-transformers numpy

bash

mkdir -p ~/.openclaw/vector-memory
mkdir -p ~/.openclaw/vector-memory/storage

python

from real_vector_memory import OpenClawVectorMemory

# Initialize
memory = OpenClawVectorMemory(
    storage_path="~/.openclaw/vector-memory/storage",
    model_name="all-MiniLM-L6-v2"
)
memory.initialize()

# Index existing conversations
memory.index_file("~/.openclaw/workspace/memory/MEMORY.md")
memory.index_file("~/.openclaw/workspace/memory/2024-01-01.md")
memory.index_directory("~/.openclaw/workspace/memory/")

# Add new conversation
memory.add_memory(
    text="User asked about reducing token usage with vector databases",
    metadata={
        "type": "conversation",
        "date": "2024-01-01",
        "topic": "vector-db"
    }
)

# Search for relevant context
results = memory.search_memory(
    query="How to reduce token usage?",
    max_results=5,
    max_tokens=1500
)

print(f"Retrieved {len(results['documents'])} relevant snippets")
print(f"Token savings: {results['token_savings']:,} tokens")

python

# vector_memory_integration.py
import os
from real_vector_memory import OpenClawVectorMemory

class OpenClawVectorIntegration:
    def __init__(self):
        self.memory = None
        self.initialized = False
        
    def initialize(self):
        """Initialize vector memory system"""
        try:
            self.memory = OpenClawVectorMemory(
                storage_path=os.path.expanduser("~/.openclaw/vector-memory/storage")
            )
            self.memory.initialize()
            self.initialized = True
            return True
        except Exception as e:
            print(f"Vector memory initialization failed: {e}")
            return False
    
    def add_conversation(self, text, metadata=None):
        """Add conversation to vector memory"""
        if not self.initialized:
            return False
        
        metadata = metadata or {}
        metadata.setdefault("type", "conversation")
        
        return self.memory.add_memory(text, metadata)
    
    def search_memory(self, query, max_tokens=1500):
        """Search for relevant context"""
        if not self.initialized:
            return ""
        
        results = self.memory.search_memory(
            query=query,
            max_tokens=max_tokens
        )
        
        return results.get("combined_text", "")
    
    def index_workspace(self):
        """Index all OpenClaw workspace files"""
        if not self.initialized:
            return False
        
        # Index memory files
        workspace_dir = os.path.expanduser("~/.openclaw/workspace")
        
        # MEMORY.md
        memory_file = os.path.join(workspace_dir, "MEMORY.md")
        if os.path.exists(memory_file):
            self.memory.index_file(memory_file)
        
        # Daily memory files
        memory_dir = os.path.join(workspace_dir, "memory")
        if os.path.exists(memory_dir):
            self.memory.index_directory(memory_dir)
        
        return True

python

# Example configuration
VECTOR_MEMORY_CONFIG = {
    "storage_path": "~/.openclaw/vector-memory/storage",
    "model_name": "all-MiniLM-L6-v2",
    "chunk_size": 1000,  # Characters per chunk
    "chunk_overlap": 200,
    "max_tokens_per_query": 1500,
    "embedding_dimension": 384,  # all-MiniLM-L6-v2 uses 384 dimensions
    "distance_metric": "cosine",
    "collection_name": "openclaw_conversations"
}

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

Vector Memory Skill

Semantic memory system using ChromaDB + Sentence Transformers to reduce token usage by 75-88%

Why Vector Memory?

OpenClaw's current memory system sends full conversation history to the LLM on every request. This consumes tokens unnecessarily and costs money. Vector memory solves this by:

  1. Indexing conversations in a vector database (ChromaDB)
  2. Retrieving only relevant context via semantic search
  3. Dramatically reducing token usage from 164k → ~20k tokens per request
  4. Saving costs by 75-88%
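
The three steps above can be sketched as a retrieve-then-prompt loop. This is a sketch only: `build_prompt` and `answer` are hypothetical helper names, not part of this skill's API; the real entry point is `OpenClawVectorMemory.search_memory`.

```python
def build_prompt(question, snippets):
    """Assemble a compact prompt from only the retrieved snippets."""
    context = "\n---\n".join(snippets)
    return f"Context:\n{context}\n\nQuestion: {question}"

def answer(question, memory, max_tokens=1500):
    # Retrieve only the relevant context via semantic search,
    # then send a concise prompt instead of the full history.
    results = memory.search_memory(query=question, max_tokens=max_tokens)
    return build_prompt(question, results["documents"])
```

The LLM then sees only the retrieved snippets plus the question, which is where the token reduction comes from.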

Architecture

┌─────────────────┐    ┌──────────────────┐    ┌────────────────────┐
│   Conversation  │───▶│  Embedding Model │───▶│   ChromaDB Vector  │
│     History     │    │ (sentence-bert)  │    │     Database       │
└─────────────────┘    └──────────────────┘    └────────────────────┘
                                                                │
┌─────────────────┐    ┌──────────────────┐    ┌────────────────────┐
│     Query       │───▶│  Semantic Search │◀───│  Relevant Context  │
│ (User question) │    │     Engine       │    │     Retrieval      │
└─────────────────┘    └──────────────────┘    └────────────────────┘
                                                                │
┌─────────────────┐    ┌──────────────────┐    ┌────────────────────┐
│   Final Prompt  │◀───│   LLM Response   │◀───│    Reduced Tokens  │
│ (Concise input) │    │  (Full context)  │    │    (75-88% less)   │
└─────────────────┘    └──────────────────┘    └────────────────────┘

Performance Benefits

| Metric | Before (Full Context) | After (Vector Memory) | Savings |
|--------|------------------------|----------------------|----------|
| Context tokens | 164,000 | 20,000 | 144,000 (87.8%) |
| Cost per request | $0.082 | $0.010 | $0.072 (87.8%) |
| Monthly (50 queries/day) | $123.00 | $15.00 | $108.00 |
| Latency | Higher (full context) | Lower (semantic search) | 30-50% faster |
| Accuracy | Complete context | Relevant context | Similar or better |

Installation

1. Install Dependencies

# In your OpenClaw project directory
pip install chromadb sentence-transformers numpy

2. Copy Vector Memory Files

Copy the following files from projects/vector-memory-poc/ to your workspace:

  • real_vector_memory.py - Core vector memory implementation
  • simple_test.py - Quick test script
  • requirements.txt - Dependencies

3. Setup Directory Structure

mkdir -p ~/.openclaw/vector-memory
mkdir -p ~/.openclaw/vector-memory/storage

Usage

Quick Start

from real_vector_memory import OpenClawVectorMemory

# Initialize
memory = OpenClawVectorMemory(
    storage_path="~/.openclaw/vector-memory/storage",
    model_name="all-MiniLM-L6-v2"
)
memory.initialize()

# Index existing conversations
memory.index_file("~/.openclaw/workspace/memory/MEMORY.md")
memory.index_file("~/.openclaw/workspace/memory/2024-01-01.md")
memory.index_directory("~/.openclaw/workspace/memory/")

# Add new conversation
memory.add_memory(
    text="User asked about reducing token usage with vector databases",
    metadata={
        "type": "conversation",
        "date": "2024-01-01",
        "topic": "vector-db"
    }
)

# Search for relevant context
results = memory.search_memory(
    query="How to reduce token usage?",
    max_results=5,
    max_tokens=1500
)

print(f"Retrieved {len(results['documents'])} relevant snippets")
print(f"Token savings: {results['token_savings']:,} tokens")

Integration with OpenClaw

Create an integration script:

# vector_memory_integration.py
import os
from real_vector_memory import OpenClawVectorMemory

class OpenClawVectorIntegration:
    def __init__(self):
        self.memory = None
        self.initialized = False
        
    def initialize(self):
        """Initialize vector memory system"""
        try:
            self.memory = OpenClawVectorMemory(
                storage_path=os.path.expanduser("~/.openclaw/vector-memory/storage")
            )
            self.memory.initialize()
            self.initialized = True
            return True
        except Exception as e:
            print(f"Vector memory initialization failed: {e}")
            return False
    
    def add_conversation(self, text, metadata=None):
        """Add conversation to vector memory"""
        if not self.initialized:
            return False
        
        metadata = metadata or {}
        metadata.setdefault("type", "conversation")
        
        return self.memory.add_memory(text, metadata)
    
    def search_memory(self, query, max_tokens=1500):
        """Search for relevant context"""
        if not self.initialized:
            return ""
        
        results = self.memory.search_memory(
            query=query,
            max_tokens=max_tokens
        )
        
        return results.get("combined_text", "")
    
    def index_workspace(self):
        """Index all OpenClaw workspace files"""
        if not self.initialized:
            return False
        
        # Index memory files
        workspace_dir = os.path.expanduser("~/.openclaw/workspace")
        
        # MEMORY.md
        memory_file = os.path.join(workspace_dir, "MEMORY.md")
        if os.path.exists(memory_file):
            self.memory.index_file(memory_file)
        
        # Daily memory files
        memory_dir = os.path.join(workspace_dir, "memory")
        if os.path.exists(memory_dir):
            self.memory.index_directory(memory_dir)
        
        return True

Configuration

Memory Settings

Configure in your OpenClaw setup:

# Example configuration
VECTOR_MEMORY_CONFIG = {
    "storage_path": "~/.openclaw/vector-memory/storage",
    "model_name": "all-MiniLM-L6-v2",
    "chunk_size": 1000,  # Characters per chunk
    "chunk_overlap": 200,
    "max_tokens_per_query": 1500,
    "embedding_dimension": 384,  # all-MiniLM-L6-v2 uses 384 dimensions
    "distance_metric": "cosine",
    "collection_name": "openclaw_conversations"
}
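
The README doesn't show how this dict is consumed; one detail worth normalizing is the literal `~` in `storage_path`, which the integration class above expands with `os.path.expanduser`. A minimal sketch, assuming the keys map onto constructor keyword arguments (unconfirmed):

```python
import os

def normalized_config(cfg):
    """Return a copy of the config with storage_path expanded to an absolute path."""
    out = dict(cfg)
    out["storage_path"] = os.path.expanduser(out["storage_path"])
    return out

cfg = normalized_config({"storage_path": "~/.openclaw/vector-memory/storage",
                         "model_name": "all-MiniLM-L6-v2"})
# memory = OpenClawVectorMemory(storage_path=cfg["storage_path"],
#                               model_name=cfg["model_name"])  # hypothetical kwargs
```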

Automation

Add to heartbeat or cron:

# In HEARTBEAT.md or cron job
def vector_memory_maintenance():
    """Periodic vector memory maintenance"""
    integration = OpenClawVectorIntegration()
    if integration.initialize():
        # Index new conversations
        integration.index_workspace()
        
        # Clean up old entries
        integration.memory.cleanup_old_memories(days_old=30)
        
        # Report statistics
        stats = integration.memory.get_statistics()
        print(f"Vector memory: {stats['count']} memories, {stats['storage_mb']:.2f} MB")

Token Savings Analysis

Calculations

# Token savings calculation
def calculate_token_savings():
    full_context = 164000  # DeepSeek V3.2 full context
    vector_context = 20000  # Vector memory typical context
    savings = full_context - vector_context
    savings_pct = savings / full_context
    
    cost_per_million = 0.50  # DeepSeek V3.2 cost per million tokens
    cost_before = (full_context / 1_000_000) * cost_per_million
    cost_after = (vector_context / 1_000_000) * cost_per_million
    cost_savings = cost_before - cost_after
    
    return {
        "token_savings": savings,
        "savings_percentage": savings_pct,
        "cost_savings_per_request": cost_savings,
        "monthly_savings": cost_savings * 50 * 30  # 50 queries/day, 30 days
    }
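
Plugging the constants in reproduces the figures from the benefits table:

```python
# Recomputing the headline numbers from the constants above.
full_context, vector_context = 164_000, 20_000
cost_per_million = 0.50  # DeepSeek V3.2 cost per million tokens

savings = full_context - vector_context                  # 144,000 tokens
savings_pct = savings / full_context                     # ~0.878
cost_savings = (savings / 1_000_000) * cost_per_million  # dollars per request
monthly = cost_savings * 50 * 30                         # 50 queries/day, 30 days

print(f"{savings_pct:.1%}, ${cost_savings:.3f}/request, ${monthly:.2f}/month")
# → 87.8%, $0.072/request, $108.00/month
```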

Advanced Features

1. Multi-collection Support

Organize memories by type:

# Separate collections for different purposes
memory.create_collection("conversations")
memory.create_collection("documentation")
memory.create_collection("code_examples")

# Query specific collections
code_results = memory.search_memory(
    query="Python API example",
    collection_name="code_examples"
)

2. Hybrid Search

Combine semantic + keyword search:

def hybrid_search(query, alpha=0.7):
    """Combine semantic and keyword search"""
    semantic_results = memory.search_memory(query)
    keyword_results = keyword_search(query)  # Traditional search
    
    # Weighted combination
    combined = []
    for sem_result in semantic_results:
        combined.append({
            "text": sem_result["text"],
            "score": sem_result["score"] * alpha
        })
    
    for kw_result in keyword_results:
        combined.append({
            "text": kw_result["text"],
            "score": kw_result["score"] * (1 - alpha)
        })
    
    # Sort by combined score
    combined.sort(key=lambda x: x["score"], reverse=True)
    return combined[:10]

3. Memory Optimization

Optimize vector storage:

# Prune old memories
memory.prune_memories(
    min_score=0.3,  # Minimum similarity score to keep
    max_age_days=90  # Remove memories older than 90 days
)

# Compress vectors
memory.compress_vectors(
    method="pq",  # Product quantization
    bits=8  # 8-bit compression
)

Troubleshooting

Common Issues

  1. Import errors: Ensure all dependencies are installed

    pip install chromadb sentence-transformers numpy
    
  2. Out of memory: Reduce chunk size or use smaller model

    memory = OpenClawVectorMemory(chunk_size=500)
    
  3. Slow performance: Cache embeddings or use GPU

    # Use GPU if available
    model = SentenceTransformer('all-MiniLM-L6-v2', device='cuda')
    
  4. Windows symlink warning: Set environment variable

    set HF_HUB_DISABLE_SYMLINKS_WARNING=1
    

Performance Tuning

  • Chunk size: 500-1000 characters works best
  • Overlap: 10-20% of chunk size
  • Model: all-MiniLM-L6-v2 (384D) balances speed/accuracy
  • Results: 3-5 results per query is optimal
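
As an illustration of the chunking guidance above (the skill's actual chunker is not published, so this is a sketch): a character-based splitter using the recommended 1000-character chunks with 200-character (20%) overlap:

```python
def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into chunk_size-character chunks, each sharing `overlap`
    characters with the previous chunk so context isn't cut mid-thought."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk would then be embedded and stored as its own document in ChromaDB.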

Best Practices

1. Regular Maintenance

# Weekly cleanup
0 2 * * 0 python vector_memory_maintenance.py

2. Backup Strategy

import os
from datetime import datetime

def backup_vector_memory():
    """Backup vector memory database"""
    backup_dir = os.path.expanduser("~/.openclaw/vector-memory/backups")
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    backup_path = f"{backup_dir}/chromadb_backup_{timestamp}"

    memory.backup(backup_path)
    print(f"Backup created: {backup_path}")

3. Monitoring

Track usage metrics:

def monitor_vector_memory():
    """Monitor vector memory performance"""
    stats = memory.get_statistics()
    
    metrics = {
        "memory_count": stats["count"],
        "storage_mb": stats["storage_mb"],
        "avg_query_time": stats["avg_query_time"],
        "cache_hit_rate": stats["cache_hit_rate"]
    }
    
    # Log to monitoring system
    log_metrics(metrics)

Deployment Checklist

  • [ ] Install dependencies
  • [ ] Initialize vector memory
  • [ ] Index existing conversations
  • [ ] Test search functionality
  • [ ] Integrate with OpenClaw
  • [ ] Set up automated indexing
  • [ ] Monitor performance
  • [ ] Backup strategy in place

Resources

License

MIT License - Free to use, modify, and distribute.


Ready to reduce your token usage by 75-88%? Start using vector memory today!

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples
curl -s "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/snapshot"
curl -s "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/contract"
curl -s "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW

Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T00:21:04.665Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
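
The `retryPolicy` above can be honored with a small client-side wrapper. A sketch in Python, assuming the failure surfaces with the condition name as the error message; the error type and the callable you pass in are stand-ins, not part of any published client:

```python
import time

RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}
BACKOFF_MS = [500, 1500, 3500]  # from retryPolicy.backoffMs

def with_retry(call, max_attempts=3, sleep=time.sleep):
    """Invoke `call` up to max_attempts times, backing off per the guide's
    retryPolicy; non-retryable errors propagate immediately."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError as err:  # stand-in for a real HTTP error type
            if str(err) not in RETRYABLE or attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000)
```

For example, `with_retry(lambda: fetch_snapshot())` would retry a hypothetical `fetch_snapshot` on HTTP 429/503 and timeouts, then give up after the third attempt.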

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "organize",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:organize|supported|profile"
}
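
`flattenedTokens` appears to encode each capability row as a space-separated `type:key|support|confidenceSource` triple; that format is inferred from this single example. A parser sketch:

```python
def parse_flattened_tokens(s):
    """Parse 'type:key|support|source' tokens back into row dicts."""
    rows = []
    for token in s.split():
        head, support, source = token.split("|")
        row_type, key = head.split(":", 1)
        rows.append({"key": key, "type": row_type,
                     "support": support, "confidenceSource": source})
    return rows
```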

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Zanderh Code",
    "href": "https://github.com/ZanderH-code/openclaw-vector-memory",
    "sourceUrl": "https://github.com/ZanderH-code/openclaw-vector-memory",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:24:35.892Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-25T02:24:35.892Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/zanderh-code-openclaw-vector-memory/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
