Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Production-ready persistent memory for AI agents. Works with LangChain, CrewAI, AutoGen, and raw Anthropic/OpenAI SDKs – in 3 lines of code. agentmemory 🧠 **Your AI agent forgets everything. AgentMemory fixes that in 3 lines.** **Claude Code / Cursor users** – give your AI coding assistant a permanent memory for your codebase in 2 minutes. --- The Problem Every time your agent starts a new session, it starts from zero. This isn't an AI limitation. It's a missing infrastructure layer. --- The Solution **That's it.** Memory persists to disk. Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
agent-memory is best for CrewAI and multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require deterministic execution, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB REPOS, runtime-metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Pinexai
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/15/2026.
Setup snapshot
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Pinexai
Protocol compatibility
OpenClaw
Adoption signal
5 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
python
python
# What happens today – every single time
agent = MyAgent()
agent.chat("Hi, I'm Alice and I'm building a fraud detection system")
# → "Nice to meet you, Alice!"
# Next session...
agent = MyAgent()
agent.chat("What's my name?")
# → "I don't know your name – could you tell me?" ❌
python
from agentmemory import MemoryStore
memory = MemoryStore(agent_id="my-agent")
memory.remember("User's name is Alice, building a fraud detection system in Python")
context = memory.get_context("What do we know about the user?")
# → "[Memory Context]\n- User's name is Alice, building a fraud detection system in Python"
bash
# Minimal install (SQLite episodic memory only, no external dependencies)
pip install agentcortex
# With semantic search + local embeddings (recommended)
pip install "agentcortex[chromadb,local]"
# Batteries included
pip install "agentcortex[all]"
python
from agentmemory import MemoryStore
import anthropic
memory = MemoryStore(agent_id="my-agent")
client = anthropic.Anthropic()
def chat(user_input: str) -> str:
memory.add_message("user", user_input)
response = client.messages.create(
model="claude-opus-4-6",
max_tokens=1024,
system=f"You are a helpful assistant.\n\n{memory.get_context(user_input)}",
messages=memory.get_messages(),
)
reply = response.content[0].text
memory.add_message("assistant", reply)
return reply
chat("Hi, I'm Alice and I'm building a fraud detection system")
chat("I prefer concise code examples")
# ... restart Python ...
chat("What do you know about me?")
# → "You're Alice, and you're building a fraud detection system in Python.
# You prefer concise code examples." ✅
python
from agentmemory.adapters.openai import MemoryOpenAI
client = MemoryOpenAI(agent_id="my-agent")
client.chat("Hi, I'm Alice")
client.chat("I'm building a fraud detection system")
# Next session...
client.chat("What's my name?")  # → "Your name is Alice." ✅
python
from agentmemory import MemoryStore
from agentmemory.adapters.langchain import MemoryHistory, inject_memory_context
from langchain_anthropic import ChatAnthropic
memory = MemoryStore(agent_id="my-agent")
history = MemoryHistory(memory_store=memory)
llm = ChatAnthropic(model="claude-opus-4-6")
history.add_user_message("Hello, I'm Alice")
messages = inject_memory_context(history.messages, memory, query="Alice")
response = llm.invoke(messages)
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB REPOS
Editorial quality
ready
Your AI agent forgets everything. AgentMemory fixes that in 3 lines.
Claude Code / Cursor users – give your AI coding assistant a permanent memory for your codebase in 2 minutes. Jump to MCP setup →
Every time your agent starts a new session, it starts from zero.
# What happens today – every single time
agent = MyAgent()
agent.chat("Hi, I'm Alice and I'm building a fraud detection system")
# → "Nice to meet you, Alice!"
# Next session...
agent = MyAgent()
agent.chat("What's my name?")
# → "I don't know your name – could you tell me?" ❌
This isn't an AI limitation. It's a missing infrastructure layer.
from agentmemory import MemoryStore
memory = MemoryStore(agent_id="my-agent")
memory.remember("User's name is Alice, building a fraud detection system in Python")
context = memory.get_context("What do we know about the user?")
# → "[Memory Context]\n- User's name is Alice, building a fraud detection system in Python"
That's it. Memory persists to disk. It's there next session, and the one after that.
# Minimal install (SQLite episodic memory only, no external dependencies)
pip install agentcortex
# With semantic search + local embeddings (recommended)
pip install "agentcortex[chromadb,local]"
# Batteries included
pip install "agentcortex[all]"
from agentmemory import MemoryStore
import anthropic
memory = MemoryStore(agent_id="my-agent")
client = anthropic.Anthropic()
def chat(user_input: str) -> str:
memory.add_message("user", user_input)
response = client.messages.create(
model="claude-opus-4-6",
max_tokens=1024,
system=f"You are a helpful assistant.\n\n{memory.get_context(user_input)}",
messages=memory.get_messages(),
)
reply = response.content[0].text
memory.add_message("assistant", reply)
return reply
chat("Hi, I'm Alice and I'm building a fraud detection system")
chat("I prefer concise code examples")
# ... restart Python ...
chat("What do you know about me?")
# → "You're Alice, and you're building a fraud detection system in Python.
# You prefer concise code examples." ✅
from agentmemory.adapters.openai import MemoryOpenAI
client = MemoryOpenAI(agent_id="my-agent")
client.chat("Hi, I'm Alice")
client.chat("I'm building a fraud detection system")
# Next session...
client.chat("What's my name?")  # → "Your name is Alice." ✅
from agentmemory import MemoryStore
from agentmemory.adapters.langchain import MemoryHistory, inject_memory_context
from langchain_anthropic import ChatAnthropic
memory = MemoryStore(agent_id="my-agent")
history = MemoryHistory(memory_store=memory)
llm = ChatAnthropic(model="claude-opus-4-6")
history.add_user_message("Hello, I'm Alice")
messages = inject_memory_context(history.messages, memory, query="Alice")
response = llm.invoke(messages)
from agentmemory import MemoryStore
from agentmemory.adapters.crewai import CrewMemoryCallback, get_memory_context_for_agent
from crewai import Agent, Task
memory = MemoryStore(agent_id="research-crew")
agent = Agent(
role="Researcher",
goal="Research AI topics",
backstory=get_memory_context_for_agent(memory, "Researcher") + "\nExpert researcher.",
)
task = Task(
description="Research memory systems for AI agents",
expected_output="Structured research findings",
agent=agent,
callback=CrewMemoryCallback(memory), # Auto-stores task output
)
AgentMemory uses a three-tier architecture that mirrors how human memory works:
┌───────────────────────────────────────────────────────────┐
│                     Your LLM / Agent                      │
└─────────────────────┬─────────────────────────────────────┘
                      │ get_context() / add_message()
┌─────────────────────▼─────────────────────────────────────┐
│                      MemoryStore                          │
│                                                           │
│  ┌─────────────┐   ┌──────────────┐   ┌───────────────┐   │
│  │   Working   │   │   Episodic   │   │   Semantic    │   │
│  │   Memory    │   │   Memory     │   │   Memory      │   │
│  │             │   │              │   │               │   │
│  │   Current   │   │   Recent     │   │   Long-term   │   │
│  │   session   │   │   history    │   │   knowledge   │   │
│  │   (in-RAM)  │   │   (SQLite)   │   │   (ChromaDB)  │   │
│  │             │   │              │   │               │   │
│  │   Auto-     │   │   Persists   │   │   Semantic    │   │
│  │  compresses │   │   forever    │   │   search      │   │
│  └─────────────┘   └──────────────┘   └───────────────┘   │
└───────────────────────────────────────────────────────────┘
Working Memory – the current conversation window. Automatically compresses old messages into summaries when it nears the token limit.
Episodic Memory – recent interactions stored in SQLite. No setup required. Evicts least-important entries when full.
Semantic Memory – long-term facts stored as vector embeddings (ChromaDB). Retrieved by meaning, not keyword.
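The three-tier split above can be sketched with a self-contained toy: an in-RAM working list with a budget that triggers compression, a SQLite table standing in for the episodic tier, and word-overlap scoring standing in for vector search. The class and method names mirror the documented API but are illustrative, not the library's internals.

```python
import sqlite3

class TinyMemory:
    """Toy three-tier store: working (RAM), episodic (SQLite), semantic (overlap)."""

    def __init__(self, agent_id, max_working=4):
        self.agent_id = agent_id
        self.working = []                      # current-session messages
        self.max_working = max_working         # budget before compression triggers
        self.db = sqlite3.connect(":memory:")  # episodic tier
        self.db.execute("CREATE TABLE episodic (content TEXT, importance INTEGER)")

    def add_message(self, role, content):
        self.working.append({"role": role, "content": content})
        if len(self.working) > self.max_working:
            self._compress()

    def _compress(self):
        # Fold the oldest messages into a one-line summary kept at the front.
        old, self.working = self.working[:-2], self.working[-2:]
        summary = "summary: " + " | ".join(m["content"] for m in old)
        self.working.insert(0, {"role": "system", "content": summary})

    def remember(self, content, importance=5):
        self.db.execute("INSERT INTO episodic VALUES (?, ?)", (content, importance))

    def recall(self, query, n=5):
        # Stand-in for semantic search: rank rows by word overlap with the query.
        words = set(query.lower().split())
        rows = self.db.execute("SELECT content, importance FROM episodic").fetchall()
        ranked = sorted(rows, key=lambda r: -len(words & set(r[0].lower().split())))
        return [r[0] for r in ranked[:n]]

    def get_context(self, query):
        return "[Memory Context]\n" + "\n".join("- " + m for m in self.recall(query))
```

A real implementation would summarize with an LLM and embed with a vector model; the shape of the data flow is the point here.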
Call MemoryStore(agent_id="x") and you're running.
MemoryStore
MemoryStore(
agent_id: str, # Unique ID – memories are namespaced by this
persist_dir: str = "~/.agentmemory", # Where to store memories
max_working_tokens: int = 4096, # Token budget before compression triggers
semantic_backend: str = "chromadb", # "chromadb" | "qdrant"
embedding_provider: str = "sentence-transformers", # "sentence-transformers" | "openai"
llm_provider: str = "anthropic", # LLM for compression: "anthropic" | "openai"
enable_dedup: bool = True, # Deduplicate before storing
auto_compress: bool = True, # Auto-compress when window fills
)
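The enable_dedup flag implies content is normalized and fingerprinted before storage. A minimal sketch of that idea, assuming hash-based dedup (DedupStore and normalize are hypothetical names, not part of the package):

```python
import hashlib

def normalize(text: str) -> str:
    # Case-fold and collapse whitespace so trivial variants hash identically.
    return " ".join(text.lower().split())

class DedupStore:
    def __init__(self):
        self.items, self._seen = [], set()

    def remember(self, content: str) -> bool:
        key = hashlib.sha256(normalize(content).encode()).hexdigest()
        if key in self._seen:
            return False          # duplicate: skip the write
        self._seen.add(key)
        self.items.append(content)
        return True
```

Exact-hash dedup only catches near-verbatim repeats; a production store might also compare embeddings.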
| Method | Description |
|---|---|
| memory.remember(content, importance=5) | Store a fact in episodic + semantic memory |
| memory.recall(query, n=5) | Retrieve top-n relevant memories by meaning |
| memory.get_context(query, max_tokens=500) | Get formatted context string for system prompt |
| memory.add_message(role, content) | Track a conversation turn in working memory |
| memory.get_messages() | Get current working memory as [{role, content}] |
| memory.compress() | Manually trigger compression of working memory |
| memory.stats() | Get memory usage stats across all tiers |
| memory.clear(tiers=None) | Clear specific or all memory tiers |
Stop re-explaining your codebase every session. Claude will remember architecture decisions, bug fixes, and your preferences – automatically.
The problem: Every time you open Claude Code, it starts from zero. You repeat the same context, re-explain the same constraints, watch it make the same mistakes.
The fix: 2-minute setup. Claude permanently remembers everything it learns about your project.
Step 1 – Install:
pip install "agentcortex[mcp]"
Step 2 – Create .mcp.json in your project root:
{
"mcpServers": {
"agentmemory": {
"type": "stdio",
"command": "python",
"args": ["-m", "agentmemory.mcp_server"],
"env": {
"AGENTMEMORY_AGENT_ID": "your-project-name"
}
}
}
}
Step 3 – Open Claude Code and run /mcp – you'll see agentmemory connected with 5 tools. Done.
Session 1 – You: "Fix the race condition in payment/process_transaction.py"
Claude fixes it, then stores:
remember("payment/process_transaction.py: race condition fixed with DB-level
lock. NEVER use in-memory locks – they don't survive multiple workers.",
importance=9)
── one week later ──────────────────────────────────────────────────────────────
Session 2 – You: "Add retry logic to the payment module"
Claude automatically calls: get_context("payment module retry logic")
Retrieves: "process_transaction.py: use DB-level locks, not in-memory"
Claude: "I remember this module had a concurrency issue. I'll make sure
the retry logic respects the DB-level lock..."
No re-explaining. No repeated mistakes. Claude gets smarter about your codebase over time.
| Tool | What it does |
|---|---|
| get_context(query, max_tokens) | Returns relevant memories for the current task – call at session start |
| remember(content, importance) | Store a fact, decision, or gotcha (importance 1–10) |
| recall(query, n) | Semantic search over all stored memories |
| memory_stats() | Show memory counts across working / episodic / semantic tiers |
| clear_memory(tiers) | Reset memories |
| Variable | Default | Description |
|---|---|---|
| AGENTMEMORY_AGENT_ID | "default" | Memory namespace – one per project |
| AGENTMEMORY_PERSIST_DIR | ~/.agentmemory | Where memories are stored on disk |
| AGENTMEMORY_LLM_PROVIDER | "anthropic" | LLM for auto-compression: "anthropic" or "openai" |
Works with Claude Code, Cursor, and any MCP-compatible AI coding assistant.
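The variables in the table above could be read with a small settings loader that applies the documented defaults; a hedged sketch (load_settings is a hypothetical helper, not part of the package):

```python
import os

def load_settings(env=os.environ):
    # Mirror the documented variables and their defaults.
    return {
        "agent_id": env.get("AGENTMEMORY_AGENT_ID", "default"),
        "persist_dir": env.get("AGENTMEMORY_PERSIST_DIR",
                               os.path.expanduser("~/.agentmemory")),
        "llm_provider": env.get("AGENTMEMORY_LLM_PROVIDER", "anthropic"),
    }
```

Passing the environment as a parameter keeps the loader testable without mutating os.environ.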
Give AutoGen agents persistent memory that survives across sessions.
from agentmemory import MemoryStore
from agentmemory.adapters.autogen import AutoGenMemoryHook, get_autogen_memory_context
import autogen
memory = MemoryStore(agent_id="my-autogen-agent")
# Inject past context into the agent's system_message
context = get_autogen_memory_context(memory, role="Research Assistant",
goal="literature review on LLMs")
assistant = autogen.AssistantAgent(
name="researcher",
system_message=context + "\nYou are a helpful research assistant.",
llm_config={"model": "gpt-4o-mini"},
)
# Hook captures every reply and stores it in memory
hook = AutoGenMemoryHook(memory, importance=6)
assistant.register_reply(
trigger=autogen.ConversableAgent,
reply_func=hook.on_agent_reply,
position=0,
)
Install: pip install "agentcortex[autogen]"
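The AutoGen hook above boils down to a callback that records every agent reply into memory while passing it through unchanged. A framework-free sketch of that pattern (MemoryHook is illustrative, not the package's AutoGenMemoryHook):

```python
class MemoryHook:
    """Reply hook: store each agent reply with an importance score."""

    def __init__(self, store, importance=6):
        self.store, self.importance = store, importance

    def on_agent_reply(self, reply):
        self.store.append((reply, self.importance))  # record for later recall
        return reply  # pass the reply through unchanged
```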
Scale to millions of vectors with a dedicated vector database.
from agentmemory import MemoryStore
# docker run -p 6333:6333 qdrant/qdrant
memory = MemoryStore(
agent_id="my-agent",
semantic_backend="qdrant",
qdrant_url="http://localhost:6333", # or Qdrant Cloud URL
embedding_provider="sentence-transformers",
)
memory.remember("Production architecture uses microservices", importance=8)
results = memory.recall("architecture")
Install: pip install "agentcortex[qdrant]"
Back up and restore episodic memories across machines or agent instances.
from agentmemory import MemoryStore
memory = MemoryStore(agent_id="my-agent")
memory.remember("PostgreSQL is our main database", importance=8)
# Export to JSON file
memory.export_json("backup.json")
# Restore on another machine / new agent
new_memory = MemoryStore(agent_id="new-agent")
count = new_memory.import_json("backup.json")
print(f"Imported {count} memories")
# Merge instead of replacing
new_memory.import_json("backup.json", merge=True)
# Or work with the dict directly
data = memory.export_json() # no path – returns dict only
new_memory.import_json(data)
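The export/import semantics described here (replace by default, merge=True to append, accepting either a path or a dict) can be mimicked with plain JSON. A sketch under those assumed semantics, with illustrative function names:

```python
import json

def export_memories(memories, path=None):
    # Wrap the memory list in a versioned envelope; write to disk if asked.
    data = {"version": 1, "memories": list(memories)}
    if path:
        with open(path, "w") as f:
            json.dump(data, f)
    return data

def import_memories(store, data, merge=False):
    # Accept a file path or an already-parsed dict.
    if isinstance(data, str):
        with open(data) as f:
            data = json.load(f)
    if not merge:
        store.clear()          # replace semantics by default
    store.extend(data["memories"])
    return len(data["memories"])
```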
Inspect and manage memories from the command line.
# Inspect stored memories
agentmemory inspect --agent-id my-project
# agentmemory – agent: my-project
# ────────────────────────────────────────
# EPISODIC MEMORY (3 entries)
# ────────────────────────────────────────
# #  IMP  Created              Content
# 1  9    2026-02-28 14:23:01  We use PostgreSQL for relational...
# 2  7    2026-02-27 09:14:55  payment/process_transaction.py h...
# 3  5    2026-02-26 18:30:12  User prefers functional style ove...
# Export memories to JSON
agentmemory export --agent-id my-project --output memories.json
# Import memories (restores; use --merge to add alongside existing)
agentmemory import memories.json --agent-id new-project --merge
Install: pip install agentcortex (the CLI is always included)
Use agentmemory in FastAPI, aiohttp, or any async Python application.
import asyncio
from agentmemory import AsyncMemoryStore
async def main():
# Identical API to MemoryStore – just add await
memory = AsyncMemoryStore(agent_id="my-async-agent")
await memory.remember("User prefers Python over JavaScript", importance=7)
results = await memory.recall("tech stack")
context = await memory.get_context("What do we know?")
# Export / import work the same way
data = await memory.export_json()
await memory.import_json(data)
memory.close()
# Or use as an async context manager
async def with_context_manager():
async with AsyncMemoryStore(agent_id="my-agent") as memory:
await memory.remember("Context manager closes executor automatically")
ctx = await memory.get_context()
print(ctx)
asyncio.run(main())
Install: pip install agentcortex (AsyncMemoryStore is always included)
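One plausible way to build such an async facade is to run a synchronous store on a thread pool and await the results. A self-contained sketch of that design (AsyncStore and SyncStore here are toys, not the package's AsyncMemoryStore):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

class SyncStore:
    def __init__(self):
        self.items = []

    def remember(self, content):
        self.items.append(content)

    def recall(self, query):
        return [m for m in self.items if query.lower() in m.lower()]

class AsyncStore:
    """Async facade over a sync store via a single-worker thread pool."""

    def __init__(self):
        self._store = SyncStore()
        self._pool = ThreadPoolExecutor(max_workers=1)

    async def _run(self, fn, *args):
        return await asyncio.get_running_loop().run_in_executor(self._pool, fn, *args)

    async def remember(self, content):
        await self._run(self._store.remember, content)

    async def recall(self, query):
        return await self._run(self._store.recall, query)

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        self._pool.shutdown(wait=True)  # context manager closes the executor
```

A single worker serializes writes, which avoids locking inside the sync store.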
|  | MemGPT | LangChain Memory | AgentMemory |
|---|---|---|---|
| Framework | MemGPT only | LangChain only | Any framework |
| Composable library | No | Partial | Yes |
| Local-first | Partial | No | Yes |
| Auto-compression | Yes | No | Yes |
| Semantic search | Yes | Partial | Yes |
| Deduplication | No | No | Yes |
| PyPI installable | No | Yes | Yes |
| Zero config | No | Partial | Yes |
- AutoGen adapter (pip install "agentcortex[autogen]")
- Qdrant backend (pip install "agentcortex[qdrant]")
- memory.export_json() / memory.import_json()
- agentmemory inspect / export / import CLI
- AsyncMemoryStore with full await API
- MCP server (pip install "agentcortex[mcp]")
Contributions are welcome. See CONTRIBUTING.md.
git clone https://github.com/pinakimishra95/agent-memory
cd agent-memory
pip install -e ".[dev]"
pytest tests/
MIT. See LICENSE.
Star this repo if you're tired of your agents forgetting everything. ⭐
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust"
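The invocation guide further down publishes a retry policy for these endpoints: 3 attempts, 500/1500/3500 ms backoff, retrying on HTTP 429, HTTP 503, and network timeouts. A minimal client-side sketch of that policy covering the HTTP-status cases (fetch_with_retry is an illustrative helper, not a published client):

```python
import time

RETRYABLE = {429, 503}            # per the published retryableConditions
BACKOFF_MS = [500, 1500, 3500]    # per the published backoffMs schedule

def fetch_with_retry(call, max_attempts=3, sleep=time.sleep):
    """Call `call()` -> (status, body); retry retryable statuses with backoff."""
    for attempt in range(max_attempts):
        status, body = call()
        if status == 200:
            return body
        if status not in RETRYABLE or attempt == max_attempts - 1:
            raise RuntimeError(f"HTTP {status} after {attempt + 1} attempt(s)")
        sleep(BACKOFF_MS[attempt] / 1000)
```

Injecting the sleep function keeps backoff testable; a real client would also catch socket timeouts as retryable.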
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | ⭐ Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_REPOS",
"generatedAt": "2026-04-17T00:36:58.622Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Pinexai",
"href": "https://github.com/pinexai/agent-memory",
"sourceUrl": "https://github.com/pinexai/agent-memory",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:38.893Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:38.893Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "5 GitHub stars",
"href": "https://github.com/pinexai/agent-memory",
"sourceUrl": "https://github.com/pinexai/agent-memory",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:38.893Z",
"isPublic": true
},
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]