Agent Dossier · GITHUB OPENCLEW · Safety 94/100

Xpersona Agent

memorylake

Search, retrieve, and analyze data from a MemoryLake Streamable HTTP MCP Server — the memory layer for AI Agents that provides intelligent unstructured file content retrieval and data analysis. Access the server directly via HTTP/curl (not via pre-configured MCP tools). Use this skill when the user wants to: (1) search for information across uploaded files in MemoryLake, (2) retrieve specific documents or data from MemoryLake, (3) analyze data stored in MemoryLake using Python code execution, (4) explore what's available in a MemoryLake memorylake, (5) ask natural-language questions about their files, or (6) perform data analysis, aggregation, or comparison across MemoryLake documents. Trigger phrases include: "search my files", "find in memorylake", "analyze my data", "what files do I have", "look up", "summarize my documents", "compare data across files", "run analysis on my data".

MCP · self-declared
4 GitHub stars · Trust evidence available
git clone https://github.com/memorylake-ai/memorylake-skills.git

Overall rank

#33

Adoption

4 GitHub stars

Trust

Unknown

Freshness

Last checked Apr 15, 2026

Best For

memorylake is best suited to create and expire workflows where MCP compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Overview

Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.

Verified · editorial-content

Overview

Executive Summary

Search, retrieve, and analyze data from a MemoryLake Streamable HTTP MCP Server — the memory layer for AI Agents (see the full description above). Capability contract not published. No trust telemetry is available yet. 4 GitHub stars reported by the source. Last updated Apr 15, 2026.

No verified compatibility signals · 4 GitHub stars

Trust score

Unknown

Compatibility

MCP

Freshness

Apr 15, 2026

Vendor

Memorylake Ai

Artifacts

0

Benchmarks

0

Last release

Unpublished

Install & run

Setup Snapshot

git clone https://github.com/memorylake-ai/memorylake-skills.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence & Timeline

Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.

Verified · editorial-content

Public facts

Evidence Ledger

Vendor (1)

Vendor

Memorylake Ai

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

MCP

contract · medium
Observed Apr 15, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

4 GitHub stars

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Artifacts & Docs

Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.

Self-declared · GITHUB OPENCLEW

Captured outputs

Artifacts Archive

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

text

https://app.memorylake.ai/openapi/memorylake

text

https://ai.data.cloud/memorylake/mcp/v1?apikey=<secret>

bash

export MEMORYLAKE_BASE_URL="https://app.memorylake.ai/openapi/memorylake"
export MEMORYLAKE_API_KEY="<your api key>"
export MEMORYLAKE_USER_ID="<your user id>"

bash

# Initialize a session (required before any tool calls)
SESSION=$(./scripts/memorylake_client.sh "$MCP_URL" init)

# Call any tool
./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" <tool_name> ['<json_arguments>']

bash

./scripts/memorylake_rest_client.sh projects:create '{
  "name": "My Research Project",
  "description": "Optional description"
}'

bash

./scripts/memorylake_rest_client.sh projects:list

Editorial read

Docs & README

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: memorylake
description: >
  Search, retrieve, and analyze data from a MemoryLake Streamable HTTP MCP Server — the memory layer for AI Agents that provides intelligent unstructured file content retrieval and data analysis. Access the server directly via HTTP/curl (not via pre-configured MCP tools). Use this skill when the user wants to: (1) search for information across uploaded files in MemoryLake, (2) retrieve specific documents or data from MemoryLake, (3) analyze data stored in MemoryLake using Python code execution, (4) explore what's available in a MemoryLake memorylake, (5) ask natural-language questions about their files, or (6) perform data analysis, aggregation, or comparison across MemoryLake documents. Trigger phrases include: "search my files", "find in memorylake", "analyze my data", "what files do I have", "look up", "summarize my documents", "compare data across files", "run analysis on my data".

MemoryLake Skill

MemoryLake is the memory layer for AI Agents. It ingests unstructured files (Excel, PDF, text, etc.), chunks and indexes them, and exposes them through a Streamable HTTP MCP Server for intelligent retrieval and analysis.

This repo also includes an up-to-date OpenAPI spec for MemoryLake's Project/Drive APIs (see references/memorylake-openapi.json).

Prerequisites

1) Get a MemoryLake API key

  1. Go to https://app.memorylake.ai/ and apply for a MemoryLake API key.
  2. Use the REST API base URL:
     https://app.memorylake.ai/openapi/memorylake
  3. Authenticate requests with:
  • Authorization: Bearer <your API key>
  • X-User-ID: <your user id> (required for most endpoints)

2) (Later) Get a Streamable HTTP MCP secret

After you create a project, you can create a project API key that becomes a Streamable HTTP MCP secret:

https://ai.data.cloud/memorylake/mcp/v1?apikey=<secret>

Client Scripts

REST API client (projects, uploads, documents)

Use scripts/memorylake_rest_client.sh to:

  • Create/list projects
  • Create a project API key (MCP secret)
  • Upload documents (multipart)
  • Quick-add documents to a project
  • Poll project documents until status=okay

It expects env vars:

export MEMORYLAKE_BASE_URL="https://app.memorylake.ai/openapi/memorylake"
export MEMORYLAKE_API_KEY="<your api key>"
export MEMORYLAKE_USER_ID="<your user id>"

MCP client (search + fetch + code runner)

Use scripts/memorylake_client.sh for Streamable HTTP MCP interactions. It handles MCP session initialization, JSON-RPC protocol, and SSE response parsing.

# Initialize a session (required before any tool calls)
SESSION=$(./scripts/memorylake_client.sh "$MCP_URL" init)

# Call any tool
./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" <tool_name> ['<json_arguments>']

Session management: Sessions can expire if idle. If a call returns empty or an error, re-initialize with init before retrying. Minimize delay between init and the first tool call.
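Given these expiry rules, a thin wrapper can automate the re-init-and-retry step. The helper below is a hypothetical sketch, not part of the repo's scripts; it assumes an empty response is the symptom of an expired session, as described above.

```shell
# Hypothetical helper: call a tool via memorylake_client.sh; if the output
# is empty (typical sign of an expired session), re-initialize the session
# and retry the call once.
call_tool() {
  url="$1"; shift
  out=$(./scripts/memorylake_client.sh "$url" "$SESSION" "$@")
  if [ -z "$out" ]; then
    SESSION=$(./scripts/memorylake_client.sh "$url" init)
    out=$(./scripts/memorylake_client.sh "$url" "$SESSION" "$@")
  fi
  printf '%s\n' "$out"
}

# Usage sketch:
# SESSION=$(./scripts/memorylake_client.sh "$MCP_URL" init)
# call_tool "$MCP_URL" get_memorylake_metadata
```

Because the wrapper rewrites the SESSION variable on retry, subsequent calls reuse the fresh session rather than the stale one.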

Available Tools

| Tool | Arguments | Purpose |
|------|-----------|---------|
| get_memorylake_metadata | (none) | Explore memorylake: file counts by type, sample memories |
| search_memory | {"parsed_query":{...}} | Semantic + keyword search across all files |
| fetch_memory | {"memory_ids":["id1",...]} | Detailed metadata for specific memories |
| create_memory_code_runner | (none) | Create a Python executor, returns executor_id |
| run_memory_code | {"executor_id":"...","code":"..."} | Execute Python code against data |

See:

  • references/mcp-tools.md for detailed MCP tool parameters and response formats.
  • references/memorylake-openapi.json for the REST API surface (Projects/Drives/Connectors/etc.).

Note: The REST API requires X-User-ID on most endpoints (per OpenAPI spec).

Typical End-to-End Workflow (REST → MCP)

Follow this flow to create a project, ingest documents, then query/analyze them via MCP.

1) Create a project (REST)

./scripts/memorylake_rest_client.sh projects:create '{
  "name": "My Research Project",
  "description": "Optional description"
}'

2) List projects (REST)

./scripts/memorylake_rest_client.sh projects:list

3) Create a project API key (this becomes the MCP secret) (REST)

./scripts/memorylake_rest_client.sh projects:create-apikey <project_id> '{"description":"mcp"}'

Save the returned secret locally. That secret is used like:

https://ai.data.cloud/memorylake/mcp/v1?apikey=<secret>

4) Upload a document (multipart) (REST)

# 1) Ask server for presigned part upload URLs (file_size in bytes)
./scripts/memorylake_rest_client.sh upload:create-multipart '{"file_size": 123456}' > upload.json

# 2) Upload parts to presigned URLs, then complete multipart
./scripts/memorylake_rest_client.sh upload:complete-multipart upload.json /path/to/file.pdf

You will end up with an object_key (from create-multipart), which is the server-side key for the uploaded file.
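Since the quick-add step below needs that object_key, a small extractor keeps the plumbing explicit. This is a hypothetical convenience (the top-level "object_key" field name is taken from the description above; verify it against the actual response shape):

```shell
# Hypothetical helper: print the object_key from a saved
# create-multipart response file (assumes a top-level "object_key" field).
get_object_key() {
  python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))["object_key"])' "$1"
}

# OBJECT_KEY=$(get_object_key upload.json)
```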

5) Add the uploaded document into the project (quick-add) (REST)

./scripts/memorylake_rest_client.sh projects:quick-add <project_id> '{
  "object_key": "<object_key>",
  "file_name": "file.pdf"
}'

If you have multiple documents, upload + quick-add one by one.

6) Poll project documents until processed (REST)

Check:

./scripts/memorylake_rest_client.sh projects:list-documents <project_id>

Document status values: error, okay, running, pending.

Recommended polling interval: 5s until all documents are okay.
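A sketch of that polling loop, written so the listing command is pluggable. The grep heuristic on status strings is an assumption about the JSON shape; adapt it to the real response. The loop also exits once no document is pending/running, so error statuses surface to the caller instead of spinning forever.

```shell
# Poll a document listing every 5s until nothing is pending/running.
# "$@" is the listing command, e.g.:
#   poll_until_done ./scripts/memorylake_rest_client.sh projects:list-documents "$PROJECT_ID"
poll_until_done() {
  attempts=0
  while [ "$attempts" -lt 120 ]; do   # give up after ~10 minutes
    docs=$("$@")
    if ! printf '%s' "$docs" | grep -Eq '"status"[[:space:]]*:[[:space:]]*"(pending|running)"'; then
      printf '%s\n' "$docs"
      return 0
    fi
    sleep 5
    attempts=$((attempts + 1))
  done
  echo "timed out waiting for documents" >&2
  return 1
}
```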

7) Use Streamable HTTP MCP to search/retrieve/analyze

MCP_URL="https://ai.data.cloud/memorylake/mcp/v1?apikey=<secret>"
SESSION=$(./scripts/memorylake_client.sh "$MCP_URL" init)

./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" get_memorylake_metadata

Then do search/fetch/code-runner as usual.


MCP Workflow (inside the MCP phase)

1. Initialize session and orient

MCP_URL="<user-provided-url>"
SESSION=$(./scripts/memorylake_client.sh "$MCP_URL" init)
./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" get_memorylake_metadata

2. Search for relevant content

Build a structured query with both BM25 keywords and a semantic dense query:

./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" search_memory '{
  "parsed_query": {
    "bm25_cleaned_query": "recruitment positions master degree",
    "named_entities": [],
    "bm25_keywords": ["recruitment", "positions", "master", "degree"],
    "bm25_boost_keywords": ["master", "recruitment"],
    "rewritten_query_for_dense_model": "Job positions requiring a master degree or higher"
  }
}'

Query construction tips:

  • Extract all named entities into named_entities and bm25_keywords
  • Clean BM25 query: remove stop words, punctuation, normalize spaces
  • Dense query: rewrite to capture intent, expand with synonyms
  • Boost keywords: 3-5 most distinctive terms
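These tips can be folded into a small builder so the JSON argument stays well-formed. `build_parsed_query` below is a hypothetical convenience, not part of the repo; it naively takes the first three keywords as boost keywords, where the tips above suggest hand-picking the most distinctive terms.

```shell
# Assemble a search_memory argument: $1 = cleaned BM25 query,
# $2 = dense (semantic) rewrite, remaining args = BM25 keywords.
build_parsed_query() {
  cleaned="$1"; dense="$2"; shift 2
  python3 - "$cleaned" "$dense" "$@" <<'PY'
import json, sys
cleaned, dense, *keywords = sys.argv[1:]
print(json.dumps({"parsed_query": {
    "bm25_cleaned_query": cleaned,
    "named_entities": [],
    "bm25_keywords": keywords,
    "bm25_boost_keywords": keywords[:3],
    "rewritten_query_for_dense_model": dense,
}}))
PY
}

# ./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" search_memory \
#   "$(build_parsed_query 'recruitment positions master degree' \
#        'Job positions requiring a master degree or higher' \
#        recruitment positions master degree)"
```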

3. Fetch memory details

./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" fetch_memory '{"memory_ids":["ds-abc123"]}'

4. Analyze with code execution

# Create executor (once per session)
./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" create_memory_code_runner

# Run code (use executor_id from above)
./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" run_memory_code '{
  "executor_id": "executor-...",
  "code": "import pandas as pd\npath = get_memory_path(\"ds-abc\", \"file.xlsx\")\ndf = pd.read_excel(path)\nprint(df.describe())"
}'

Available packages: pandas, numpy, openpyxl, xlrd, scipy, scikit-learn, xgboost. Always print() results — not an interactive environment. matplotlib is NOT available.
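Escaping quotes and newlines inside the "code" string by hand is fragile. A hypothetical encoder (not in the repo) lets you write the Python naturally and have json.dumps do the quoting:

```shell
# Build the run_memory_code argument from an executor id and a Python
# source string, letting json.dumps handle all escaping.
make_run_args() {
  python3 -c 'import json, sys; print(json.dumps({"executor_id": sys.argv[1], "code": sys.argv[2]}))' "$1" "$2"
}

# CODE='import pandas as pd
# path = get_memory_path("ds-abc", "file.xlsx")
# print(pd.read_excel(path).describe())'
# ./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" run_memory_code \
#   "$(make_run_args "executor-..." "$CODE")"
```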

Parsing Responses

The script outputs JSON-RPC result lines. Extract data with:

# Parse with python
RESULT=$(./scripts/memorylake_client.sh "$MCP_URL" "$SESSION" get_memorylake_metadata)
echo "$RESULT" | python3 -c "import sys,json; print(json.dumps(json.load(sys.stdin)['result']['structuredContent'], indent=2))"

The structuredContent field contains the typed response object. The content[0].text field contains the same data as a JSON string.

Best Practices

  • Start broad, then narrow. Use get_memorylake_metadata first, then targeted searches.
  • Reuse sessions. Initialize once, call multiple tools. Re-init only if session expires.
  • Handle multilingual content. Write search queries in the language matching the data.
  • Combine search + code. Search to find files, then analyze with code execution.
  • Reuse executor_id. Create one code runner and reuse for all code calls to maintain state.

API & Reliability

Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Machine interfaces

Contract & API

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

MCP: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/snapshot"
curl -s "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/contract"
curl -s "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/trust"

Operational fit

Reliability & Benchmarks

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Machine Appendix

Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.

Missing · GITHUB OPENCLEW

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "MCP"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T04:55:18.004Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "MCP",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "create",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "expire",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:MCP|unknown|profile capability:create|supported|profile capability:expire|supported|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Memorylake Ai",
    "href": "https://github.com/memorylake-ai/memorylake-skills",
    "sourceUrl": "https://github.com/memorylake-ai/memorylake-skills",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "MCP",
    "href": "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "4 GitHub stars",
    "href": "https://github.com/memorylake-ai/memorylake-skills",
    "sourceUrl": "https://github.com/memorylake-ai/memorylake-skills",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:21:22.124Z",
    "isPublic": true
  },
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/memorylake-ai-memorylake-skills/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
