Crawler Summary
Use when creating, configuring, or extending AI agents with the MiniMax M2.1 model - provides complete framework patterns, tool integration, memory systems, and MCP connectivity for building production-ready agents (skill name: minimax-agent-creator, license: MIT). Overview: a complete framework for building AI agents on the MiniMax M2.1 model through an Anthropic-compatible API, integrating system tools, session memory, Claude Skills, and MCP servers. Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
minimax-agent-creator is best for general automation workflows where MCP compatibility matters.
Not Ideal For
Workflows that require deterministic execution, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
MCP
Freshness
Apr 15, 2026
Vendor
Alfredolopez80
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Setup snapshot
git clone https://github.com/alfredolopez80/minimax-agent-creator.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
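That egress-tracing step can be sketched in plain Python. Everything here (the recorder, the allowlist, the mock request) is illustrative and not part of this package:

```python
# Illustrative sandbox check: route the agent's outbound calls through a
# recorder instead of the network, then compare contacted hosts against
# an allowlist before granting access to real data.
ALLOWED_HOSTS = {"api.minimax.io"}

class EgressRecorder:
    def __init__(self):
        self.hosts = []

    def request(self, url: str):
        # Record the host portion of each outbound call instead of sending it.
        host = url.split("//", 1)[-1].split("/", 1)[0]
        self.hosts.append(host)
        return {"status": "mocked"}

def validate_egress(recorder: EgressRecorder) -> list:
    """Return any hosts contacted outside the allowlist."""
    return [h for h in recorder.hosts if h not in ALLOWED_HOSTS]

recorder = EgressRecorder()
recorder.request("https://api.minimax.io/anthropic")  # expected traffic
recorder.request("https://example.com/exfil")         # unexpected egress
violations = validate_egress(recorder)
```

Any non-empty `violations` list would be grounds to block the agent from real customer data.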
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Alfredolopez80
Protocol compatibility
MCP
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
A complete framework for building AI agents with the MiniMax M2.1 model through an Anthropic-compatible API. It integrates system tools, session memory, Claude Skills, and MCP servers to build robust, extensible agents.
Activate me when the user wants to:
Do not use me for:
from mini_agent import LLMClient, Agent
from mini_agent.tools import ReadTool, WriteTool, EditTool, BashTool

async def create_simple_agent():
    # 1. Configure the LLM client
    llm_client = LLMClient(
        api_key="your-api-key",
        api_base="https://api.minimax.io",
        model="MiniMax-M2.1",
    )
    # 2. Define tools
    tools = [
        ReadTool(workspace_dir="./workspace"),
        WriteTool(workspace_dir="./workspace"),
        EditTool(workspace_dir="./workspace"),
        BashTool(),
    ]
    # 3. Memory system (optional: append these to `tools` for persistence)
    from mini_agent.tools import SessionNoteTool, RecallNoteTool
    # 4. Create the agent
    agent = Agent(
        llm_client=llm_client,
        system_prompt="You are a helpful, precise assistant.",
        tools=tools,
        max_steps=50,
        workspace_dir="./workspace",
    )
    # 5. Run a task
    agent.add_user_message("Create a file hello.py that prints 'Hello World'")
    result = await agent.run()
    return result
# config.yaml
api_key: "YOUR_API_KEY"
api_base: "https://api.minimax.io"
model: "MiniMax-M2.1"
agent:
max_steps: 100
workspace_dir: "./workspace"
token_limit: 80000
tools:
enable_file_tools: true
enable_bash: true
enable_mcp: true
enable_skills: true
┌─────────────────────────────────────────────────────────────┐
│ Agent │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ LLM Client │ │ Tool System │ │ Memory System │ │
│ │ (M2.1) │ │ (File/Bash) │ │ (Session Notes) │ │
│ └─────────────┘ └─────────────┘ └─────────────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Workspace Dir │
└─────────────────────────────────────────────────────────────┘
Use cases: simple autonomous tasks, scripting, automation.
┌─────────────────────────────────────────────────────────────┐
│ Agent │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ LLM Client │ │ Tools │ │ Memory System │ │
│ │ M2.1 │──│─┬──────────┐ │ │ │ │
│ │ │ │ │ File │ │ │ ┌───────────────┐ │ │
│ │ │ │ │ Bash │ │──│ │Session Notes │ │ │
│ │ │ │ │ MCP │ │ │ │ Recall │ │ │
│ │ │ │ │ Skills │ │ │ └───────────────┘ │ │
│ │ │ │ └──────────┘ │ └─────────────────────┘ │
│ └─────────────┘ └─────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Context Management (Auto-Summarization) │
└─────────────────────────────────────────────────────────────┘
Use cases: research, complex development, data analysis.
# Agent with sub-agents, as a task pipeline
from mini_agent import Agent

class WorkflowAgent:
    def __init__(self):
        self.planner = self._create_agent("planner")
        self.executor = self._create_agent("executor")
        self.validator = self._create_agent("validator")

    async def run(self, task: str) -> dict:
        plan = await self.planner.run(f"Plan: {task}")
        results = await self.executor.run(plan)
        validation = await self.validator.run(f"Validate: {results}")
        return validation
Use cases: multi-stage tasks, QA automation, data pipelines.
| Tool | Description | Primary Use |
|------|-------------|-------------|
| ReadTool | Reads files with context | Code analysis, review |
| WriteTool | Creates/overwrites files | Code generation, docs |
| EditTool | Exact text replacement | Refactoring, patches |
| BashTool | Runs shell commands | Builds, tests, git |
| BashOutputTool | Fetches process output | Async monitoring |
| BashKillTool | Terminates processes | Process cleanup |
| SessionNoteTool | Persistent memory | Context across sessions |
| RecallNoteTool | Retrieves memory | Access to prior context |
| GetSkillTool | Loads Claude Skills | Advanced tooling |
from typing import Any, Dict, Optional

from mini_agent.tools.base import Tool, ToolResult

class APITool(Tool):
    """Custom tool for API calls."""

    @property
    def name(self) -> str:
        return "api_call"

    @property
    def description(self) -> str:
        return "Makes HTTP calls to REST APIs with support for GET, POST, PUT, DELETE"

    @property
    def parameters(self) -> Dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "method": {
                    "type": "string",
                    "enum": ["GET", "POST", "PUT", "DELETE"],
                    "description": "HTTP method"
                },
                "url": {
                    "type": "string",
                    "description": "Endpoint URL"
                },
                "headers": {
                    "type": "object",
                    "description": "Request headers",
                    "default": {}
                },
                "json": {
                    "type": "object",
                    "description": "JSON body for POST/PUT"
                }
            },
            "required": ["method", "url"]
        }

    async def execute(self, method: str, url: str,
                      headers: Optional[Dict] = None,
                      json: Optional[Dict] = None) -> ToolResult:
        try:
            import httpx
            async with httpx.AsyncClient() as client:
                response = await client.request(
                    method, url, headers=headers, json=json
                )
                return ToolResult(
                    success=response.status_code < 400,
                    content=response.text,
                    error=f"Status: {response.status_code}" if response.status_code >= 400 else None
                )
        except Exception as e:
            return ToolResult(success=False, error=str(e))

# Register with the agent
tools = [
    # ... other tools
    APITool(),
]
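The parameters schema above can also serve as a client-side gate: a call can be checked against `required` and `enum` before it ever reaches `execute`. A minimal sketch (the `validate_args` helper is illustrative, not part of mini_agent):

```python
from typing import Any, Dict, List

def validate_args(schema: Dict[str, Any], args: Dict[str, Any]) -> List[str]:
    """Return a list of problems; an empty list means the call may proceed."""
    problems = []
    # Required keys first, in schema order.
    for key in schema.get("required", []):
        if key not in args:
            problems.append(f"missing required parameter: {key}")
    # Then enum membership for any provided values.
    for key, spec in schema.get("properties", {}).items():
        if key in args and "enum" in spec and args[key] not in spec["enum"]:
            problems.append(f"{key} must be one of {spec['enum']}")
    return problems

# The api_call schema from APITool.parameters, trimmed to the checked fields.
schema = {
    "type": "object",
    "properties": {
        "method": {"type": "string", "enum": ["GET", "POST", "PUT", "DELETE"]},
        "url": {"type": "string"},
    },
    "required": ["method", "url"],
}

ok = validate_args(schema, {"method": "GET", "url": "https://api.minimax.io"})
bad = validate_args(schema, {"method": "FETCH"})
```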
# mini_agent/tools/mcp_loader.py
async def load_mcp_tools_async(config_path: str = "mcp.json") -> list[Tool]:
    """
    Loads tools from MCP servers.
    Supported formats:
    - STDIO: local command (npm, python, etc.)
    - SSE: Server-Sent Events for streaming
    - HTTP: REST API endpoints
    """
MCP configuration (mcp.json):
{
  "mcpServers": {
    "web-search": {
      "description": "Web search and scraping",
      "command": "npx",
      "args": ["-y", "minimax-search"],
      "env": {"JINA_API_KEY": ""},
      "disabled": false
    },
    "memory": {
      "description": "Knowledge graph for memory",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "disabled": false
    },
    "context7": {
      "description": "Library documentation",
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"],
      "disabled": false
    }
  }
}
from mini_agent.tools.skill_tool import GetSkillTool
from mini_agent.tools.skill_loader import create_skill_tools

# Load all available skills
skill_tools, skill_loader = create_skill_tools("./skills")

# Use a specific skill
skill_tool = GetSkillTool()
# In the agent context, use:
# "Use the pdf skill to generate a report"
from mini_agent.tools import SessionNoteTool, RecallNoteTool

# Record important information
record_tool = SessionNoteTool(memory_file="./.agent_memory.json")
await record_tool.execute(
    content="User is a senior Python developer, prefers type hints",
    category="user_preferences"
)

# Retrieve it in a new session
recall_tool = RecallNoteTool(memory_file="./.agent_memory.json")
result = await recall_tool.execute(category="user_preferences")
agent = Agent(
    llm_client=llm_client,
    system_prompt=system_prompt,
    tools=tools,
    max_steps=100,
    workspace_dir="./workspace",
    token_limit=80000,  # auto-summarizes when exceeded
)
# The agent automatically:
# 1. Detects when tokens > token_limit
# 2. Summarizes older messages
# 3. Keeps relevant context
# 4. Continues execution without losing information
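The token-limit behavior described in those comments can be sketched as a simple policy loop. The token counter and summary here are hypothetical stand-ins for whatever the framework uses internally (a real implementation would use a tokenizer such as tiktoken and an LLM-written summary):

```python
def maybe_summarize(messages, token_limit, keep_recent=2):
    """Collapse older messages into one summary once the limit is exceeded."""
    def count_tokens(msgs):
        # Stand-in: word count instead of a real tokenizer.
        return sum(len(m.split()) for m in msgs)

    if count_tokens(messages) <= token_limit:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Stand-in: a real agent would ask the LLM for this summary.
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent

history = ["msg one here", "msg two here", "msg three here", "latest message"]
compact = maybe_summarize(history, token_limit=6)
```

The key property is that the most recent messages survive verbatim while older context is compressed, which matches the four-step behavior listed above.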
from mini_agent import LLMClient, LLMProvider

# Anthropic client (recommended for MiniMax)
client = LLMClient(
    api_key="your-api-key",
    provider=LLMProvider.ANTHROPIC,
    api_base="https://api.minimax.io/anthropic",  # /anthropic suffix
    model="MiniMax-M2.1",
)

# OpenAI client
client = LLMClient(
    api_key="your-api-key",
    provider=LLMProvider.OPENAI,
    api_base="https://api.minimax.io/v1",  # /v1 suffix
    model="MiniMax-M2.1",
)
| Parameter | Range | Default | Description |
|-----------|-------|---------|-------------|
| temperature | 0.0 - 2.0 | 1.0 | Creativity (1.0 = balanced) |
| max_tokens | 1 - 8192 | 4096 | Maximum output tokens |
| top_p | 0.0 - 1.0 | 0.9 | Nucleus sampling |
| stream | boolean | true | Response streaming |
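The ranges in the table above can be enforced before a request is sent. A small illustrative validator (not part of the MiniMax SDK) that clamps values and falls back to the documented defaults:

```python
# (lo, hi, default) per the sampling-parameter table.
RANGES = {
    "temperature": (0.0, 2.0, 1.0),
    "max_tokens": (1, 8192, 4096),
    "top_p": (0.0, 1.0, 0.9),
}

def normalize_params(params: dict) -> dict:
    """Clamp sampling parameters to their documented ranges."""
    out = {}
    for name, (lo, hi, default) in RANGES.items():
        value = params.get(name, default)
        out[name] = min(max(value, lo), hi)
    out["stream"] = params.get("stream", True)
    return out

cfg = normalize_params({"temperature": 3.5, "max_tokens": 1024})
```

Here an out-of-range `temperature` of 3.5 is clamped to 2.0 and the omitted `top_p` falls back to 0.9.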
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.minimax.io/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "your-api-key",
    "ANTHROPIC_MODEL": "MiniMax-M2.1",
    "API_TIMEOUT_MS": "3000000"
  }
}
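The same settings can be read back from the environment in Python; a minimal sketch using the variable names from the block above (the helper function itself is illustrative):

```python
import os

def client_config_from_env() -> dict:
    """Assemble client settings from the Claude Code-style env block above."""
    return {
        "api_base": os.environ.get("ANTHROPIC_BASE_URL", "https://api.minimax.io/anthropic"),
        "api_key": os.environ.get("ANTHROPIC_AUTH_TOKEN", ""),
        "model": os.environ.get("ANTHROPIC_MODEL", "MiniMax-M2.1"),
        "timeout_ms": int(os.environ.get("API_TIMEOUT_MS", "3000000")),
    }

# Simulate the env block being set.
os.environ["ANTHROPIC_MODEL"] = "MiniMax-M2.1"
os.environ["API_TIMEOUT_MS"] = "3000000"
config = client_config_from_env()
```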
You are {agent_name}, an AI agent specialized in {domain}.
## Your Role
- {role_description}
- You interact primarily through tools and code
## Principles
1. {principle_1}
2. {principle_2}
3. {principle_3}
## Constraints
- {constraint_1}
- {constraint_2}
## When working with code:
- Always verify before modifying
- Use type hints in Python
- Handle errors explicitly
- Document complex functions
## When you don't know something:
- State your uncertainty clearly
- Suggest how to investigate or verify
- Do not invent information
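A template like the one above is plain text with named placeholders, so it can be filled with `str.format`. The field names and values below are illustrative:

```python
# Abbreviated version of the system-prompt template above.
TEMPLATE = """You are {agent_name}, an AI agent specialized in {domain}.

## Your Role
- {role_description}

## Principles
1. {principle_1}
"""

prompt = TEMPLATE.format(
    agent_name="DocBot",                                   # illustrative values
    domain="technical documentation",
    role_description="Summarize and cross-check engineering docs",
    principle_1="Cite sources for every claim",
)
```

After formatting, no placeholder braces remain, so the result can be passed directly as `system_prompt`.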
You are an advanced AI assistant with extensive reasoning capabilities.
## Capabilities
- Step-by-step reasoning for complex problems
- Tool use to run code and access files
- Session memory to maintain context
## Directives
1. Think out loud (thinking blocks) for complex problems
2. Use tools when you need to verify information
3. Keep your responses precise and fact-based
4. If you cannot complete a task, explain clearly why
## Response Format
- When using tools, report complete results
- For code, include context and explanations
- For decisions, show the reasoning
See: templates/research_agent.py
"""Agent specialized in research and analysis."""
from mini_agent import LLMClient, Agent
from mini_agent.tools import (
    ReadTool, WriteTool, BashTool,
    SessionNoteTool, RecallNoteTool
)

class ResearchAgent:
    def __init__(self, api_key: str, workspace_dir: str = "./workspace"):
        self.workspace_dir = workspace_dir
        self.llm_client = LLMClient(
            api_key=api_key,
            api_base="https://api.minimax.io/anthropic",
            model="MiniMax-M2.1",
        )
        self.agent = self._create_agent()

    def _create_agent(self) -> Agent:
        tools = [
            ReadTool(workspace_dir=self.workspace_dir),
            WriteTool(workspace_dir=self.workspace_dir),
            BashTool(),
            SessionNoteTool(memory_file=f"{self.workspace_dir}/.research_memory.json"),
            RecallNoteTool(memory_file=f"{self.workspace_dir}/.research_memory.json"),
        ]
        return Agent(
            llm_client=self.llm_client,
            system_prompt=self._get_system_prompt(),
            tools=tools,
            max_steps=100,
            workspace_dir=self.workspace_dir,
        )

    def _get_system_prompt(self) -> str:
        return """You are a specialized research agent.
## Your Goal
Analyze complex topics, find relevant information, and synthesize findings.
## Methodology
1. Identify reliable sources
2. Cross-check information
3. Document findings
4. Provide evidence-based conclusions
## Constraints
- Only use verifiable sources
- State uncertainty clearly
- Cite your sources when possible"""
See: templates/code_agent.py
"""Agent specialized in code development."""
from mini_agent import LLMClient, Agent
from mini_agent.tools import (
    ReadTool, WriteTool, EditTool, BashTool
)

class CodeAgent:
    def __init__(self, api_key: str, workspace_dir: str = "./workspace"):
        self.workspace_dir = workspace_dir
        self.llm_client = LLMClient(
            api_key=api_key,
            api_base="https://api.minimax.io/anthropic",
            model="MiniMax-M2.1",
        )
        self.agent = self._create_agent()

    def _create_agent(self) -> Agent:
        tools = [
            ReadTool(workspace_dir=self.workspace_dir),
            WriteTool(workspace_dir=self.workspace_dir),
            EditTool(workspace_dir=self.workspace_dir),
            BashTool(),
        ]
        return Agent(
            llm_client=self.llm_client,
            system_prompt=self._get_system_prompt(),
            tools=tools,
            max_steps=50,
            workspace_dir=self.workspace_dir,
        )

    def _get_system_prompt(self) -> str:
        return """You are a senior developer expert in multiple languages.
## Principles
1. Write clean, maintainable code
2. Use type hints and documentation
3. Handle errors explicitly
4. Prefer simple solutions over complex ones
## Preferred Stack
- Python with type hints
- Modern JavaScript/TypeScript
- SQL for databases
- Bash for automation
## Patterns
- SOLID for OOP
- Functional programming where it applies
- TDD when feasible"""
See: templates/web_agent.py
"""Agent specialized in web scraping and extraction."""
from mini_agent import LLMClient, Agent
from mini_agent.tools import (
    ReadTool, WriteTool, BashTool
)

class WebAgent:
    def __init__(self, api_key: str, workspace_dir: str = "./workspace"):
        self.workspace_dir = workspace_dir
        self.llm_client = LLMClient(
            api_key=api_key,
            api_base="https://api.minimax.io/anthropic",
            model="MiniMax-M2.1",
        )
        self.agent = self._create_agent()

    def _create_agent(self) -> Agent:
        tools = [
            ReadTool(workspace_dir=self.workspace_dir),
            WriteTool(workspace_dir=self.workspace_dir),
            BashTool(),
            # MCP tools would be added here
        ]
        return Agent(
            llm_client=self.llm_client,
            system_prompt=self._get_system_prompt(),
            tools=tools,
            max_steps=30,
            workspace_dir=self.workspace_dir,
        )

    def _get_system_prompt(self) -> str:
        return """You are a specialized web-extraction agent.
## Capabilities
- Web page scraping
- Structured data extraction
- Online information search
- Web content summarization
## Constraints
- Respect robots.txt
- Do not overload servers
- Use appropriate rate limiting
- Store data in an organized way
## Output Format
- Structured JSON where applicable
- Markdown for summaries
- CSV for tabular data"""
| Server | Type | Description |
|--------|------|-------------|
| polymarket | Python | Prediction market data |
| mermaid | npx | Mermaid diagrams |
| playwright | npx | Browser automation |
| context7 | npx | Library documentation |
| greptile | HTTP | Automated code review |
| supabase | HTTP | Backend database |
| github | HTTP | GitHub integration |
| stripe | HTTP | Payments and fintech |
| linear | HTTP | Project management |
from mini_agent.tools.mcp_loader import load_mcp_tools_async

async def create_agent_with_mcp():
    llm_client = LLMClient(api_key="...", model="MiniMax-M2.1")
    # Load base tools
    tools = [
        ReadTool(workspace_dir="./workspace"),
        WriteTool(workspace_dir="./workspace"),
        BashTool(),
    ]
    # Load MCP tools
    mcp_tools = await load_mcp_tools_async("mcp.json")
    tools.extend(mcp_tools)
    agent = Agent(
        llm_client=llm_client,
        system_prompt="You are an agent with access to MCP tools.",
        tools=tools,
        max_steps=50,
        workspace_dir="./workspace",
    )
    return agent
See: examples/analysis_pipeline.py
"""
Complete multi-stage analysis pipeline.
"""
import asyncio
from pathlib import Path
from mini_agent import LLMClient, Agent
from mini_agent.tools import ReadTool, WriteTool, BashTool

class AnalysisPipeline:
    def __init__(self, api_key: str, workspace_dir: str = "./workspace"):
        self.workspace_dir = Path(workspace_dir)
        self.workspace_dir.mkdir(parents=True, exist_ok=True)
        self.llm_client = LLMClient(
            api_key=api_key,
            api_base="https://api.minimax.io/anthropic",
            model="MiniMax-M2.1",
        )

    async def run(self, task: str) -> dict:
        """Runs the full pipeline."""
        # Phase 1: research
        research_result = await self._research_phase(task)
        # Phase 2: analysis
        analysis_result = await self._analysis_phase(research_result)
        # Phase 3: synthesis
        final_result = await self._synthesis_phase(analysis_result)
        return final_result

    async def _research_phase(self, query: str) -> str:
        """Research phase."""
        agent = self._create_agent(
            "Research: " + query,
            "You are a thorough researcher. Find detailed information."
        )
        return await agent.run()

    async def _analysis_phase(self, context: str) -> str:
        """Analysis phase."""
        agent = self._create_agent(
            f"Analyze this information:\n\n{context}",
            "You are a rigorous analyst. Identify patterns, inconsistencies, and insights."
        )
        return await agent.run()

    async def _synthesis_phase(self, analysis: str) -> dict:
        """Synthesis phase."""
        agent = self._create_agent(
            f"Synthesize the analysis into a structured report:\n\n{analysis}",
            "You are a technical writer. Produce clear, structured reports."
        )
        result = await agent.run()
        # Save the result
        output_file = self.workspace_dir / "analysis_report.md"
        await WriteTool(workspace_dir=str(self.workspace_dir)).execute(
            path=str(output_file), content=result
        )
        return {"report": result, "output_file": str(output_file)}

    def _create_agent(self, task: str, system_prompt: str) -> Agent:
        tools = [
            ReadTool(workspace_dir=str(self.workspace_dir)),
            WriteTool(workspace_dir=str(self.workspace_dir)),
            BashTool(),
        ]
        agent = Agent(
            llm_client=self.llm_client,
            system_prompt=system_prompt,
            tools=tools,
            max_steps=50,
            workspace_dir=str(self.workspace_dir),
        )
        agent.add_user_message(task)  # queue the phase's task for agent.run()
        return agent

# Usage
async def main():
    pipeline = AnalysisPipeline(api_key="your-api-key")
    result = await pipeline.run("Artificial intelligence trends 2024")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
See: examples/agent_with_retry.py
"""
Agent with error handling and retries.
"""
import asyncio
from mini_agent import LLMClient, Agent
from mini_agent.tools import ReadTool, WriteTool, BashTool
from mini_agent.retry import async_retry, RetryConfig

class ResilientAgent:
    def __init__(self, api_key: str):
        self.llm_client = LLMClient(
            api_key=api_key,
            api_base="https://api.minimax.io/anthropic",
            model="MiniMax-M2.1",
        )

    @async_retry(RetryConfig(max_retries=3, initial_delay=2.0))
    async def run_with_retry(self, task: str, max_steps: int = 50) -> str:
        """Runs a task with automatic retries."""
        agent = self._create_agent()
        agent.add_user_message(task)
        return await agent.run()

    def _create_agent(self) -> Agent:
        tools = [
            ReadTool(workspace_dir="./workspace"),
            WriteTool(workspace_dir="./workspace"),
            BashTool(),
        ]
        return Agent(
            llm_client=self.llm_client,
            system_prompt="You are a precise and careful assistant.",
            tools=tools,
            max_steps=50,
            workspace_dir="./workspace",
        )
from mini_agent.logger import get_logger

logger = get_logger(__name__)
# Automatic logs for:
# - LLM requests
# - LLM responses
# - Tool executions
# - Errors and retries

# See logs in: ~/.mini-agent/logs/
agent = Agent(
    llm_client=llm_client,
    tools=tools,
    max_steps=100,
    # Logs are saved automatically
)
cat ~/.mini-agent/logs/*.log | tail -100
Quick fixes: use verify=False only in development; raise API_TIMEOUT_MS for slow networks.

| Error | Cause | Solution |
|-------|-------|----------|
| SSL CERTIFICATE_VERIFY_FAILED | Expired certificates | pip install --upgrade certifi |
| ModuleNotFoundError | Not in the project directory | cd Mini-Agent && python -m ... |
| 401 Unauthorized | Invalid API key | Check credentials |
| RateLimitError | Too many requests | Implement backoff |
| Context overflow | Token limit exceeded | Reduce context or raise token_limit |
templates/ - Ready-to-use agent templates
scripts/ - Utility tools
examples/ - Complete examples
references/ - Detailed documentation

# pyproject.toml
httpx>=0.25.0          # Async HTTP client
pydantic>=2.0.0        # Data validation
pyyaml>=6.0.0          # Config parsing
tiktoken>=0.5.0        # Token counting
requests>=2.31.0       # Sync HTTP (fallback)
prompt-toolkit>=3.0.0  # CLI interface
Before deploying your agent:
MIT License - see the LICENSE file for details.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/snapshot"
curl -s "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/contract"
curl -s "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/trust"
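Callers hitting these endpoints can follow the backoff schedule published in the invocation guide below (3 attempts, 500/1500/3500 ms, retrying on 429/503). A sketch with a stubbed fetch function, so no real network traffic occurs:

```python
import time

BACKOFF_MS = [500, 1500, 3500]  # from the published retryPolicy
RETRYABLE = {429, 503}          # HTTP_429 / HTTP_503

def fetch_with_retry(fetch, url, sleep=time.sleep):
    """Call fetch(url) up to len(BACKOFF_MS) times, sleeping between retries."""
    status = None
    for attempt, delay_ms in enumerate(BACKOFF_MS):
        status = fetch(url)
        if status not in RETRYABLE:
            return status
        if attempt < len(BACKOFF_MS) - 1:
            sleep(delay_ms / 1000)
    return status

# Stub transport: fails twice with 429, then succeeds.
responses = iter([429, 429, 200])
calls = []
def fake_fetch(url):
    calls.append(url)
    return next(responses)

status = fetch_with_retry(
    fake_fetch,
    "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/snapshot",
    sleep=lambda s: None,  # skip real waiting in the sketch
)
```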
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T04:50:31.837Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
}
],
"flattenedTokens": "protocol:MCP|unknown|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Alfredolopez80",
"href": "https://github.com/alfredolopez80/minimax-agent-creator",
"sourceUrl": "https://github.com/alfredolopez80/minimax-agent-creator",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T04:13:05.608Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP",
"href": "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T04:13:05.608Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/alfredolopez80-minimax-agent-creator/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]
Sponsored
Ads related to minimax-agent-creator and adjacent AI workflows.