Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
30-day sprint from LLM basics to production AI systems — LangGraph multi-agent orchestration, RAG pipelines, HITL workflows, LangSmith observability, CrewAI, FastAPI deployment & SSE streaming. Built with Python, Gemini, ChromaDB. 🚀 30-Day AI Engineering Sprint This repository documents my journey through the **AI Engineering Sprint 2026**, moving from basic LLM calls to complex, production-ready AI agents. --- 🛠 Global Tech Stack - **Language:** Python 3.13 - **Model:** Google Gemini 3.0 Flash (via google-genai) - **Environment Management:** Virtual Environments (venv) & python-dotenv --- 📅 Day 1: Structured Meeting Extraction **Goal:** Tr
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
AI-Engineering-Sprint is best suited for CrewAI and multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Deterministic execution: contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB REPOS, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Makarand Thorat
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Setup snapshot
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Makarand Thorat
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
1
Snippets
0
Languages
python
python
SYSTEM_PROMPT = """
YOU ARE A SELF-CORRECTING RESEARCH ANALYST.
WORKFLOW:
1. SEARCH: Always search memory first.
2. EVALUATE: If results are insufficient, use 'add_knowledge'.
3. REFLECT: Critique your draft for hallucinations or logic errors.
4. FINAL: Present the refined answer with citations.
"""
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB REPOS
Editorial quality
ready
30-day sprint from LLM basics to production AI systems — LangGraph multi-agent orchestration, RAG pipelines, HITL workflows, LangSmith observability, CrewAI, FastAPI deployment & SSE streaming. Built with Python, Gemini, ChromaDB. 🚀 30-Day AI Engineering Sprint This repository documents my journey through the **AI Engineering Sprint 2026**, moving from basic LLM calls to complex, production-ready AI agents. --- 🛠 Global Tech Stack - **Language:** Python 3.13 - **Model:** Google Gemini 3.0 Flash (via google-genai) - **Environment Management:** Virtual Environments (venv) & python-dotenv --- 📅 Day 1: Structured Meeting Extraction **Goal:** Tr
This repository documents my journey through the AI Engineering Sprint 2026, moving from basic LLM calls to complex, production-ready AI agents.
Day 1 Goal: Transform messy transcripts into machine-readable data. (Code: /01_meeting_extractor)
Day 2 Goal: Chat with long documents without hitting context limits, using langchain-text-splitters. (Code: /02_chat_with_transcript)
Day 3 Goal: Transition from text-based processing to native audio "listening" and analysis. Covers the PROCESSING status of large media files in the Google File API and [MM:SS] timestamps linked to specific transcript segments. (Code: /03_audio_processor)
Day 4 Goal: Merge Function Calling, MCP Standards, and ReAct Reasoning into a single autonomous agent. Implements a Thought -> Action -> Observation loop, ensuring the model reasons through complex, multi-step tasks before answering, with demo tools get_product_inventory and calculate_shipping_time. (Code: /04_agentic_foundations)
Day 5 Goal: Bridge the gap between temporary chat context and permanent episodic recall by implementing a tiered memory architecture.
This milestone demonstrates two distinct "temporal" layers of an AI brain:
Short-Term Memory (05_short_term_memory.py): Uses ChatSession to manage the immediate context window. In the google-genai SDK, the session history is managed via an internal _history state that grows by 2 entries (User + Model) for every interaction.
Episodic Recall (05_episodic_recall.py): commit_to_diary is triggered when the AI identifies significant facts or preferences, saving them to episodic_diary.json; search_diary enables the agent to perform targeted keyword searches over past interactions.
Day 6 Goal: Build a production-grade RAG (Retrieval-Augmented Generation) system that autonomously ingests web data, stores it in a vector database, and synthesizes it with live system status.
This project implements a "Full-Stack" Agentic workflow involving three core layers:
Ingestion (trafilatura): Strips away "web noise" (HTML boilerplate, ads, navbars) to ensure only high-signal text is fed to the model, chunking on paragraph breaks (\n\n) to preserve the semantic integrity of the information.
Vector Store (ChromaDB + text-embedding-004): Embeddings are generated with text-embedding-004 and stored in a persistent chromadb instance, allowing the agent to retain "learned" knowledge indefinitely across script restarts.
Day 7 Goal: Advance from "Simple RAG" to "Agentic RAG" by implementing a self-critique loop that identifies information gaps and corrects hallucinations before responding.
In Day 6, the agent blindly trusted its first search result. In Day 7, we introduced Cognitive Reflection. The agent now follows an internal "Standard Operating Procedure" (SOP):
The core of Day 7 is moving logic out of Python loops and into the System Instruction. This allows the model to manage its own tool-use and reflection phases natively.
SYSTEM_PROMPT = """
YOU ARE A SELF-CORRECTING RESEARCH ANALYST.
WORKFLOW:
1. SEARCH: Always search memory first.
2. EVALUATE: If results are insufficient, use 'add_knowledge'.
3. REFLECT: Critique your draft for hallucinations or logic errors.
4. FINAL: Present the refined answer with citations.
"""
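The SOP encoded in the system prompt above can be sketched as a deterministic Python loop. Everything here (the toy memory store, `search_memory`, `add_knowledge`, `reflect`) is a hypothetical stand-in for the repo's actual tools, not its implementation:

```python
# Minimal sketch of the SEARCH -> EVALUATE -> REFLECT -> FINAL workflow.
# All functions are illustrative stand-ins, not the repo's actual tools.

def search_memory(store: dict, query: str) -> list[str]:
    """SEARCH: naive keyword lookup over a toy memory store."""
    return [v for k, v in store.items() if query.lower() in k.lower()]

def add_knowledge(store: dict, key: str, fact: str) -> None:
    """EVALUATE: backfill memory when the search came up empty."""
    store[key] = fact

def reflect(draft: str, sources: list[str]) -> str:
    """REFLECT: reject any claim that has no supporting source."""
    return draft if sources else "Insufficient evidence to answer."

def answer(store: dict, query: str, fallback_fact: str) -> str:
    hits = search_memory(store, query)
    if not hits:                      # memory was insufficient
        add_knowledge(store, query, fallback_fact)
        hits = search_memory(store, query)
    draft = f"{hits[0]} [source: memory]"
    return reflect(draft, hits)       # FINAL: refined answer with citation

memory = {"python release": "Python 3.13 was released in 2024."}
print(answer(memory, "python release", "n/a"))
```

The point of the design is that the control flow (search, then evaluate, then reflect) lives in the prompt rather than in Python; the sketch only makes that loop explicit.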
Goal: Transition from manual Python loops to a professional orchestration framework by mastering the 5 core graph patterns in LangGraph.
This milestone marks the shift from "scripting" to System Architecture. By decoupling the workflow logic (the "Conveyor Belt") from the model's intelligence (the "Worker"), I’ve built a robust skeleton for deterministic state management.
State (TypedDict): A State dictionary that serves as the "single source of truth" passed between nodes. I implemented five foundational patterns using pure Python logic to verify the infrastructure:
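The "single source of truth" idea can be sketched without any LangGraph dependency; the node names and fields below are illustrative, not from the repo:

```python
# Sketch of LangGraph's core idea: a typed State dict flowing through nodes.
# Node names and state fields are illustrative; no LangGraph import is used.
from typing import TypedDict

class State(TypedDict):
    text: str
    word_count: int

def ingest(state: State) -> State:
    return {"text": state["text"].strip(), "word_count": state["word_count"]}

def count(state: State) -> State:
    return {"text": state["text"], "word_count": len(state["text"].split())}

def run_pipeline(state: State) -> State:
    # A linear "conveyor belt": each node receives and returns the full state.
    for node in (ingest, count):
        state = node(state)
    return state

print(run_pipeline({"text": "  hello graph world  ", "word_count": 0}))
```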
Goal: Implement a self-correcting state machine that loops between reasoning and tool execution to satisfy complex, multi-step goals.
This milestone implements the ReAct (Reasoning + Acting) pattern. By using LangGraph's cyclic capabilities, the agent can now perform internal loops to gather information or process data before ever returning a final result to the user.
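The cycle described above can be sketched in plain Python. The scripted "LLM" and the `add` tool below are stand-ins; in the real graph, LangGraph's edges replace the `while` loop:

```python
# Plain-Python sketch of the ReAct cycle: reason -> (maybe) act -> observe -> repeat.
# The "LLM" is a scripted stub; tool names and message shapes are illustrative.

def fake_llm(messages: list[dict]) -> dict:
    """Stub reasoning node: requests a tool until an observation exists."""
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if tool_msgs:
        obs = tool_msgs[-1]["content"]
        return {"role": "ai", "content": f"Final answer: {obs}", "tool_calls": []}
    return {"role": "ai", "content": "", "tool_calls": [{"name": "add", "args": (2, 3)}]}

TOOLS = {"add": lambda a, b: a + b}

def should_continue(message: dict) -> str:
    # Router: cycle back through the tools node if the model requested any.
    return "tools" if message["tool_calls"] else "end"

def run(query: str) -> str:
    messages = [{"role": "human", "content": query}]
    while True:
        ai = fake_llm(messages)
        messages.append(ai)           # add_messages semantics: append, never overwrite
        if should_continue(ai) == "end":
            return ai["content"]
        for call in ai["tool_calls"]:
            result = TOOLS[call["name"]](*call["args"])
            messages.append({"role": "tool", "content": str(result)})

print(run("What is 2 + 3?"))
```

Appending rather than overwriting messages is what lets the model "see" its own earlier tool results on the next pass through the loop.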
SystemMessage: Maintains persona and reliability throughout the cycle.
our_agent: The LLM reasoning node.
tools: A dedicated execution node for running Python functions.
should_continue: A router that inspects tool_calls to decide if the graph should cycle back to the agent or terminate.
add_messages Reducer: Prevents message overwriting, allowing the agent to "remember" the results of tool executions from previous cycles. Message flow: HumanMessage -> AIMessage (Tool Call) -> ToolMessage (Result) -> AIMessage (Final Answer).
Goal: Standardize agent actions by integrating LangGraph's pre-built ToolNode and implementing robust exit conditions for autonomous document management.
I evolved the Drafter agent to delegate execution to a specialized ToolNode. This ensures the agent follows a strict ReAct (Reasoning and Acting) pattern:
ToolNode: Executes the Python functions (update or save); the model autonomously decides which tool to use based on the semantic intent of the user prompt.
SystemMessage: Provides the current document state, allowing the AI to understand the context of modifications.
agent: The LLM reasoning node.
tools: A dedicated execution node (ToolNode) for running Python functions.
should_continue: A router that inspects ToolMessage contents to decide if the graph should cycle back to the agent or terminate; it parses tool outputs to reliably break the autonomous loop upon successful file saving.
add_messages Reducer: Prevents message overwriting, allowing the agent to "remember" the results of tool executions from previous cycles.
Refactor: Replaced manual if/else tool routing with langgraph.prebuilt.ToolNode, significantly reducing code complexity.
Goal: Build a Retrieval-Augmented Generation (RAG) agent that autonomously decides when to consult external PDF documents to answer complex queries.
This milestone moves beyond the agent's internal training data. By integrating a Vector Database, the agent can now perform "Open-Book" exams on specific datasets (Stock Market Performance 2024).
Ingestion: Documents are loaded with PyPDFLoader, split into semantic chunks, and embedded using gemini-embedding-001.
Retrieval: The agent calls retriever_tool only when the user's query requires specific data from the document.
Goal: Build an autonomous agent capable of writing Python code, executing it in a real environment, and using real-time error feedback to self-correct until a valid solution is reached.
Today’s milestone introduces the Cyclic Reasoning Pattern. Unlike traditional linear pipelines, this agent operates within a "Loop of Truth"—it cannot provide a final answer until its generated code executes successfully.
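The "Loop of Truth" can be sketched as an execute-and-retry loop. Here the list of candidate snippets stands in for successive LLM drafts; the function names are illustrative:

```python
# Sketch of the "Loop of Truth": run generated code, capture the traceback,
# and retry until execution succeeds. Candidate snippets stand in for LLM output.
import traceback

def python_executor(code: str) -> tuple[bool, str]:
    """Execute a snippet, returning its result or the exact stack trace."""
    try:
        namespace: dict = {}
        exec(code, namespace)
        return True, str(namespace.get("result"))
    except Exception:
        return False, traceback.format_exc()

def self_correct(candidates: list[str], max_attempts: int = 3) -> str:
    feedback = ""
    for code in candidates[:max_attempts]:
        ok, output = python_executor(code)
        if ok:
            return output
        feedback = output  # in the real agent, this trace goes back to the LLM
    raise RuntimeError(f"No valid solution after retries:\n{feedback}")

# The first "draft" has a NameError; the corrected second draft succeeds.
print(self_correct(["result = undefined_var + 1", "result = sum(range(5))"]))
```

The key property is that no final answer can escape the loop until `python_executor` reports success.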
Reasoning Node (call_model): The LLM acts as a Senior Developer, interpreting the user's prompt to architect a Python solution.
Execution Node (python_executor): A custom environment that runs the code and captures the output, or the exact Stack Trace if it fails.
Goal: Transform the self-correcting agent into a production-ready system by migrating from volatile RAM-based memory to a persistent MySQL database backend.
Today’s milestone introduces Durable State Management. By integrating a relational database, the agent's conversation history and internal reasoning (checkpoints) are preserved even if the script crashes or the server restarts.
Persistence Layer (PyMySQLSaver): Replaces the temporary MemorySaver. This layer connects the LangGraph workflow to a dedicated MySQL schema (langgraph_db).
Checkpointing Engine: After every node execution (e.g., call_model or tools), a binary "snapshot" of the entire AgentState is serialized and saved to SQL tables.
Thread-Based Retrieval: Uses a unique thread_id to act as a lookup key. This allows the agent to distinguish between different users and resume specific conversations instantly.
Schema Automation: Implemented checkpointer.setup(), which automatically architects the required relational tables (checkpoints, checkpoint_blobs, checkpoint_writes) within the database.
Long-Term Memory: Successfully moved the agent's "brain" from temporary memory to a permanent disk-based storage system.
Session Resumption: Enabled the ability to stop the Python process and resume a complex debugging task hours later without losing progress.
Environment-Driven Security: Decoupled sensitive database credentials from the logic layer by implementing a secure .env configuration.
Multi-User Scalability: Established a foundation where unique thread_id values allow one agent instance to manage hundreds of independent, persistent conversations.
Goal: Build a production-ready autonomous research agent that leverages real-time web browsing and automated file persistence to synthesize complex topics into structured notes.
Today’s milestone marks the transition from simple chat loops to a Multi-Tool Orchestration system. The agent acts as a controller, deciding which tools to call and when the research objective has been met.
Web Search (duckduckgo_search): Provides the agent with live access to the internet, bypassing the LLM's static knowledge cutoff.
File Persistence (save_research_note): A custom tool decorated with @tool that allows the agent to interact with the local file system to save markdown notes.
Routing: The router inspects tool_calls, determining the flow: Agent ➔ Router ➔ Action (Tools) ➔ Agent.
Naming: The agent chooses descriptive file names (e.g., ai_trends.md) based on research context rather than just using generic defaults.
Goal: Transition from a "Swiss Army Knife" single agent to a professional "Kitchen Staff" architecture. Today's goal was to build a system where a central Supervisor coordinates specialized Researcher and Writer agents to produce a technical report.
In this design, agents don't talk to each other directly (Choreography); instead, they report back to a central "Brain" (Orchestration).
The Writer persists the final report via a write_file tool.
Problem: Gemini's API enforces a strict "User-Assistant-Tool" sequence. In multi-agent loops, the history often results in multiple "Assistant" turns in a row, causing a 400 INVALID_ARGUMENT error.
Fix: Implemented a Context Reset. Before invoking a worker, we wrap the relevant history into a fresh HumanMessage. This "tricks" the model into seeing a new user turn, ensuring API compliance.
Problem: The Researcher often calls multiple tools simultaneously. LangGraph stores these as a list of messages, which caused an AttributeError when trying to access .content directly.
Fix: Added a robust check in the Supervisor to detect list objects and join the contents into a single string for analysis.
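The fix can be sketched as a small normalization helper; the `Message` class below is a minimal stand-in for LangChain message objects:

```python
# Sketch of the Supervisor fix described above: tool output may arrive as a
# single message or a list of messages, so normalize before reading .content.
# Message is a minimal stand-in for LangChain message objects.
from dataclasses import dataclass

@dataclass
class Message:
    content: str

def extract_content(output) -> str:
    """Join list results into one string; pass single messages through."""
    if isinstance(output, list):
        return "\n".join(m.content for m in output)
    return output.content

print(extract_content([Message("result A"), Message("result B")]))
```

Guarding on the type once, at the Supervisor boundary, keeps every downstream consumer free of `AttributeError` checks.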
A shared TypedDict state passes the "baton" between agents.
Goal: Transition from manual graph-based agents to a high-level Agentic Framework. Today, I built a two-agent "Crew" consisting of a Senior Research Analyst and a Tech Content Strategist to automate the end-to-end process of researching and reporting on emerging tech trends.
Unlike simple chains, CrewAI uses a "Role-Playing" architecture where agents are defined by their Role, Goal, and Backstory. This provides a much deeper cognitive context for the LLM.
Tools: SerperDevTool for real-time Google Search access.
I implemented a Process.sequential workflow. This ensures a strict linear progression:
By using the langchain_google_genai provider, I integrated Gemini 1.5 Flash as the brain for both agents. This allows for high-speed reasoning while maintaining low token costs.
Instead of just printing to the console, I configured the final task with the output_file="crew_report.md" parameter. This ensures the agentic workflow results in a tangible asset saved directly to the local workspace.
Defining a precise expected_output for each task is the most critical step to prevent agent "hallucination" or scope creep.
SerperDevTool effectively gave the agents "eyes" on the current internet, bridging the gap between training data and real-time facts.
Goal: Implement a "Concierge Pattern" using LangGraph. Today, I built a system that uses an LLM-based Router to dynamically triage user requests to specialized expert nodes (Math vs. Creative) using the modern Command pattern.
Unlike basic linear chains, this graph uses a "Zero-Edge" approach for internal routing.
Specialist Nodes (math_expert and creative_expert): Execute only when called by the Router.
Command Pattern: A langgraph.types.Command object handles both the state update and the navigation (goto) in a single return statement.
LLM-Based Triage
Moved away from fragile if "math" in query checks. By using a small "Router Prompt," the system can now handle complex natural language (e.g., "What is 15% of 450?") and route it correctly.
Handling Multimodal Content Blocks
Navigated the Gemini 3 Flash output structure. Since the model returns a list[dict] for content (to support text + image blocks), I implemented direct indexing to extract the decision_text cleanly.
State Management
Used the built-in MessagesState to maintain a clean chat history while allowing the specialists to access the original human query through simple list indexing (state["messages"][0]).
Today, I moved from simple "Agents" to a "Self-Healing Organization." My system uses a Hierarchical Process nested inside a CrewAI Flow.
Flow: Execution moves from kickoff to Tasks.
State: Pydantic maintains a retry_count and feedback loop across multiple execution attempts.
Lazy loading: __init__ hires agents only when the department is called.
Goal: To build an automated SDLC (Software Development Life Cycle) using a multi-agent "Department" nested within a stateful Flow.
I implemented a Creator vs. Critic pattern to ensure high-quality, peer-reviewed output.
Shared state: DevState tracks retry counts, feedback, and code.
Review loop: The Critic writes review_feedback and triggers a re-coding phase.
Memory: Pydantic carries feedback across loops so the Coder learns from mistakes.
Orchestration: Process.hierarchical inside the Crew allows a Manager LLM to oversee the handoff.
Routing: A @router determines if the code is "Deployable" or "Needs Fix."
Quality increases exponentially when you give one agent the explicit goal to find faults. By setting the Reviewer's backstory to "Paranoid Security Engineer," the final code is documented, type-hinted, and robust.
Goal: To migrate the SDLC workflow from CrewAI to LangGraph to implement enterprise-grade Human-in-the-Loop (HITL) and state persistence.
I shifted from simple orchestration to a State Machine architecture where the human acts as the final gatekeeper.
State: A TypedDict state tracks requirements, code snippets, and approval status across the lifecycle.
Persistence: MemorySaver provides "Time Travel" capabilities: the graph can be saved and resumed without losing context.
HITL: interrupt_before on a dedicated human_approval node forces the AI to stop and wait for manual verification.
Feedback injection: update_state injects comments and "rewinds" the execution pointer.
Path control: as_node in state updates manually triggers specific paths in the graph's logic.
The transition from "Scripting" to "Graph Engineering" is the bridge to production AI. By using Checkpoints and Interrupts, I've moved away from a "Black Box" agent toward a transparent, auditable system where a human can steer the AI's logic in real time. This level of control is what separates a prototype from a professional AI product.
Goal: To build a self-correcting "Social Media Manager" using the ReAct (Reasoning + Acting) pattern and DuckDuckGo search integration.
I moved beyond static generation by giving the agent "hands" to fetch real-time data.
State: Annotated[list, add_messages] ensures the agent maintains a continuous memory of its research findings.
Bug fix: Resolved ValueError: contents are required by ensuring every tool request is preceded by a HumanMessage to satisfy Gemini's strict turn-taking requirements.
Routing: tools_condition from LangGraph's prebuilt library manages the handoff between the LLM and the search tool.
The most difficult part of Tool Augmentation isn't the API call; it's State Management. Ensuring the agent "remembers" the tool's output and doesn't get caught in an infinite loop requires precise control over the message history and the use of message reducers.
Goal: To transition from "black box" development to a data-driven engineering workflow by implementing full-stack observability and automated testing with LangSmith.
I moved from simply running code to "auditing" every decision the LLM makes through a dedicated observability pipeline.
Manual Graph Construction: Avoided deprecated agent executors to build a raw StateGraph. This allows for a granular view in LangSmith, where each node (Agent vs. Tools) is timestamped and tracked individually.
Explicit Router Logic: Instead of using "magic" prebuilt conditions, I mapped the tools_condition to explicit END and tools edges. This ensures the trace accurately reflects the branching logic of the ReAct pattern.
The "Golden" Dataset: Captured successful traces and converted them into a version-controlled benchmark. This creates a "ground truth" that the agent must satisfy even as the underlying model or prompts change.
LLM-as-a-Judge: Implemented an automated evaluation script (eval_test.py) using Gemini 2.5 Flash as a "Judge" to grade the performance of Gemini 3 Flash. This provides a quantitative "Relevance" score for every run.
Path-Aware Dotenv Loading: Solved directory-scoping issues by implementing explicit pathing for .env files, ensuring that the tracing configuration is active regardless of where the script is executed.
The real shift in Day 22 was realizing that AI Engineering is 20% prompting and 80% evaluation. Without observability, you are just "vibing" with your prompts. By building a baseline dataset and an automated judge, I can now mathematically prove if a prompt change actually improves the system or just changes the style.
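The evaluation loop can be sketched with a deterministic stub in place of the Gemini judge; the scoring heuristic, dataset shape, and function names here are all illustrative assumptions:

```python
# Sketch of the LLM-as-a-Judge evaluation loop. The judge is a deterministic
# stub standing in for the Gemini grading call; all names are illustrative.

def stub_judge(question: str, answer: str) -> int:
    """Toy relevance score 1-10: reward answers that echo question keywords."""
    keywords = set(question.lower().split())
    overlap = keywords & set(answer.lower().split())
    return min(10, 2 * len(overlap)) or 1

def evaluate(dataset: list[dict], agent) -> float:
    """Run the agent over a golden dataset and average the judge's scores."""
    scores = [stub_judge(ex["question"], agent(ex["question"])) for ex in dataset]
    return sum(scores) / len(scores)

golden = [{"question": "what is langgraph"}, {"question": "what is rag"}]
echo_agent = lambda q: q  # placeholder agent that parrots the question
print(evaluate(golden, echo_agent))
```

Swapping the stub for a real model call turns the same harness into a regression test: a prompt change either moves the average score or it doesn't.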
Goal: Today’s focus was transitioning from an autonomous agent to a steerable, safe agent. I implemented a system that pauses for human intervention and utilizes hard-coded guardrails to prevent tool-calling hallucinations or policy violations.
Unlike a standard ReAct loop, this workflow introduces a Stateful Checkpointer and an Interrupt Node.
Checkpointer: InMemorySaver enables state persistence, allowing the graph to be paused and resumed across different sessions using a thread_id.
Interrupt: The interrupt() function halts execution before sensitive tool calls (Social Media posting).
Guardrail (output_guardrail): Filters LLM-generated tool arguments for banned keywords before they ever reach the user for approval.
The agent now follows a "Trust but Verify" model. When it decides to use a tool, it doesn't execute immediately. Instead, it enters a human_approval node that triggers an __interrupt__.
I implemented a safety filter that acts as a "hard law" the LLM cannot bypass. If the agent tries to post about "crypto-scams" or "spam," the guardrail triggers an automatic rejection.
Safe Execution: Valid posts require a "yes" to proceed.
Auto-Rejection: Banned words trigger an immediate "Rewrite" loop without human effort.
Audit Trail: LangSmith's Trace Tree shows the exact gap where the human review occurred.
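The guardrail's decision logic can be sketched as a pure function; the banned list and argument shape below are illustrative assumptions, not the repo's actual code:

```python
# Sketch of the hard guardrail: scan proposed tool arguments for banned
# topics before they reach human review. The banned list is illustrative.
BANNED = {"crypto-scams", "spam"}

def output_guardrail(tool_args: dict) -> tuple[str, str]:
    """Return ("approve" | "rewrite", reason) for a proposed social post."""
    text = tool_args.get("post_text", "").lower()
    for word in BANNED:
        if word in text:
            return "rewrite", f"banned keyword: {word}"
    return "approve", "awaiting human 'yes'"

print(output_guardrail({"post_text": "Big news about crypto-scams!"}))
print(output_guardrail({"post_text": "Shipping notes from Day 23."}))
```

Because the check runs on tool arguments rather than model text, it is a "hard law" the LLM cannot talk its way around.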
Today I implemented Entity Memory, moving beyond linear chat history to building a persistent, structured profile of the user.
I developed two versions to compare data handling:
Goal: Today I implemented a CFO for my AI. In production, using a high-reasoning model for a simple "Hello" is a waste of resources. I built an Intelligent Router that tiers LLM workloads based on task complexity.
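The tiering idea can be sketched with a cheap stand-in classifier; the model names and the heuristic below are illustrative assumptions, not the repo's router:

```python
# Sketch of the tiered "CFO" router: a cheap classifier labels the query,
# then the state's complexity key picks which model tier gets billed.
# Model names and the heuristic classifier are illustrative stand-ins.

def classify(query: str) -> str:
    """Stand-in for the small router LLM: long or analytic queries are 'complex'."""
    hard_markers = ("why", "compare", "design", "prove")
    if len(query.split()) > 12 or any(m in query.lower() for m in hard_markers):
        return "complex"
    return "easy"

MODELS = {"easy": "small-flash-model", "complex": "large-reasoning-model"}

def route(state: dict) -> dict:
    state["complexity"] = classify(state["query"])
    state["model"] = MODELS[state["complexity"]]  # large tier billed only when needed
    return state

print(route({"query": "Hello there"})["model"])
print(route({"query": "Compare RAG and fine-tuning for domain QA"})["model"])
```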
The system acts as a traffic controller, directing queries to the most cost-effective "Specialist":
Classification: Queries are labeled easy or complex.
Routing: add_conditional_edges physically routes the state to different specialist nodes.
Billing control: The complexity key in the state dictionary dictates the path, ensuring the "Large" model is only billed when necessary.
Goal: Today I transitioned the LangGraph agent from a local script into a multi-service API architecture. I built a custom FastAPI backend for client interactions and a LangServe instance for developer tools.
I decoupled the core logic from the interface to allow for independent scaling and testing:
agent.py: The "Brain." Contains the LangGraph definition and the logic for clearing message history.
main.py: The "Client API." A custom FastAPI instance running on Port 8000 for standard user requests.
main_langserve.py: The "Developer API." A LangServe instance running on Port 8001 for native streaming and a visual Playground.
I successfully deployed two parallel FastAPI applications to isolate development tools from user traffic:
Client endpoints: POST /chat and DELETE /chat.
Developer endpoint: /agent/playground for real-time visual debugging.
I implemented a "Nuclear Reset" function that wipes conversation history from the checkpointer.
Deletion: RemoveMessage targets and deletes specific message IDs, leaving "Memory Status: 0 messages remaining."
Multi-user support: Integrated thread_id into all API calls to ensure the agent can maintain isolated conversation states for multiple users simultaneously.
| Endpoint | Method | Action |
| :--- | :--- | :--- |
| http://localhost:8000/chat | POST | Sends a message to the agent using a unique thread_id. |
| http://localhost:8000/chat/{id} | DELETE | Wipes the entire memory for a specific user ID. |
| http://localhost:8001/agent/playground | GET | Opens the visual UI to watch the agent execute graph nodes. |
Goal: Today I solved the "Long Wait" problem. Instead of making users wait for the entire AI completion, I implemented Server-Sent Events (SSE) to stream tokens in real-time.
FastAPI StreamingResponse: Configured the API to hold an open connection using the text/event-stream media type.
astream_events (v2): Leveraged LangGraph's event-driven streaming to filter for on_chat_model_stream events, ensuring only raw LLM content is sent to the UI.
Asynchronous Generators: Used async for and yield to push data chunks without blocking the server.
Backend: Used astream_events(version="v2") to intercept LLM tokens.
Data Extraction: Handled Gemini's multimodal chunk format ([{'text': '...'}]) by extracting the raw string in the FastAPI generator.
Protocol: Implemented Server-Sent Events (SSE) with StreamingResponse.
Frontend: Built a JavaScript consumer that uses fetch, Reader, and JSON.parse to decode and append text chunks to the UI dynamically.
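The data-extraction and framing steps above can be sketched in pure Python; the chunk shapes mirror the `[{'text': '...'}]` format mentioned earlier, and the helper names are illustrative:

```python
# Sketch of the SSE path: normalize Gemini-style multimodal chunks
# ([{'text': ...}]) into raw strings, then wrap them as text/event-stream frames.

def extract_text(chunk) -> str:
    """Handle both plain-string content and list-of-blocks content."""
    if isinstance(chunk, str):
        return chunk
    return "".join(block.get("text", "") for block in chunk)

def sse_frames(chunks) -> list[str]:
    # Each SSE event is "data: <payload>\n\n"; clients split on blank lines.
    return [f"data: {extract_text(c)}\n\n" for c in chunks]

stream = [[{"text": "Hel"}], [{"text": "lo"}], " world"]
print("".join(sse_frames(stream)))
```

In the real deployment, a generator yielding these frames is handed to FastAPI's StreamingResponse with the text/event-stream media type.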
Goal: Today I kicked off the Final Capstone Project. I transitioned from a basic chatbot to a functional Autonomous Agent that can browse the live web, research companies, and draft personalized outreach emails.
Tool binding (.bind_tools()): Allows the LLM to autonomously decide when it needs external data.
State: MessagesState with add_messages ensures the agent "remembers" its research findings while drafting the final email.
Routing: A should_continue logic gate inspects LLM outputs for tool_calls and routes the flow between the "Brain" (Model) and the "Hands" (Tools).
Reasoning Node (call_model): Injected a specialized SystemMessage to define the agent's persona as a Corporate Researcher, ensuring it follows a "Research → Analyze → Draft" workflow.
Execution Node (call_tool): Developed a manual execution node that iterates through LLM-generated search queries, fetches live 2026 data, and returns ToolMessage objects to the graph.
Streaming: Surfaced on_tool_start events, allowing the UI to show a "🔍 Searching..." status indicator during web latency.
Architecture: Separated the agent logic (agent.py) from the delivery layer (main.py) for a professional, modular architecture.
Goal: Today I finalized the Capstone by building a professional-grade frontend and a secondary "Auditor" agent. I transitioned from a basic HTML interface to a Reactive Streamlit Dashboard that features real-time token streaming and an automated quality guardrail system.
Streaming UI: st.chat_message and st.empty handle live-streaming text blocks directly from the backend.
Robust parsing: .find() and .rfind() extract structured data from LLM responses, ensuring the UI never crashes on "chatty" AI outputs.
Dashboard (app_ui.py): Designed a dashboard with a persistent Sidebar Report Card, providing users with instant transparency into the agent's performance metrics (1-10 scale).
Auditor (evaluate_output): Engineered a specialized evaluation function that operates at temperature=0.1 to provide objective, consistent grading of partnership emails.
Bridging: requests.post(stream=True) bridges the gap between the FastAPI /chat endpoint and the Streamlit frontend, maintaining a low-latency user experience.
Error handling: Failures surface as st.error or st.warning messages instead of crashing.
Developed by Makarand Thorat
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_REPOS",
"generatedAt": "2026-04-17T00:07:38.581Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Makarand Thorat",
"href": "https://github.com/makarand-thorat/AI-Engineering-Sprint",
"sourceUrl": "https://github.com/makarand-thorat/AI-Engineering-Sprint",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:21.014Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:21.014Z",
"isPublic": true
},
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-makarand-thorat-ai-engineering-sprint/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]