Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
AI-powered automation system that transforms natural-language requirements into production-ready Kubernetes manifests through three specialized CrewAI agents. The system can create and edit manifests, retrieve knowledge using RAG with late chunking, perform web search, and run static and runtime validation of the configuration files. KubernetesCrew (TFM) is an AI-powered DevOps automation system: the repository contains a DevOps automation assistant that leverages AI agents, vector databases, and comprehensive tooling to streamline infrastructure management and operations, designed for Kubernetes as part of the author's Master's Thesis in Applied Artificial Intelligence. The repo is indexed with DeepWiki, so you can ask questions about it there. Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
KubernetesCrew is best for CrewAI-based, multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require deterministic execution, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, github-repos, runtime-metrics, public-facts-pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Thesov
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Setup snapshot
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Thesov
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
python
bash
git clone https://github.com/TheSOV/TFM
cd TFM
poetry install
bash
cp .env.example .env
bash
# OpenAI API (required for AI agents)
OPENAI_API_KEY=your_openai_api_key_here
# Alternative: OpenRouter API (on roadmap)
OPENROUTER_API_KEY=your_openrouter_key
OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"
# Model Configuration
AGENT_MAIN_MODEL="openai/gpt-4.1-mini" # model used by the agents during reasoning and to answer the user
AGENT_TOOL_CALL_MODEL="openai/gpt-4.1-mini" # model used by the agents during tool calls
TOOL_MODEL="openai/gpt-4.1-mini" # model used by the tools (in most cases, to summarize raw information gathered by the RAG and web research tools)
GUARDRAIL_MODEL="openai/gpt-4.1-nano" # model used by the guardrails (to validate the agents' responses)
bash
# Weaviate Vector Database
WEAVIATE_API_KEY=your_secure_weaviate_key
WEAVIATE_HOST="127.0.0.1"
WEAVIATE_PORT="8080"
WEAVIATE_GRPC_PORT="50051"
bash
# Embedding Model Configuration
LATE_CHUNKING_MODEL_NAME="jinaai/jina-embeddings-v3" # model that will be used to generate embeddings for the knowledge
LATE_CHUNKING_HEADERS_TO_SPLIT_ON=[("#", "h1"), ("##", "h2")] # headers to split on for markdown files
LATE_CHUNKING_MAX_CHUNK_CHARS=2048 # maximum chunk size in characters
LATE_CHUNKING_DEVICE="cuda" # or "cpu", "cuda" is recommended for GPU acceleration
# Knowledge Ingestion
INGEST_KNOWLEDGE_SUMMARY_MODEL="gpt-4.1-nano" # model that will be used to generate summaries for the knowledge
INGEST_KNOWLEDGE_CONFIG_PATH="config/knowledge/knowledge.yaml" # path to the knowledge ingestion configuration file
INGEST_KNOWLEDGE_OVERRIDE_COLLECTION=True # override the collection if it already exists. When enabled, the ingestion process checks at the beginning whether the collection already exists and, if it does, deletes and recreates it. This check happens before ingestion starts, so if multiple knowledge sources are configured with the same collection name, they will still be merged.
bash
# Working Directories
TEMP_FILES_DIR="temp" # directory where the Kubernetes YAML files are stored during the assistant's execution
CONFIG_FILES_DIR="config" # directory where configuration files are stored
CREWAI_STORAGE_DIR="./memory" # directory where CrewAI will store its memory (on roadmap)
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB REPOS
Editorial quality
ready
The repository contains a DevOps automation assistant that leverages AI agents, vector databases, and comprehensive tooling to streamline infrastructure management and operations, designed for Kubernetes as part of the final project of my Master's Thesis in Applied Artificial Intelligence. This repo is indexed with DeepWiki; you can ask questions about it here.
Clone the repository and install dependencies:
git clone https://github.com/TheSOV/TFM
cd TFM
poetry install
Copy the example environment file and configure your settings:
cp .env.example .env
Edit .env with your configuration:
# OpenAI API (required for AI agents)
OPENAI_API_KEY=your_openai_api_key_here
# Alternative: OpenRouter API (on roadmap)
OPENROUTER_API_KEY=your_openrouter_key
OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"
# Model Configuration
AGENT_MAIN_MODEL="openai/gpt-4.1-mini" # model that will be used by the agents during reasoning and to answer the user
AGENT_TOOL_CALL_MODEL="openai/gpt-4.1-mini" # model that will be used by the agents during tool calls
TOOL_MODEL="openai/gpt-4.1-mini" # model that will be used by the tools (in most cases, to summarize raw information gathered by the RAG and Web Research tools)
GUARDRAIL_MODEL="openai/gpt-4.1-nano" # model that will be used by the guardrails (to validate the agents' responses)
# Weaviate Vector Database
WEAVIATE_API_KEY=your_secure_weaviate_key
WEAVIATE_HOST="127.0.0.1"
WEAVIATE_PORT="8080"
WEAVIATE_GRPC_PORT="50051"
# Embedding Model Configuration
LATE_CHUNKING_MODEL_NAME="jinaai/jina-embeddings-v3" # model that will be used to generate embeddings for the knowledge
LATE_CHUNKING_HEADERS_TO_SPLIT_ON=[("#", "h1"), ("##", "h2")] # headers to split on for markdown files
LATE_CHUNKING_MAX_CHUNK_CHARS=2048 # maximum chunk size in characters
LATE_CHUNKING_DEVICE="cuda" # or "cpu", "cuda" is recommended for GPU acceleration
# Knowledge Ingestion
INGEST_KNOWLEDGE_SUMMARY_MODEL="gpt-4.1-nano" # model that will be used to generate summaries for the knowledge
INGEST_KNOWLEDGE_CONFIG_PATH="config/knowledge/knowledge.yaml" # path to the knowledge ingestion configuration file
INGEST_KNOWLEDGE_OVERRIDE_COLLECTION=True # override the collection if it already exists. When enabled, the ingestion process checks at the beginning whether the collection already exists and, if it does, deletes and recreates it. This check happens before ingestion starts, so if multiple knowledge sources are configured with the same collection name, they will still be merged.
# Working Directories
TEMP_FILES_DIR="temp" # directory where the kubernetes YAML files will be stored during the assistant's execution
CONFIG_FILES_DIR="config" # directory where configuration files are stored
CREWAI_STORAGE_DIR="./memory" # directory where CrewAI will store its memory (on roadmap)
# kubectl Setup
KUBECTL_PATH="kubectl" # or full path on Windows
KUBECTL_ALLOWED_VERBS="get,describe,logs,apply,diff,delete,create,patch,exec,cp,rollout,scale" # verbs allowed to be used by the assistant
KUBECTL_SAFE_NAMESPACES="" # comma-separated list of all safe namespaces, leave empty to allow all namespaces
KUBECTL_DENIED_NAMESPACES="kube-system,kube-public" # comma-separated list of all denied namespaces
KUBECTL_DENY_FLAGS="--raw,--kubeconfig,--context,-ojsonpath,--output" # comma-separated list of all denied flags
K8S_VERSION="v1.29.0" # kubernetes version targeted
# Web Research APIs
STACK_EXCHANGE_API_KEY=your_stack_exchange_key # stack exchange API key
BRAVE_API_KEY=your_brave_search_key # brave search API key
POPEYE_PATH="/path/to/popeye" # Kubernetes cluster scanner
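The KUBECTL_* settings above describe an allow/deny guardrail: only listed verbs may run, certain namespaces and flags are blocked. A minimal sketch of how such a filter could gate a kubectl argument list follows; the function name `is_allowed` and the exact matching logic are illustrative assumptions, not code from the repository.

```python
# Guardrail values taken from the environment settings above.
ALLOWED_VERBS = {"get", "describe", "logs", "apply", "diff", "delete",
                 "create", "patch", "exec", "cp", "rollout", "scale"}
DENIED_NAMESPACES = {"kube-system", "kube-public"}
DENY_FLAGS = {"--raw", "--kubeconfig", "--context", "-ojsonpath", "--output"}

def is_allowed(args):
    """Return True if a kubectl argument list passes the guardrails (hypothetical check)."""
    if not args or args[0] not in ALLOWED_VERBS:
        return False                       # verb must be on the allow-list
    if any(a in DENY_FLAGS for a in args):
        return False                       # no denied flags anywhere
    for flag in ("-n", "--namespace"):     # reject explicit denied namespaces
        if flag in args:
            i = args.index(flag)
            if i + 1 < len(args) and args[i + 1] in DENIED_NAMESPACES:
                return False
    return True

print(is_allowed(["get", "pods", "-n", "default"]))      # True
print(is_allowed(["logs", "-n", "kube-system", "pod"]))  # False
print(is_allowed(["get", "pods", "--raw"]))              # False
```

Note that a real implementation would also need to handle combined forms such as `--namespace=kube-system` and `-o` variants before execution.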
The config/knowledge/knowledge.yaml file defines how the knowledge ingestion system processes documents for the RAG (Retrieval-Augmented Generation) system. It defines collections of documents that will be ingested into the Weaviate vector database.
Each collection in the YAML file must have the following structure:
- name: The Weaviate collection identifier. If you use the same name for multiple collections, they are merged into a single collection; different names create separate collections. When using the RAG system, collections isolate information, forcing each query to run over one collection at a time.
- description: Metadata describing the collection's content. Useful when multiple collections are defined, letting the assistant know what information each collection contains.
- dirs: List of directories to scan for documents. Directories are scanned recursively.
- rules: Processing rules for file filtering and handling.

The rules section controls how files are processed during ingestion:
- include: Array of file extensions to process (e.g., ["md"], ["yaml", "yml"], ["adoc"])
- exclude: Array of file extensions to skip (typically empty [])
- min_length: Minimum file size in characters (-1 for unlimited)
- max_length: Maximum file size in characters (-1 for unlimited)
- generate_summary: Boolean flag controlling whether to generate LLM summaries, which are added as a comment at the beginning of the file. Use true for code-only files to add context, false for files that are self-documenting.

The ingestion system handles different file types with specialized chunking strategies: markdown files, for example, are split with MarkdownHeaderTextSplitter, and code-only files can set generate_summary: true for context. Here's how to configure a new knowledge source:
collections:
- name: "knowledge"
description: "Custom documentation collection about Kubernetes"
dirs:
- "knowledge\\custom\\docs"
- "knowledge\\custom\\docs2"
rules:
include: ["md", "rst"]
exclude: []
min_length: 100
max_length: -1
generate_summary: false
The ingestion process will scan the specified directories, apply the filtering rules, and process matching files according to their type-specific chunking strategy before storing them in the Weaviate vector database. To begin the ingestion process, run the ingest_knowledge.py script.
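The filtering step described above can be sketched as a small predicate over the `rules` keys from the YAML (include/exclude extensions, min/max length). The function name `matches_rules` is a hypothetical illustration of the described behavior, not the repository's actual implementation.

```python
def matches_rules(filename, text, rules):
    """Apply a collection's filtering rules (as described in knowledge.yaml) to one file."""
    ext = filename.rsplit(".", 1)[-1] if "." in filename else ""
    if rules["include"] and ext not in rules["include"]:
        return False                                   # extension not on the include list
    if ext in rules["exclude"]:
        return False                                   # extension explicitly excluded
    if rules["min_length"] != -1 and len(text) < rules["min_length"]:
        return False                                   # file too short
    if rules["max_length"] != -1 and len(text) > rules["max_length"]:
        return False                                   # file too long
    return True

# Rules matching the example collection above.
rules = {"include": ["md", "rst"], "exclude": [],
         "min_length": 100, "max_length": -1}
print(matches_rules("intro.md", "x" * 500, rules))   # True
print(matches_rules("short.md", "x" * 10, rules))    # False (below min_length)
print(matches_rules("notes.txt", "x" * 500, rules))  # False (extension not included)
```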
For CUDA support (GPU acceleration):
poetry install --with cu118
To run the application, execute:
python main.py
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/trust"
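These endpoints can also be called programmatically. The sketch below applies the retry policy this profile publishes in its Invocation Guide (3 attempts, 500/1500/3500 ms backoff, retry on HTTP 429/503 and timeouts); the `fetch` callable is injected so the logic can be demonstrated with a fake transport rather than live network access, and `fetch_with_retry` is a hypothetical helper, not a published client.

```python
import time

BACKOFF_MS = [500, 1500, 3500]   # from the published retryPolicy
RETRYABLE = {429, 503}           # HTTP_429 / HTTP_503

def fetch_with_retry(url, fetch, max_attempts=3):
    """Call fetch(url) -> (status, body), retrying retryable statuses with backoff."""
    for attempt in range(max_attempts):
        status, body = fetch(url)
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(BACKOFF_MS[attempt] / 1000.0)
    return status, body

# Fake transport: fail with 503 twice, then succeed.
calls = []
def flaky_fetch(url):
    calls.append(url)
    return (503, "") if len(calls) < 3 else (200, '{"ok": true}')

status, body = fetch_with_retry(
    "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/snapshot",
    flaky_fetch)
print(status, len(calls))  # 200 3
```

Swapping `flaky_fetch` for a real HTTP call (e.g. via urllib) yields the same behavior against the live endpoints.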
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_REPOS",
"generatedAt": "2026-04-16T23:34:06.091Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
]
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Thesov",
"href": "https://github.com/TheSOV/KubernetesCrew",
"sourceUrl": "https://github.com/TheSOV/KubernetesCrew",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T05:06:59.482Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T05:06:59.482Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-thesov-kubernetescrew/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]