Crawler Summary
Turn any documentation into an AI-searchable knowledge base with MCP integration, vector search, and a CLI for ingestion. Write your content in markdown, publish it to a database, and let anyone on your team query it through AI tools like Claude, Cursor, or Slack, all powered by the Model Context Protocol. Capability contract not published. No trust telemetry is available yet. 23 GitHub stars reported by the source. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
company-docs-mcp is best for CLI-driven documentation workflows where MCP compatibility matters.
Not Ideal For
Capability contract metadata is missing, so deterministic execution cannot be guaranteed.
Evidence Sources Checked
editorial-content, GITHUB MCP, runtime-metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Trust score
Unknown
Compatibility
MCP
Freshness
Feb 25, 2026
Vendor
Southleft
Artifacts
0
Benchmarks
0
Last release
1.3.1
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 23 GitHub stars reported by the source. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/southleft/company-docs-mcp.git
Setup complexity is MEDIUM. Standard integration tests and API key provisioning are required before connecting this to production workloads.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Southleft
Protocol compatibility
MCP
Adoption signal
23 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
mermaid
flowchart TD
A["Your Markdown Files"]
B["Cloudflare Workers AI"]
C[("Supabase")]
D["Your Team<br/>Claude, Cursor, Slack, Chat UI"]
E["Cloudflare Worker"]
A -- "ingest + publish" --> B
B -- "store vectors" --> C
D -- "ask a question" --> E
E -- "vector search" --> C
C -. "matching docs" .-> E
E -. "answers" .-> D
style A fill:#f9f9f9,stroke:#333,color:#333
style B fill:#dbeafe,stroke:#1d4ed8,color:#333
style C fill:#d4edda,stroke:#155724,color:#333
style D fill:#f0fdf4,stroke:#15803d,color:#333
style E fill:#dbeafe,stroke:#1d4ed8,color:#333
bash
npm install company-docs-mcp
bash
npx wrangler login
env
# Supabase — where your documentation is stored
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=eyJ...
SUPABASE_SERVICE_KEY=eyJ...
# Cloudflare — your Account ID (from Step 3)
CLOUDFLARE_ACCOUNT_ID=your-account-id
text
docs/
├── onboarding/
│ ├── new-hire-checklist.md
│ └── tools-and-access.md
├── engineering/
│ ├── deployment-guide.md
│ └── code-review-process.md
├── policies/
│ ├── pto-policy.md
│ └── expense-guidelines.md
└── product/
├── feature-specs.md
└── release-process.md
markdown
---
title: Deployment Guide
category: engineering
tags: [deploy, ci-cd, release]
description: How to deploy to production
---
# Deployment Guide
Your content here...
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB MCP
Editorial quality
ready
Turn any documentation into an AI-searchable knowledge base. Write your content in markdown, publish it to a database, and let anyone on your team query it through AI tools like Claude, Cursor, or Slack — all powered by the Model Context Protocol.
There are two distinct roles when working with Company Docs MCP. Most people on your team only need the first one.
You just need a URL. No accounts, no installation, no terminal commands.
https://company-docs-mcp.example.workers.dev/mcp
That's it. Cloudflare, Supabase, and the CLI are only needed by the person who sets up and maintains the server.
The rest of this README is for you. Follow the setup guide below to get everything running.
The system uses three services. All three offer free tiers that are sufficient for most teams.
flowchart TD
A["Your Markdown Files"]
B["Cloudflare Workers AI"]
C[("Supabase")]
D["Your Team<br/>Claude, Cursor, Slack, Chat UI"]
E["Cloudflare Worker"]
A -- "ingest + publish" --> B
B -- "store vectors" --> C
D -- "ask a question" --> E
E -- "vector search" --> C
C -. "matching docs" .-> E
E -. "answers" .-> D
style A fill:#f9f9f9,stroke:#333,color:#333
style B fill:#dbeafe,stroke:#1d4ed8,color:#333
style C fill:#d4edda,stroke:#155724,color:#333
style D fill:#f0fdf4,stroke:#15803d,color:#333
style E fill:#dbeafe,stroke:#1d4ed8,color:#333
| Service | What it does | Why it's needed | |---------|-------------|-----------------| | Cloudflare | Hosts your server and converts text into searchable vectors using its built-in AI | This is where your server runs 24/7 so your team can query docs at any time. It also handles the AI processing that makes semantic search possible — no separate AI subscription needed. | | Supabase | Stores your documentation in a PostgreSQL database with vector search | Powers "smart" search — asking "how do I deploy?" will find documents about releases, CI/CD, and shipping, not just pages containing the word "deploy." | | npm package | A command-line tool that reads your markdown and publishes it to the database | You run this on your computer whenever you add or update documentation. |
No third-party AI API keys are required. Cloudflare provides the AI capabilities through its Workers AI service, which is included with every Cloudflare account at no extra cost.
Before starting, create free accounts on these two services:
That's it. No OpenAI, Anthropic, or Google API keys needed.
Follow these steps in order. Each one builds on the previous.
Open your terminal in the project where your documentation lives and run:
npm install company-docs-mcp
This downloads the CLI tool to your project. No external services are contacted yet.
Your documentation needs a database to store content and make it searchable.
From your Supabase project, copy three values: the Project URL (e.g. https://abc123.supabase.co), the anon key (eyJ...), and the service key (eyJ... — keep this private). Then open the SQL Editor, paste the contents of database/schema.sql, and click Run. This creates the database tables and search functions the system uses.
The schema file is included in the npm package at
node_modules/company-docs-mcp/database/schema.sql.
The CLI needs access to Cloudflare's AI service to convert your documentation into searchable vectors. The simplest way to connect is through the Wrangler CLI (Cloudflare's command-line tool, included with this package).
Run this command:
npx wrangler login
A browser window will open asking you to log in to your Cloudflare account and grant permission. Click Allow and return to your terminal.
You also need your Cloudflare Account ID:
That's the only Cloudflare setup needed for publishing. The CLI automatically detects the login credentials that wrangler login saved to your computer.
Token expiration: The login session expires periodically. If you see an authentication error when publishing, just run
npx wrangler login again.
Create a file called .env in your project root with these values:
# Supabase — where your documentation is stored
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=eyJ...
SUPABASE_SERVICE_KEY=eyJ...
# Cloudflare — your Account ID (from Step 3)
CLOUDFLARE_ACCOUNT_ID=your-account-id
Replace the placeholder values with the ones you copied from Supabase (Step 2) and Cloudflare (Step 3).
Keep this file private. Never commit
.env to version control — it contains credentials. Add .env to your .gitignore file.
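That .gitignore step can be sketched as a one-liner; it is safe to re-run because it only appends when the entry is missing:

```shell
# Idempotently add .env to .gitignore (creates the file if it doesn't exist)
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```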
Create markdown files in a directory. Any folder structure works:
docs/
├── onboarding/
│ ├── new-hire-checklist.md
│ └── tools-and-access.md
├── engineering/
│ ├── deployment-guide.md
│ └── code-review-process.md
├── policies/
│ ├── pto-policy.md
│ └── expense-guidelines.md
└── product/
├── feature-specs.md
└── release-process.md
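If you are starting from scratch, a layout like the one above can be scaffolded in one go. The directory and file names below are examples only, not required by the tool:

```shell
# Hypothetical starter scaffold mirroring the example layout
mkdir -p docs/onboarding docs/engineering docs/policies docs/product
printf -- '---\ntitle: New Hire Checklist\ncategory: onboarding\n---\n' \
  > docs/onboarding/new-hire-checklist.md
```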
You can optionally add YAML frontmatter to control how each document is categorized:
---
title: Deployment Guide
category: engineering
tags: [deploy, ci-cd, release]
description: How to deploy to production
---
# Deployment Guide
Your content here...
If you don't include frontmatter, the system will auto-detect a category and extract tags from the content.
Two commands turn your markdown files into a searchable knowledge base:
# Step 1: Parse markdown files into structured entries
npx company-docs ingest markdown --dir=./docs
# Step 2: Push entries to the database with AI-generated vectors
npx company-docs publish
What happens:
- ingest markdown reads your files, extracts titles and sections, and saves structured entries to a content/entries/ folder in your project.
- publish sends each entry to Cloudflare's AI to generate search vectors, then stores everything in your Supabase database. A content hash automatically skips entries that haven't changed, so re-running is fast.
To preview what would be published without actually writing to the database:
npx company-docs publish --dry-run
Updating documentation: Whenever you edit your markdown files, run both commands again. Only changed entries are re-processed.
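The skip-unchanged behavior boils down to comparing a content hash against the last published one. A hedged sketch of the idea (the real CLI tracks this per entry internally; file names here are examples):

```shell
# Hash the file and compare to the previously recorded hash
printf 'hello docs\n' > guide.md
new_hash=$(cksum < guide.md | awk '{print $1}')
old_hash=$(cat .guide.hash 2>/dev/null || echo none)
if [ "$new_hash" = "$old_hash" ]; then
  echo "guide.md unchanged, skipping"
else
  echo "$new_hash" > .guide.hash    # record the new hash
  echo "re-publishing guide.md"
fi
```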
The server is what runs 24/7 and handles search queries from your team's AI tools. It's deployed as a Cloudflare Worker.
git clone https://github.com/southleft/company-docs-mcp.git
cd company-docs-mcp
npm install
Edit wrangler.toml with your organization name:
name = "company-docs-mcp"
main = "src/index.ts"
compatibility_date = "2024-01-01"
compatibility_flags = ["nodejs_compat"]
[ai]
binding = "AI"
[vars]
ORGANIZATION_NAME = "Your Organization"
VECTOR_SEARCH_ENABLED = "true"
VECTOR_SEARCH_MODE = "vector"
The Worker caches recent search results to keep things fast. Run this command to create the cache:
npx wrangler kv namespace create CONTENT_CACHE
It will print an ID. Add it to wrangler.toml:
[[kv_namespaces]]
binding = "CONTENT_CACHE"
id = "the-id-that-was-printed"
These are stored securely as encrypted secrets — they never appear in plain text in the dashboard or config files.
echo "your-supabase-url" | npx wrangler secret put SUPABASE_URL
echo "your-anon-key" | npx wrangler secret put SUPABASE_ANON_KEY
echo "your-service-key" | npx wrangler secret put SUPABASE_SERVICE_KEY
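One caveat about the echo pipes above: echo appends a trailing newline to whatever it prints. Wrangler may well trim it, but if you want the secret passed byte-for-byte, printf '%s' is the defensive choice:

```shell
# echo appends a trailing newline; printf '%s' does not
echo "secret" | wc -c          # 7 bytes (6 characters plus newline)
printf '%s' "secret" | wc -c   # 6 bytes
# byte-exact variant of the commands above:
# printf '%s' "your-supabase-url" | npx wrangler secret put SUPABASE_URL
```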
Make sure you're logged in (you should be from Step 3 — if not, run npx wrangler login again), then:
npm run deploy
Your server is now live at https://company-docs-mcp.<your-subdomain>.workers.dev.
Share this URL with your team:
https://company-docs-mcp.<your-subdomain>.workers.dev/mcp
Claude: Settings > Connectors > Add custom connector > paste the URL.
Cursor / Windsurf / Other MCP clients: Add the URL as a remote MCP server in your client's settings.
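Config shapes vary by client and version, so treat this as a hedged example: file-based clients in the Cursor style typically register a remote server with a JSON entry like the following, where the "company-docs" key is an arbitrary label you choose:

```json
{
  "mcpServers": {
    "company-docs": {
      "url": "https://company-docs-mcp.<your-subdomain>.workers.dev/mcp"
    }
  }
}
```

Check your client's documentation for the exact file location and field names.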
Once connected, your AI tool will have access to these search tools:
| Tool | What it does |
|------|-------------|
| search_documentation | Finds documentation that matches your question using semantic search |
| search_chunks | Searches specific sections within documents |
| browse_by_category | Lists all documentation in a category (categories come from your markdown frontmatter or the --category flag) |
| get_all_tags | Lists every tag used across your documentation |
Since Cloudflare appears in several steps, here's a plain-language summary of what it does and when:
| When | What Cloudflare does | How it's accessed |
|------|---------------------|-------------------|
| Publishing docs (Step 6) | Converts your text into numerical vectors that enable semantic search | CLI calls the Cloudflare REST API using your wrangler login credentials |
| Running the server (Step 7+) | Hosts the always-on server that your team queries; generates vectors for incoming questions | Built-in — no API keys needed at runtime |
Is Cloudflare optional? No — it's required for both publishing and hosting. However, the free tier is more than sufficient and no separate AI subscription is needed. The only setup required is creating an account and running npx wrangler login.
company-docs <command> [options]
| Command | Description |
|---------|-------------|
| ingest markdown | Parse markdown files into content/entries/ |
| publish | Push entries to the database with AI-generated vectors |
| ingest supabase | Same as publish |
| manifest | Generate content/manifest.json (used during Worker deployment) |
| Option | Description | Default |
|--------|-------------|---------|
| --dir, -d | Folder containing your markdown files | ./docs |
| --category, -c | Category label for the content (overrides frontmatter) | documentation |
| --recursive | Include files in subfolders | true |
| --verbose, -v | Show detailed output | false |
| Option | Description |
|--------|-------------|
| --clear | Delete all existing data before publishing (start fresh) |
| --dry-run | Preview what would change without writing to the database |
| --verbose | Show detailed per-entry progress |
# Ingest docs from different folders with different categories
npx company-docs ingest markdown --dir=./docs/engineering --category=engineering
npx company-docs ingest markdown --dir=./docs/policies --category=hr
npx company-docs publish
# Full re-publish from scratch
npx company-docs publish --clear
# Preview changes
npx company-docs publish --dry-run --verbose
Each markdown file can optionally include a YAML frontmatter block at the very top. The system reads these fields:
---
title: Page Title
category: engineering
tags: [deploy, ci-cd, release]
description: A short summary of this page
status: stable
version: 1.0.0
source: src/path/to/source.ts
figma: https://figma.com/...
author: Jane Smith
department: Engineering
---
| Field | Effect |
|-------|--------|
| title | Used as the document title (overrides the first # Heading) |
| category | Sets the browseable category for this document |
| tags | Adds tags for filtering and discovery |
| description | Stored as metadata, returned in search results |
| status | Stored as metadata (e.g., draft, stable, deprecated) |
| version | Stored as metadata |
| source, figma, author, department | Stored as metadata, available in search results |
All fields are optional. If no frontmatter is present, the system auto-detects a category and extracts tags from the content.
Priority order: Frontmatter values take highest priority, followed by CLI flags (like --category), followed by auto-detection.
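That precedence chain can be sketched with shell parameter defaults. The variable names below are illustrative, not the CLI's internals:

```shell
# Frontmatter wins, then the --category flag, then the auto-detected fallback
frontmatter_category=""          # parsed from YAML; empty when absent
flag_category="engineering"      # value of --category; empty when not passed
auto_category="documentation"    # what auto-detection would produce
category="${frontmatter_category:-${flag_category:-$auto_category}}"
echo "$category"                 # prints: engineering
```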
The system is designed for repeated runs — you don't need to start from scratch each time:
# Edit your markdown, then re-publish — only changes are processed
npx company-docs ingest markdown --dir=./docs
npx company-docs publish
The server includes a Slack slash command so team members can search documentation directly from Slack:
/docs deployment process
/docs PTO policy
/docs how to set up staging
See docs/SLACK_SETUP.md for setup instructions.
The server includes a web-based chat UI at its root URL (visit the Worker URL in a browser). It has two modes:
(one mode requires an OPENAI_API_KEY). Customize the chat UI with environment variables in wrangler.toml:
[vars]
ORGANIZATION_NAME = "Your Organization"
ORGANIZATION_LOGO_URL = "https://example.com/logo.svg"
ORGANIZATION_TAGLINE = "Ask anything about our documentation"
See docs/BRANDING.md for full branding options.
By default, the system uses Cloudflare's Workers AI for embeddings (free, no extra keys). If your organization prefers OpenAI, you can switch:
OPENAI_API_KEY=sk-...
EMBEDDING_PROVIDER=openai
| Provider | Model | Dimensions | When to use |
|----------|-------|------------|-------------|
| Workers AI (default) | @cf/baai/bge-large-en-v1.5 | 1024 | Default. No extra keys. Free on Cloudflare. |
| OpenAI | text-embedding-3-small | 1536 | If your organization already standardizes on OpenAI. |
Important: The embedding provider must match the database schema. The default schema.sql uses 1024 dimensions (Workers AI). If switching to OpenAI, change all vector(1024) to vector(1536) in the schema before running it.
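The substitution can be done mechanically. This demonstrates it on a miniature stand-in file; the same sed applies to database/schema.sql:

```shell
# Rewrite every vector(1024) column to vector(1536) before running the schema
printf 'embedding vector(1024),\nquery_embedding vector(1024)\n' > mini-schema.sql
sed 's/vector(1024)/vector(1536)/g' mini-schema.sql > mini-schema-openai.sql
grep -c 'vector(1536)' mini-schema-openai.sql   # prints: 2
```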
No results from search
- Check that npx company-docs publish completed without errors
- Check that .env has the correct Supabase credentials
- Run npx company-docs publish --dry-run to see what entries exist

Authentication error when publishing
- The wrangler login session may have expired — run npx wrangler login again
- Check that CLOUDFLARE_ACCOUNT_ID is set in your .env

Duplicate entries
- Run npx company-docs ingest markdown followed by npx company-docs publish — duplicates are cleaned up automatically

MCP client not connecting
- Make sure you include the /mcp path in the URL (not just the root URL)

Wrangler login not working
- If you have CLOUDFLARE_API_TOKEN set in your environment or .env file, it can interfere with the login flow. Remove or comment it out, then try npx wrangler login again.

When running from the cloned repository (not the npm package), additional ingestion methods are available:
# Crawl a website
npm run ingest:web -- --url=https://docs.example.com
# Import from CSV with URLs
npm run ingest:csv -- urls.csv
# Import a single URL
npm run ingest:url https://example.com/page
# Import PDFs
npm run ingest:pdf ./document.pdf
- Never commit .env files — they contain credentials
- SUPABASE_SERVICE_KEY has full database access — keep it private
- SUPABASE_ANON_KEY is restricted by Row Level Security policies (read-only)

MIT — see LICENSE for details.
Issues and pull requests are welcome at github.com/southleft/company-docs-mcp.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/snapshot"
curl -s "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/contract"
curl -s "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if your workflow requires deterministic execution: contract metadata is missing or unavailable.
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_MCP",
"generatedAt": "2026-04-17T00:10:50.755Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "cli",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:MCP|unknown|profile capability:cli|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Southleft",
"href": "https://github.com/southleft/company-docs-mcp",
"sourceUrl": "https://github.com/southleft/company-docs-mcp",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T02:58:56.590Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP",
"href": "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T02:58:56.590Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "23 GitHub stars",
"href": "https://github.com/southleft/company-docs-mcp",
"sourceUrl": "https://github.com/southleft/company-docs-mcp",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T02:58:56.590Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-southleft-company-docs-mcp/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]