Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Rust-native multi-agent orchestration inspired by CrewAI. Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
crewai-rs is best for CrewAI-style multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Deterministic execution cannot be assessed: contract metadata is missing or unavailable.
Evidence Sources Checked
Editorial content, GitHub repos, runtime metrics, public facts pack
Rust-native multi-agent orchestration inspired by CrewAI.
Public facts
4
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Longweihan
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Setup snapshot
Setup complexity is LOW. This package appears designed for quick installation with minimal external side effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Longweihan
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
rust
toml
[dependencies]
crewai-rs = "0.1.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }

rust
use std::sync::Arc;
use crewai_rs::prelude::*;
#[tokio::main]
async fn main() -> crewai_rs::Result<()> {
let search = Arc::new(FnTool::new(
"search",
"Find launch signals from the web index.",
|input| async move {
Ok(ToolOutput::text(format!(
"Top hit for `{}`: Rust developers want a typed CrewAI alternative.",
input.value
)))
},
));
let researcher_model = Arc::new(MockChatModel::from_strings(
"mock-researcher",
[
r#"<tool_call>{"tool":"search","input":"CrewAI Rust launch opportunities"}</tool_call>"#,
r#"<final_answer>- Demand is real
- Typed flows are a differentiator
- Early adopters care about OpenAI-compatible backends</final_answer>"#,
],
));
let writer_model = Arc::new(MockChatModel::from_strings(
"mock-writer",
[r#"<final_answer># Launch Brief
Ship `crewai-rs` as the Rust-native way to build crews, typed flows, and durable AI orchestration.</final_answer>"#],
));
let researcher = Agent::builder("researcher")
.role("Launch Researcher")
.goal("Find the sharpest positioning for a Rust AI crew framework.")
.backstory("You turn weak launch ideas into high-signal product narratives.")
.model_ref(researcher_model)
.tool_ref(search)
.build()?;
let writer = Agent::builder("writer")
.role("Technical Writer")
.goal("Turn research into a polished launch-ready brief.")
.model_ref(writer_model)
.build()?;
let research = Task::builder("research-market")
.description("Research the market gap for a Rust-native CrewAI-style framework.")
.expected_output("A concise markdown bullet list with launch hooks.")
.agent("researcher")
.build()?;
let brief = Task::builder("write-brief")
.description("Write a crisp launch brief for GitHub and X.")
.expected_output("A markdown brief with one clear positioning sentence.")
.agent("writer")
.depends_on(["research-market"])
.build()?;
let crew = Crew::builder("launch-crew")
.agent(researcher)
.agent(writer)
.task(research)
.task(brief)
.build()?;
let result = crew
.kickoff(
KickoffInput::new("Prepare the strongest possible public launch story.")
.with_context("target_audience", "Rust builders shipping AI products"),
)
.await?;
println!("{}", result.final_output);
Ok(())
}

bash
cargo run --example mock_launch
yaml
name: launch-crew
process: hierarchical
manager:
id: manager
role: Product Strategist
goal: Keep the crew focused and commercially sharp.
model: planner
agents:
- id: researcher
role: Rust OSS Researcher
goal: Find demand signals and differentiators.
model: planner
tools: [search]
- id: writer
role: Launch Writer
goal: Turn findings into launch copy.
model: writer
tasks:
- id: research-market
description: Research launch opportunities for crewai-rs.
expected_output: Bullet findings with proof points.
agent: researcher
- id: write-brief
description: Turn findings into a launch brief.
expected_output: Markdown brief.
agent: writer
depends_on: [research-market]

bash
cargo run --release --example runtime_bench
rust
use crewai_rs::OpenAIChatModel;
let model = OpenAIChatModel::builder("gpt-4.1-mini", std::env::var("OPENAI_API_KEY")?)
.temperature(0.2)
.build()?;

Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB REPOS
Editorial quality
ready
Rust-native multi-agent orchestration inspired by CrewAI.
crewai-rs gives you a practical MVP stack for building agent crews in Rust:
- Agent, Task, and Crew primitives
- Flow state machines for deterministic workflow steps
- MockChatModel for deterministic tests
- OpenAIChatModel for real deployments and OpenAI-compatible endpoints

The current AI orchestration ecosystem is still heavily Python-first. That is fine for experiments, but not ideal for:
crewai-rs is built to be idiomatic Rust first, not a line-by-line port.
[dependencies]
crewai-rs = "0.1.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
use std::sync::Arc;
use crewai_rs::prelude::*;
#[tokio::main]
async fn main() -> crewai_rs::Result<()> {
let search = Arc::new(FnTool::new(
"search",
"Find launch signals from the web index.",
|input| async move {
Ok(ToolOutput::text(format!(
"Top hit for `{}`: Rust developers want a typed CrewAI alternative.",
input.value
)))
},
));
let researcher_model = Arc::new(MockChatModel::from_strings(
"mock-researcher",
[
r#"<tool_call>{"tool":"search","input":"CrewAI Rust launch opportunities"}</tool_call>"#,
r#"<final_answer>- Demand is real
- Typed flows are a differentiator
- Early adopters care about OpenAI-compatible backends</final_answer>"#,
],
));
let writer_model = Arc::new(MockChatModel::from_strings(
"mock-writer",
[r#"<final_answer># Launch Brief
Ship `crewai-rs` as the Rust-native way to build crews, typed flows, and durable AI orchestration.</final_answer>"#],
));
let researcher = Agent::builder("researcher")
.role("Launch Researcher")
.goal("Find the sharpest positioning for a Rust AI crew framework.")
.backstory("You turn weak launch ideas into high-signal product narratives.")
.model_ref(researcher_model)
.tool_ref(search)
.build()?;
let writer = Agent::builder("writer")
.role("Technical Writer")
.goal("Turn research into a polished launch-ready brief.")
.model_ref(writer_model)
.build()?;
let research = Task::builder("research-market")
.description("Research the market gap for a Rust-native CrewAI-style framework.")
.expected_output("A concise markdown bullet list with launch hooks.")
.agent("researcher")
.build()?;
let brief = Task::builder("write-brief")
.description("Write a crisp launch brief for GitHub and X.")
.expected_output("A markdown brief with one clear positioning sentence.")
.agent("writer")
.depends_on(["research-market"])
.build()?;
let crew = Crew::builder("launch-crew")
.agent(researcher)
.agent(writer)
.task(research)
.task(brief)
.build()?;
let result = crew
.kickoff(
KickoffInput::new("Prepare the strongest possible public launch story.")
.with_context("target_audience", "Rust builders shipping AI products"),
)
.await?;
println!("{}", result.final_output);
Ok(())
}
Run the example in this repo:
cargo run --example mock_launch
crewai-rs lets you define a crew in YAML, then attach live models and tools from a runtime registry.
name: launch-crew
process: hierarchical
manager:
id: manager
role: Product Strategist
goal: Keep the crew focused and commercially sharp.
model: planner
agents:
- id: researcher
role: Rust OSS Researcher
goal: Find demand signals and differentiators.
model: planner
tools: [search]
- id: writer
role: Launch Writer
goal: Turn findings into launch copy.
model: writer
tasks:
- id: research-market
description: Research launch opportunities for crewai-rs.
expected_output: Bullet findings with proof points.
agent: researcher
- id: write-brief
description: Turn findings into a launch brief.
expected_output: Markdown brief.
agent: writer
depends_on: [research-market]
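In the blueprint above, write-brief declares `depends_on: [research-market]`. As a rough illustration of how a runtime could turn those edges into an execution order, here is a plain std-Rust topological scheduling sketch; it is not the crewai-rs internals, and the function name is hypothetical.

```rust
use std::collections::{HashMap, HashSet};

// Schedule tasks so that every task runs after all of its dependencies.
// Assumes the blueprint is a DAG; a dependency cycle would loop forever here.
fn execution_order<'a>(deps: &HashMap<&'a str, Vec<&'a str>>) -> Vec<&'a str> {
    let mut order = Vec::new();
    let mut done: HashSet<&str> = HashSet::new();
    while order.len() < deps.len() {
        for (&task, needs) in deps.iter() {
            // A task is ready once all of its dependencies are done.
            if !done.contains(task) && needs.iter().all(|d| done.contains(*d)) {
                done.insert(task);
                order.push(task);
            }
        }
    }
    order
}

fn main() {
    let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
    deps.insert("research-market", vec![]);
    deps.insert("write-brief", vec!["research-market"]);
    // research-market always precedes write-brief.
    println!("{:?}", execution_order(&deps));
}
```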
Agent crews are not the whole story. Sometimes you need deterministic workflow around them.
Flow<State> gives you typed orchestration for stateful steps such as:
See examples/flow_pipeline.rs for a full example.
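As a std-only illustration of the typed-state idea (the actual Flow and FlowStep API lives in examples/flow_pipeline.rs; the state and step names in this sketch are hypothetical):

```rust
// A typed state machine in plain Rust: the compiler enforces that every
// state is handled and that each transition carries the right data.
#[derive(Debug, PartialEq)]
enum LaunchState {
    Drafting { notes: Vec<String> },
    Reviewed { brief: String },
    Done { published: String },
}

fn step(state: LaunchState) -> LaunchState {
    // Each transition is explicit, exhaustive, and checked at compile time.
    match state {
        LaunchState::Drafting { notes } => LaunchState::Reviewed { brief: notes.join("; ") },
        LaunchState::Reviewed { brief } => LaunchState::Done { published: format!("# {brief}") },
        done @ LaunchState::Done { .. } => done,
    }
}

fn main() {
    let s = LaunchState::Drafting {
        notes: vec!["demand is real".into(), "typed flows".into()],
    };
    let finished = step(step(s));
    println!("{finished:?}");
}
```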
These numbers are from a local, reproducible microbenchmark that measures orchestration overhead only. They do not include real network latency or provider latency.
Benchmark environment:
rustc: 1.94.0 (4a4ef493e 2026-03-02)

| Scenario | Observed mean range | Observed median range | Observed p95 range | What it measures |
| --- | ---: | ---: | ---: | --- |
| flow_run | 368-461 ns | 300-400 ns | 500 ns | typed deterministic flow overhead for a two-step state machine |
| crew_kickoff_sequential | 5.7-6.4 us | 5.4-6.0 us | 6.5-6.9 us | two-task crew kickoff with deterministic models and one tool call |
| crew_kickoff_hierarchical | 7.1-8.0 us | 6.9-7.7 us | 7.4-8.4 us | same as above, plus a manager brief on each task |
| blueprint_parse_and_build | 20.1-22.4 us | 18.8-21.1 us | 24.1-26.7 us | parse YAML and build a runtime-bound crew from a registry |
One practical takeaway from this snapshot: adding the manager layer increased kickoff overhead by about 23.8% to 25.4% versus the sequential path in the same benchmark.
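That percentage can be sanity-checked from the endpoints of the mean ranges in the table; the stated 23.8% to 25.4% comes from the raw snapshot, and the endpoint ratios below land in the same neighborhood. A tiny std-Rust check, assuming nothing beyond the printed numbers:

```rust
// Relative overhead of the hierarchical path versus the sequential path.
fn overhead_pct(sequential_us: f64, hierarchical_us: f64) -> f64 {
    (hierarchical_us - sequential_us) / sequential_us * 100.0
}

fn main() {
    // Endpoints of the observed mean ranges: 5.7-6.4 us vs 7.1-8.0 us.
    for (seq, hier) in [(5.7_f64, 7.1_f64), (6.4, 8.0)] {
        println!("{seq} us -> {hier} us: +{:.1}%", overhead_pct(seq, hier));
    }
}
```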
Reproduce locally:
cargo run --release --example runtime_bench
Benchmark harness: examples/runtime_bench.rs
Raw snapshot: benchmarks/latest.md
The performance story here is not "Rust magically makes your LLM faster." Network and provider latency still dominate real agent runs. The more accurate claim is:
crewai-rs keeps orchestration overhead very small.

The table below is intentionally not pure marketing. The CrewAI side is based on official CrewAI docs as of 2026-03-26, especially their Flows and Observability docs.
| Dimension | crewai-rs | CrewAI |
| --- | --- | --- |
| Primary runtime | Rust crate with async traits, builders, and typed state | Python-first framework with crews, flows, and CLI tooling |
| Flow model | Flow<State> plus FlowStep traits and explicit transitions | decorator-driven flows with @start, @listen, and documented structured or unstructured state |
| Packaging and deployment | ships as a crate and can be embedded into a single Rust binary | Python project and CLI workflow; docs show crewai run, crewai flow kickoff, and uv-based setup |
| Orchestration overhead | current local benchmark shows single-digit microsecond kickoff overhead with deterministic in-memory models | not measured in this repo; Python runtime overhead is typically higher, but exact numbers depend on workload and setup |
| Compile-time guarantees | stronger type and trait checks; more errors move left into compile time | more dynamic and faster to prototype, with more validation happening at runtime |
| Observability | lightweight transcripts and Mermaid graphs today | much stronger today; official docs include built-in tracing, AMP, and multiple observability integrations |
| Persistence and streaming | not implemented yet in this MVP | official Flows docs include persistence via @persist and streaming flow execution |
| Ecosystem maturity | early alpha, small surface area, easier to audit end-to-end | far more mature docs, examples, integrations, and operational surface |
| Best fit | teams that want low-overhead orchestration, typed state, and Rust-native deployment | teams that want the broader existing ecosystem and more batteries included today |
Where crewai-rs is stronger today:
Where CrewAI is stronger today:
CrewAI source references:
Use the built-in adapter for real runs:
use crewai_rs::OpenAIChatModel;
let model = OpenAIChatModel::builder("gpt-4.1-mini", std::env::var("OPENAI_API_KEY")?)
.temperature(0.2)
.build()?;
The adapter talks to /v1/chat/completions, which also makes it usable with many OpenAI-compatible providers.
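For orientation, the request such an endpoint expects looks roughly like this. The `OPENAI_BASE_URL` variable and the literal body are illustrative assumptions for this sketch, not the adapter's internals:

```rust
// Shape of an OpenAI-compatible chat completions request: a POST to
// {base_url}/v1/chat/completions with a JSON body carrying model,
// messages, and sampling parameters such as temperature.
fn chat_completions_request(base_url: &str) -> (String, String) {
    let endpoint = format!("{}/v1/chat/completions", base_url.trim_end_matches('/'));
    // Fields mirror the builder above: model name and temperature.
    let body = r#"{"model":"gpt-4.1-mini","messages":[{"role":"user","content":"Say hello"}],"temperature":0.2}"#
        .to_string();
    (endpoint, body)
}

fn main() {
    let base = std::env::var("OPENAI_BASE_URL")
        .unwrap_or_else(|_| "https://api.openai.com".to_string());
    let (endpoint, body) = chat_completions_request(&base);
    println!("POST {endpoint}");
    println!("{body}");
}
```

Pointing `OPENAI_BASE_URL` at another provider is what makes the adapter usable beyond OpenAI itself.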
cargo fmt --all
cargo clippy --all-targets --all-features -- -D warnings
cargo test --all-features --all-targets
MIT
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/trust"
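The invocation guide further down this page documents a retry policy: up to 3 attempts with 500/1500/3500 ms backoff, retrying only on HTTP 429, HTTP 503, and network timeouts. A minimal std-only Rust sketch of that loop; the error enum and the simulated endpoint are hypothetical:

```rust
use std::{thread, time::Duration};

#[derive(Debug, PartialEq)]
enum CallError {
    Http429,
    Http503,
    NetworkTimeout,
    Other,
}

// Only the conditions listed in the retry policy are worth retrying.
fn retryable(e: &CallError) -> bool {
    matches!(e, CallError::Http429 | CallError::Http503 | CallError::NetworkTimeout)
}

fn call_with_retry(mut call: impl FnMut() -> Result<String, CallError>) -> Result<String, CallError> {
    let backoff_ms = [500u64, 1500, 3500];
    let max_attempts = 3;
    let mut attempt = 0;
    loop {
        match call() {
            Ok(v) => return Ok(v),
            // Retry with backoff only while attempts remain and the error is retryable.
            Err(e) if retryable(&e) && attempt + 1 < max_attempts => {
                thread::sleep(Duration::from_millis(backoff_ms[attempt]));
                attempt += 1;
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Simulated endpoint: rate-limited once, then healthy.
    let mut calls = 0;
    let result = call_with_retry(|| {
        calls += 1;
        if calls == 1 { Err(CallError::Http429) } else { Ok("ok".to_string()) }
    });
    println!("{result:?} after {calls} calls");
}
```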
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_REPOS",
"generatedAt": "2026-04-17T00:02:08.372Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}

Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}

Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Longweihan",
"href": "https://github.com/LongWeihan/crewai-rs",
"sourceUrl": "https://github.com/LongWeihan/crewai-rs",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:25.773Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T06:04:25.773Z",
"isPublic": true
},
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-longweihan-crewai-rs/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]

Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]