Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
🤖 Multi-agent code review framework using CrewAI. 7 specialized agents analyze PRs with evidence-based findings, auto-patches, and comprehensive evaluation metrics.

Multi-Agent Code Review Framework: a project implementing a multi-agent system for automated code review using CrewAI.

Features:
- 🤖 **Multi-Agent System**: 7 specialized agents (context, security, style, logic, performance, docs, tests)
- 🔍 **Evidence-Based**: All findings require tool output or code references
- 📊 **Evaluation Framework**: Statistical analysis and LaTeX export
- ⚡ **Tool Integration**:

Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/16/2026.
Freshness
Last checked 4/16/2026
Best For
code-review-agentic-framework is best for CrewAI-based, multi-agent code review workflows where OpenClaw compatibility matters.
Not Ideal For
Not ideal where deterministic execution is required: the capability contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Apr 16, 2026
Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/16/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 16, 2026
Vendor
Redrussianarmy
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/16/2026.
Setup snapshot
git clone https://github.com/redrussianarmy/code-review-agentic-framework.git

Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Redrussianarmy
Protocol compatibility
OpenClaw
Adoption signal
5 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
python
bash
# Install dependencies
poetry install

# Configure environment variables
cp .env.example .env
# Edit .env and add your API keys:
# - LLM_PROVIDER (openai or anthropic)
# - OPENAI_API_KEY (required if LLM_PROVIDER=openai)
# - ANTHROPIC_API_KEY (required if LLM_PROVIDER=anthropic)
# - GITHUB_TOKEN (required for dataset collection)

# Run a review (local path)
poetry run python -m app.cli review \
  --pr-id "123" \
  --title "Your PR Title" \
  --language python \
  /path/to/repo

# Or use GitHub URL directly (title/description auto-fetched)
poetry run python -m app.cli review \
  --pr-id "14468" \
  --language python \
  "https://github.com/fastapi/fastapi"

# Supported languages: python, javascript, typescript, java, go, rust, cpp, csharp, ruby, php
text
┌─────────────┐
│ CLI │ poetry run python -m app.cli review ...
└──────┬──────┘
│
▼
┌─────────────┐
│ ReviewFlow │ Orchestrates the entire process
└──────┬──────┘
│
├─► 1️⃣ Context Builder (Git diff + Tools)
│
├─► 2️⃣ Analysis Agents (Parallel)
│ ├─ ChangeContextAnalyst (LLM)
│ ├─ SecurityReviewer (Tool)
│ ├─ StyleFormatReviewer (Tool)
│ ├─ LogicBugReviewer (LLM)
│ ├─ PerformanceReviewer (LLM)
│ ├─ DocumentationReviewer (LLM)
│ └─ TestCoverageReviewer (Hybrid)
│
├─► 3️⃣ RevisionProposer (Patch generation)
│
├─► 4️⃣ Supervisor (Consolidation)
│
└─► 5️⃣ PRReviewResult (Final output)

text
.
├── agents/                  # Agent implementations
│   ├── base.py              # Base agent class
│   ├── change_context_analyst.py
│   ├── security_reviewer.py
│   ├── style_reviewer.py
│   ├── logic_reviewer.py
│   ├── performance_reviewer.py
│   ├── documentation_reviewer.py
│   ├── test_reviewer.py
│   ├── revision_proposer.py
│   └── supervisor.py
├── domain/                  # Domain models (Pydantic)
│   ├── models.py            # PRMetadata, Finding, Language enum, LLMProvider enum
│   └── __init__.py
├── tools/                   # Analysis tool integrations
│   ├── base.py              # Tool base class
│   ├── git_diff.py
│   ├── linters.py           # Ruff, ESLint
│   ├── security.py          # Semgrep, Bandit
│   └── coverage.py
├── flows/                   # Orchestration
│   ├── context_builder.py
│   └── review_flow.py
├── eval/                    # Evaluation framework
│   ├── metrics/
│   └── dataset/
├── app/                     # Application layer
│   ├── cli.py               # CLI interface
│   ├── config.py            # Settings
│   └── logging.py           # Structured logging
├── prompts/                 # Versioned prompts
│   ├── cca/
│   ├── security/
│   ├── style/
│   └── ...
└── reviews/                 # Review results storage
env
# LLM Provider Selection
LLM_PROVIDER=anthropic  # or "openai"

# OpenAI Configuration (if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-proj-...
OPENAI_MODEL=gpt-4-turbo-preview
OPENAI_TEMPERATURE=0.0
OPENAI_SEED=42

# Anthropic Configuration (if LLM_PROVIDER=anthropic)
# Recommended: claude-3-5-haiku-20241022 (best price-performance)
# Alternatives: claude-3-5-sonnet-20241022 (balanced), claude-3-opus-20240229 (highest quality)
ANTHROPIC_API_KEY=sk-ant-api03-...
ANTHROPIC_MODEL=claude-3-5-haiku-20241022

# GitHub (required for dataset collection and PR fetching)
GITHUB_TOKEN=ghp_...

# Review Configuration
MAX_NITS_PER_REVIEW=5
MAX_PATCH_LINES=10
ENABLE_PARALLEL_AGENTS=true

# Evaluation
EVAL_DATASET_PATH=./eval/dataset
EVAL_RESULTS_PATH=./eval/results
SEED_FOR_EXPERIMENTS=42
bash
# Configure GitHub token in .env
# GITHUB_TOKEN=ghp_your_token_here

# Collect balanced dataset
poetry run python eval/dataset/collect_dataset.py collect \
  --repos 5 \
  --prs-per-repo 5 \
  --balanced
bash
# Evaluate using stored reviews (recommended)
poetry run python -m app.cli evaluate \
  --system multi_agent \
  --use-stored

# Evaluate specific PRs
poetry run python -m app.cli evaluate \
  --system multi_agent \
  --pr-ids "14468,2779" \
  --use-stored

# Re-run reviews and evaluate
poetry run python -m app.cli evaluate \
  --system single_agent \
  --rerun \
  --repo-path /path/to/repo

# Compare systems
poetry run python -m app.cli compare \
  ./eval/results/evaluation_single_agent.json \
  ./eval/results/evaluation_multi_agent.json \
  --latex results.tex
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
🤖 Multi-agent code review framework using CrewAI. 7 specialized agents analyze PRs with evidence-based findings, auto-patches, and comprehensive evaluation metrics.

Multi-Agent Code Review Framework

A project implementing a multi-agent system for automated code review using CrewAI.

Quick Start

Features
- 🤖 **Multi-Agent System**: 7 specialized agents (context, security, style, logic, performance, docs, tests)
- 🔍 **Evidence-Based**: All findings require tool output or code references
- 📊 **Evaluation Framework**: Statistical analysis and LaTeX export
- ⚡ **Tool Integration**:
# Install dependencies
poetry install
# Configure environment variables
cp .env.example .env
# Edit .env and add your API keys:
# - LLM_PROVIDER (openai or anthropic)
# - OPENAI_API_KEY (required if LLM_PROVIDER=openai)
# - ANTHROPIC_API_KEY (required if LLM_PROVIDER=anthropic)
# - GITHUB_TOKEN (required for dataset collection)
# Run a review (local path)
poetry run python -m app.cli review \
--pr-id "123" \
--title "Your PR Title" \
--language python \
/path/to/repo
# Or use GitHub URL directly (title/description auto-fetched)
poetry run python -m app.cli review \
--pr-id "14468" \
--language python \
"https://github.com/fastapi/fastapi"
# Supported languages: python, javascript, typescript, java, go, rust, cpp, csharp, ruby, php
┌─────────────┐
│ CLI │ poetry run python -m app.cli review ...
└──────┬──────┘
│
▼
┌─────────────┐
│ ReviewFlow │ Orchestrates the entire process
└──────┬──────┘
│
├─► 1️⃣ Context Builder (Git diff + Tools)
│
├─► 2️⃣ Analysis Agents (Parallel)
│ ├─ ChangeContextAnalyst (LLM)
│ ├─ SecurityReviewer (Tool)
│ ├─ StyleFormatReviewer (Tool)
│ ├─ LogicBugReviewer (LLM)
│ ├─ PerformanceReviewer (LLM)
│ ├─ DocumentationReviewer (LLM)
│ └─ TestCoverageReviewer (Hybrid)
│
├─► 3️⃣ RevisionProposer (Patch generation)
│
├─► 4️⃣ Supervisor (Consolidation)
│
└─► 5️⃣ PRReviewResult (Final output)
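The fan-out/consolidate shape of the diagram above can be sketched with asyncio. This is an illustrative stand-in, not the project's actual CrewAI flow; agent names mirror the diagram, and the function bodies are placeholders.

```python
# Minimal sketch of the review flow: build context, fan out to seven
# analysis agents in parallel, then consolidate their findings.
import asyncio

async def run_agent(name: str, context: dict) -> dict:
    # A real agent would call an LLM or a tool (Semgrep, Ruff, ...) here.
    await asyncio.sleep(0)
    return {"agent": name, "findings": []}

async def review_flow(context: dict) -> dict:
    agents = ["context", "security", "style", "logic",
              "performance", "docs", "tests"]
    # 2) run the seven analysis agents in parallel; gather preserves order
    results = await asyncio.gather(*(run_agent(a, context) for a in agents))
    # 3-5) a real flow would propose patches and let a supervisor consolidate
    return {"findings": [f for r in results for f in r["findings"]],
            "agents_run": [r["agent"] for r in results]}

result = asyncio.run(review_flow({"diff": "..."}))
print(result["agents_run"])
```

The listing's `ENABLE_PARALLEL_AGENTS=true` setting suggests the real flow makes this fan-out step optional.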
The flow runs in five stages. The language is chosen with the --language parameter. The Context Builder assembles a PRContext with all the gathered information, then 7 specialized agents analyze the PR in parallel. The RevisionProposer generates patches for findings that need fixes, and the Supervisor consolidates everything into the final PRReviewResult.
.
├── agents/ # Agent implementations
│ ├── base.py # Base agent class
│ ├── change_context_analyst.py
│ ├── security_reviewer.py
│ ├── style_reviewer.py
│ ├── logic_reviewer.py
│ ├── performance_reviewer.py
│ ├── documentation_reviewer.py
│ ├── test_reviewer.py
│ ├── revision_proposer.py
│ └── supervisor.py
├── domain/ # Domain models (Pydantic)
│ ├── models.py # PRMetadata, Finding, Language enum, LLMProvider enum
│ └── __init__.py
├── tools/ # Analysis tool integrations
│ ├── base.py # Tool base class
│ ├── git_diff.py
│ ├── linters.py # Ruff, ESLint
│ ├── security.py # Semgrep, Bandit
│ └── coverage.py
├── flows/ # Orchestration
│ ├── context_builder.py
│ └── review_flow.py
├── eval/ # Evaluation framework
│ ├── metrics/
│ └── dataset/
├── app/ # Application layer
│ ├── cli.py # CLI interface
│ ├── config.py # Settings
│ └── logging.py # Structured logging
├── prompts/ # Versioned prompts
│ ├── cca/
│ ├── security/
│ ├── style/
│ └── ...
└── reviews/ # Review results storage
Key settings in .env:
# LLM Provider Selection
LLM_PROVIDER=anthropic # or "openai"
# OpenAI Configuration (if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-proj-...
OPENAI_MODEL=gpt-4-turbo-preview
OPENAI_TEMPERATURE=0.0
OPENAI_SEED=42
# Anthropic Configuration (if LLM_PROVIDER=anthropic)
# Recommended: claude-3-5-haiku-20241022 (best price-performance)
# Alternatives: claude-3-5-sonnet-20241022 (balanced), claude-3-opus-20240229 (highest quality)
ANTHROPIC_API_KEY=sk-ant-api03-...
ANTHROPIC_MODEL=claude-3-5-haiku-20241022
# GitHub (required for dataset collection and PR fetching)
GITHUB_TOKEN=ghp_...
# Review Configuration
MAX_NITS_PER_REVIEW=5
MAX_PATCH_LINES=10
ENABLE_PARALLEL_AGENTS=true
# Evaluation
EVAL_DATASET_PATH=./eval/dataset
EVAL_RESULTS_PATH=./eval/results
SEED_FOR_EXPERIMENTS=42
The framework supports both OpenAI and Anthropic LLM providers:
Set LLM_PROVIDER=anthropic or LLM_PROVIDER=openai in your .env file.
See .env.example for all available configuration options.
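As a hedged illustration of how this provider switch might resolve to a model name (the project's actual logic lives in app/config.py and may differ), a minimal selector could look like:

```python
# Illustrative sketch of LLM_PROVIDER-style selection; defaults mirror
# the .env example in this listing, not a confirmed implementation.
def select_model(env: dict) -> str:
    provider = env.get("LLM_PROVIDER", "anthropic")
    if provider == "openai":
        return env.get("OPENAI_MODEL", "gpt-4-turbo-preview")
    if provider == "anthropic":
        return env.get("ANTHROPIC_MODEL", "claude-3-5-haiku-20241022")
    raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")

print(select_model({"LLM_PROVIDER": "anthropic"}))  # → claude-3-5-haiku-20241022
```

Failing fast on an unknown provider keeps a misconfigured .env from silently falling back to the wrong backend.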
Collect real PRs from GitHub for evaluation:
# Configure GitHub token in .env
# GITHUB_TOKEN=ghp_your_token_here
# Collect balanced dataset
poetry run python eval/dataset/collect_dataset.py collect \
--repos 5 \
--prs-per-repo 5 \
--balanced
See eval/dataset/README.md for detailed instructions.
Run evaluation on collected dataset:
# Evaluate using stored reviews (recommended)
poetry run python -m app.cli evaluate \
--system multi_agent \
--use-stored
# Evaluate specific PRs
poetry run python -m app.cli evaluate \
--system multi_agent \
--pr-ids "14468,2779" \
--use-stored
# Re-run reviews and evaluate
poetry run python -m app.cli evaluate \
--system single_agent \
--rerun \
--repo-path /path/to/repo
# Compare systems
poetry run python -m app.cli compare \
./eval/results/evaluation_single_agent.json \
./eval/results/evaluation_multi_agent.json \
--latex results.tex
Evaluate whether multi-agent code review with tool integration achieves its stated goals compared to single-agent LLM baselines.
# Run tests
poetry run pytest
# Lint
poetry run ruff check .
# Format
poetry run ruff format .
See CONTRIBUTING.md for contribution guidelines.
MIT
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust"
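The same endpoints can be called from Python with the retry policy published for this listing (3 attempts, 500/1500/3500 ms backoff on 429, 503, and timeouts). A standard-library sketch; the response shape is not guaranteed:

```python
# Fetch a listing endpoint with simple retry/backoff. The opener parameter
# is injectable so the retry logic can be tested without network access.
import json
import time
import urllib.request

BACKOFF_MS = [500, 1500, 3500]

def fetch_with_retry(url: str, opener=urllib.request.urlopen) -> dict:
    last_err = None
    for attempt, delay_ms in enumerate(BACKOFF_MS):
        try:
            with opener(url, timeout=10) as resp:
                return json.load(resp)
        except Exception as err:  # HTTP 429/503 and timeouts land here
            last_err = err
            if attempt < len(BACKOFF_MS) - 1:
                time.sleep(delay_ms / 1000)
    raise last_err

# fetch_with_retry("https://xpersona.co/api/v1/agents/"
#                  "crewai-hakanbogan-code-review-agentic-framework/snapshot")
```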
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T03:20:49.111Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}

Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}

Facts JSON
[
{
"factKey": "vendor",
"label": "Vendor",
"value": "Redrussianarmy",
"category": "vendor",
"href": "https://github.com/redrussianarmy/code-review-agentic-framework",
"sourceUrl": "https://github.com/redrussianarmy/code-review-agentic-framework",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-16T06:46:45.434Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "protocols",
"label": "Protocol compatibility",
"value": "OpenClaw",
"category": "compatibility",
"href": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-16T06:46:45.434Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "traction",
"label": "Adoption signal",
"value": "5 GitHub stars",
"category": "adoption",
"href": "https://github.com/redrussianarmy/code-review-agentic-framework",
"sourceUrl": "https://github.com/redrussianarmy/code-review-agentic-framework",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-16T06:46:45.434Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "docs_crawl",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"category": "integration",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "handshake_status",
"label": "Handshake status",
"value": "UNKNOWN",
"category": "security",
"href": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true,
"metadata": {}
}
]

Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true,
"metadata": {}
}
]

Sponsored
Ads related to code-review-agentic-framework and adjacent AI workflows.