Crawler Summary

code-review-agentic-framework answer-first brief

🤖 Multi-agent code review framework using CrewAI. 7 specialized agents (context, security, style, logic, performance, docs, tests) analyze PRs with evidence-based findings that require tool output or code references, auto-patches, and an evaluation framework with statistical analysis and LaTeX export, plus tool integration for Git, Ruff, ESLint, Semgrep, Bandit, and Coverage.py. Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/16/2026.

Freshness

Last checked 4/16/2026

Best For

code-review-agentic-framework is best for CrewAI-based, multi-agent code review workflows where OpenClaw compatibility matters.

Not Ideal For

Workflows that require deterministic execution: contract metadata is missing or unavailable.

Evidence Sources Checked

editorial-content, GITHUB OPENCLAW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 66/100

code-review-agentic-framework

🤖 Multi-agent code review framework using CrewAI. 7 specialized agents (context, security, style, logic, performance, docs, tests) analyze PRs with evidence-based findings, auto-patches, and comprehensive evaluation metrics, with tool integration for Git, Ruff, ESLint, Semgrep, Bandit, and Coverage.py.

OpenClaw · self-declared

Public facts

5

Change events

1

Artifacts

0

Freshness

Apr 16, 2026

Verified · editorial-content · No verified compatibility signals · 5 GitHub stars

Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/16/2026.

5 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 16, 2026

Vendor

Redrussianarmy

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 5 GitHub stars reported by the source. Last updated 4/16/2026.

Setup snapshot

git clone https://github.com/redrussianarmy/code-review-agentic-framework.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data (see the sketch below).
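
A minimal sketch of that egress trace, assuming the review runs in-process in Python; the socket monkeypatch and names here are illustrative, not part of the framework:

import socket

# Hypothetical pre-flight check: log every outbound connection the agent
# attempts while it processes a mock payload in a sandbox.
_original_connect = socket.socket.connect

def _logging_connect(self, address):
    print(f"[egress] connection attempt to {address}")
    return _original_connect(self, address)

socket.socket.connect = _logging_connect

# ... run the review against the mock payload here, then audit the log
# before granting access to real customer data.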

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Redrussianarmy

profile · medium
Observed Apr 16, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Apr 16, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

5 GitHub stars

profile · medium
Observed Apr 16, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLAW

Extracted files

0

Examples

6

Snippets

0

Languages

python

Executable Examples

bash

# Install dependencies
poetry install

# Configure environment variables
cp .env.example .env
# Edit .env and add your API keys:
#   - LLM_PROVIDER (openai or anthropic)
#   - OPENAI_API_KEY (required if LLM_PROVIDER=openai)
#   - ANTHROPIC_API_KEY (required if LLM_PROVIDER=anthropic)
#   - GITHUB_TOKEN (required for dataset collection)

# Run a review (local path)
poetry run python -m app.cli review \
  --pr-id "123" \
  --title "Your PR Title" \
  --language python \
  /path/to/repo

# Or use GitHub URL directly (title/description auto-fetched)
poetry run python -m app.cli review \
  --pr-id "14468" \
  --language python \
  "https://github.com/fastapi/fastapi"

# Supported languages: python, javascript, typescript, java, go, rust, cpp, csharp, ruby, php

text

┌─────────────┐
│   CLI       │  poetry run python -m app.cli review ...
└──────┬──────┘
       │
       ▼
┌─────────────┐
│ ReviewFlow  │  Orchestrates the entire process
└──────┬──────┘
       │
       ├─► 1️⃣ Context Builder (Git diff + Tools)
       │
       ├─► 2️⃣ Analysis Agents (Parallel)
       │    ├─ ChangeContextAnalyst (LLM)
       │    ├─ SecurityReviewer (Tool)
       │    ├─ StyleFormatReviewer (Tool)
       │    ├─ LogicBugReviewer (LLM)
       │    ├─ PerformanceReviewer (LLM)
       │    ├─ DocumentationReviewer (LLM)
       │    └─ TestCoverageReviewer (Hybrid)
       │
       ├─► 3️⃣ RevisionProposer (Patch generation)
       │
       ├─► 4️⃣ Supervisor (Consolidation)
       │
       └─► 5️⃣ PRReviewResult (Final output)

text

.
├── agents/              # Agent implementations
│   ├── base.py         # Base agent class
│   ├── change_context_analyst.py
│   ├── security_reviewer.py
│   ├── style_reviewer.py
│   ├── logic_reviewer.py
│   ├── performance_reviewer.py
│   ├── documentation_reviewer.py
│   ├── test_reviewer.py
│   ├── revision_proposer.py
│   └── supervisor.py
├── domain/             # Domain models (Pydantic)
│   ├── models.py       # PRMetadata, Finding, Language enum, LLMProvider enum
│   └── __init__.py
├── tools/              # Analysis tool integrations
│   ├── base.py         # Tool base class
│   ├── git_diff.py
│   ├── linters.py      # Ruff, ESLint
│   ├── security.py     # Semgrep, Bandit
│   └── coverage.py
├── flows/              # Orchestration
│   ├── context_builder.py
│   └── review_flow.py
├── eval/               # Evaluation framework
│   ├── metrics/
│   └── dataset/
├── app/                # Application layer
│   ├── cli.py          # CLI interface
│   ├── config.py       # Settings
│   └── logging.py      # Structured logging
├── prompts/            # Versioned prompts
│   ├── cca/
│   ├── security/
│   ├── style/
│   └── ...
└── reviews/            # Review results storage

env

# LLM Provider Selection
LLM_PROVIDER=anthropic  # or "openai"

# OpenAI Configuration (if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-proj-...
OPENAI_MODEL=gpt-4-turbo-preview
OPENAI_TEMPERATURE=0.0
OPENAI_SEED=42

# Anthropic Configuration (if LLM_PROVIDER=anthropic)
# Recommended: claude-3-5-haiku-20241022 (best price-performance)
# Alternatives: claude-3-5-sonnet-20241022 (balanced), claude-3-opus-20240229 (highest quality)
ANTHROPIC_API_KEY=sk-ant-api03-...
ANTHROPIC_MODEL=claude-3-5-haiku-20241022

# GitHub (required for dataset collection and PR fetching)
GITHUB_TOKEN=ghp_...

# Review Configuration
MAX_NITS_PER_REVIEW=5
MAX_PATCH_LINES=10
ENABLE_PARALLEL_AGENTS=true

# Evaluation
EVAL_DATASET_PATH=./eval/dataset
EVAL_RESULTS_PATH=./eval/results
SEED_FOR_EXPERIMENTS=42

bash

# Configure GitHub token in .env
# GITHUB_TOKEN=ghp_your_token_here

# Collect balanced dataset
poetry run python eval/dataset/collect_dataset.py collect \
  --repos 5 \
  --prs-per-repo 5 \
  --balanced

bash

# Evaluate using stored reviews (recommended)
poetry run python -m app.cli evaluate \
  --system multi_agent \
  --use-stored

# Evaluate specific PRs
poetry run python -m app.cli evaluate \
  --system multi_agent \
  --pr-ids "14468,2779" \
  --use-stored

# Re-run reviews and evaluate
poetry run python -m app.cli evaluate \
  --system single_agent \
  --rerun \
  --repo-path /path/to/repo

# Compare systems
poetry run python -m app.cli compare \
  ./eval/results/evaluation_single_agent.json \
  ./eval/results/evaluation_multi_agent.json \
  --latex results.tex

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLAW

Docs source

GITHUB OPENCLAW

Editorial quality

ready

🤖 Multi-agent code review framework using CrewAI. 7 specialized agents (context, security, style, logic, performance, docs, tests) analyze PRs with evidence-based findings, auto-patches, and comprehensive evaluation metrics, with tool integration for Git, Ruff, ESLint, Semgrep, Bandit, and Coverage.py.

Full README

Multi-Agent Code Review Framework

A project implementing a multi-agent system for automated code review using CrewAI.

Quick Start

# Install dependencies
poetry install

# Configure environment variables
cp .env.example .env
# Edit .env and add your API keys:
#   - LLM_PROVIDER (openai or anthropic)
#   - OPENAI_API_KEY (required if LLM_PROVIDER=openai)
#   - ANTHROPIC_API_KEY (required if LLM_PROVIDER=anthropic)
#   - GITHUB_TOKEN (required for dataset collection)

# Run a review (local path)
poetry run python -m app.cli review \
  --pr-id "123" \
  --title "Your PR Title" \
  --language python \
  /path/to/repo

# Or use GitHub URL directly (title/description auto-fetched)
poetry run python -m app.cli review \
  --pr-id "14468" \
  --language python \
  "https://github.com/fastapi/fastapi"

# Supported languages: python, javascript, typescript, java, go, rust, cpp, csharp, ruby, php
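
The same CLI can be driven from a script; a minimal sketch, assuming Poetry is on PATH (the wrapper itself is illustrative, not part of the repo):

import subprocess

# Run a review for a local repo via the documented CLI and print the output.
result = subprocess.run(
    [
        "poetry", "run", "python", "-m", "app.cli", "review",
        "--pr-id", "123",
        "--title", "Your PR Title",
        "--language", "python",
        "/path/to/repo",
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)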

Features

  • 🤖 Multi-Agent System: 7 specialized agents (context, security, style, logic, performance, docs, tests)
  • 🔍 Evidence-Based: All findings require tool output or code references
  • 📊 Evaluation Framework: Statistical analysis and LaTeX export
  • ⚡ Tool Integration: Git, Ruff (Python), ESLint (JS/TS), Semgrep, Bandit, Coverage.py
  • 🎯 Actionable: Auto-patches for simple fixes, detailed guidance for complex issues
  • 💰 Cost Tracking: Real-time token usage and cost estimation for OpenAI and Anthropic
  • 🌐 Multi-Provider: Support for both OpenAI and Anthropic LLMs

System Architecture

┌─────────────┐
│   CLI       │  poetry run python -m app.cli review ...
└──────┬──────┘
       │
       ▼
┌─────────────┐
│ ReviewFlow  │  Orchestrates the entire process
└──────┬──────┘
       │
       ├─► 1️⃣ Context Builder (Git diff + Tools)
       │
       ├─► 2️⃣ Analysis Agents (Parallel)
       │    ├─ ChangeContextAnalyst (LLM)
       │    ├─ SecurityReviewer (Tool)
       │    ├─ StyleFormatReviewer (Tool)
       │    ├─ LogicBugReviewer (LLM)
       │    ├─ PerformanceReviewer (LLM)
       │    ├─ DocumentationReviewer (LLM)
       │    └─ TestCoverageReviewer (Hybrid)
       │
       ├─► 3️⃣ RevisionProposer (Patch generation)
       │
       ├─► 4️⃣ Supervisor (Consolidation)
       │
       └─► 5️⃣ PRReviewResult (Final output)

System Flow

Phase 1: Context Building

  • Extract git diff between PR branch and base branch
  • Run language-specific tools (automatically selected based on --language parameter):
    • Python: Ruff (linting), Bandit (security)
    • JavaScript/TypeScript: ESLint (linting)
    • All languages: Semgrep (security, language-agnostic)
  • Build PRContext with all information
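
A minimal sketch of the language-to-tool mapping described above; the dictionary and function are illustrative, not the repo's actual implementation:

# Hypothetical Phase 1 tool selection keyed on the --language parameter.
TOOLS_BY_LANGUAGE = {
    "python": ["ruff", "bandit", "semgrep"],
    "javascript": ["eslint", "semgrep"],
    "typescript": ["eslint", "semgrep"],
}

def select_tools(language: str) -> list[str]:
    # Semgrep is language-agnostic, so every language gets it by default.
    return TOOLS_BY_LANGUAGE.get(language, ["semgrep"])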

Phase 2: Analysis Agents

7 specialized agents analyze the PR in parallel:

  • ChangeContextAnalyst: Checks PR title/description consistency
  • SecurityReviewer: Finds security vulnerabilities
  • StyleFormatReviewer: Detects style/formatting issues
  • LogicBugReviewer: Identifies logical errors
  • PerformanceReviewer: Finds performance bottlenecks
  • DocumentationReviewer: Checks documentation quality
  • TestCoverageReviewer: Analyzes test coverage
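
A minimal sketch of this parallel fan-out, assuming each agent exposes a review(context) method (that name is an assumption, not the repo's API):

from concurrent.futures import ThreadPoolExecutor

def run_agents_in_parallel(agents, context):
    # Submit every reviewer at once, then gather findings as they complete.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent.review, context) for agent in agents]
        findings = []
        for future in futures:
            findings.extend(future.result())
    return findings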

Phase 3: Revision Proposer

Generates patches for findings that need fixes.

Phase 4: Supervisor

  • Consolidates all findings
  • Removes duplicates
  • Prioritizes by severity
  • Applies nit limits
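
A minimal sketch of that consolidation pass; the severity labels, dict-shaped findings, and dedup key are assumptions for illustration:

SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2, "nit": 3}
MAX_NITS = 5  # mirrors MAX_NITS_PER_REVIEW from the documented .env

def consolidate(findings):
    # De-duplicate by (file, line, rule), then prioritize by severity.
    seen, unique = set(), []
    for finding in findings:
        key = (finding["file"], finding["line"], finding["rule"])
        if key not in seen:
            seen.add(key)
            unique.append(finding)
    unique.sort(key=lambda f: SEVERITY_ORDER.get(f["severity"], 99))
    # Apply the nit limit while keeping all higher-severity findings.
    nits = [f for f in unique if f["severity"] == "nit"][:MAX_NITS]
    return [f for f in unique if f["severity"] != "nit"] + nits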

Phase 5: Result Synthesis

Creates final PRReviewResult with:

  • Findings grouped by severity
  • Markdown review comment
  • JSON output for evaluation
  • Metrics (time, cost, token usage)
  • Real-time cost estimation based on provider and model
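
An illustrative shape for that output, sketched with Pydantic to match the repo's stated use of it; the field names are assumptions, not the actual domain/models.py definitions:

from pydantic import BaseModel

class Finding(BaseModel):
    severity: str                 # e.g. "critical", "major", "minor", "nit"
    file: str
    line: int
    message: str
    evidence: str                 # required tool output or code reference

class PRReviewResult(BaseModel):
    findings_by_severity: dict[str, list[Finding]]
    markdown_comment: str         # review comment posted for the PR
    total_tokens: int
    estimated_cost_usd: float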

Project Structure

.
├── agents/              # Agent implementations
│   ├── base.py         # Base agent class
│   ├── change_context_analyst.py
│   ├── security_reviewer.py
│   ├── style_reviewer.py
│   ├── logic_reviewer.py
│   ├── performance_reviewer.py
│   ├── documentation_reviewer.py
│   ├── test_reviewer.py
│   ├── revision_proposer.py
│   └── supervisor.py
├── domain/             # Domain models (Pydantic)
│   ├── models.py       # PRMetadata, Finding, Language enum, LLMProvider enum
│   └── __init__.py
├── tools/              # Analysis tool integrations
│   ├── base.py         # Tool base class
│   ├── git_diff.py
│   ├── linters.py      # Ruff, ESLint
│   ├── security.py     # Semgrep, Bandit
│   └── coverage.py
├── flows/              # Orchestration
│   ├── context_builder.py
│   └── review_flow.py
├── eval/               # Evaluation framework
│   ├── metrics/
│   └── dataset/
├── app/                # Application layer
│   ├── cli.py          # CLI interface
│   ├── config.py       # Settings
│   └── logging.py      # Structured logging
├── prompts/            # Versioned prompts
│   ├── cca/
│   ├── security/
│   ├── style/
│   └── ...
└── reviews/            # Review results storage

Configuration

Key settings in .env:

# LLM Provider Selection
LLM_PROVIDER=anthropic  # or "openai"

# OpenAI Configuration (if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-proj-...
OPENAI_MODEL=gpt-4-turbo-preview
OPENAI_TEMPERATURE=0.0
OPENAI_SEED=42

# Anthropic Configuration (if LLM_PROVIDER=anthropic)
# Recommended: claude-3-5-haiku-20241022 (best price-performance)
# Alternatives: claude-3-5-sonnet-20241022 (balanced), claude-3-opus-20240229 (highest quality)
ANTHROPIC_API_KEY=sk-ant-api03-...
ANTHROPIC_MODEL=claude-3-5-haiku-20241022

# GitHub (required for dataset collection and PR fetching)
GITHUB_TOKEN=ghp_...

# Review Configuration
MAX_NITS_PER_REVIEW=5
MAX_PATCH_LINES=10
ENABLE_PARALLEL_AGENTS=true

# Evaluation
EVAL_DATASET_PATH=./eval/dataset
EVAL_RESULTS_PATH=./eval/results
SEED_FOR_EXPERIMENTS=42

LLM Provider Selection

The framework supports both OpenAI and Anthropic LLM providers:

  • OpenAI: GPT-4 Turbo, GPT-4, GPT-3.5 Turbo
  • Anthropic:
    • Claude 3.5 Haiku (recommended): Best price-performance ratio ($0.80-1.00/1M input, $4-5/1M output)
    • Claude 3.5 Sonnet: Balanced performance ($3/1M input, $15/1M output)
    • Claude 3 Opus: Highest quality ($15/1M input, $75/1M output)

Set LLM_PROVIDER=anthropic or LLM_PROVIDER=openai in your .env file.

See .env.example for all available configuration options.
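
A minimal sketch of provider selection plus the cost arithmetic implied by the prices above; the env variable names come from the documented .env, while the price table (using the upper Haiku bounds) and function are illustrative:

import os

PRICES_PER_M = {  # USD per 1M tokens: (input, output)
    "claude-3-5-haiku-20241022": (1.00, 5.00),
    "claude-3-5-sonnet-20241022": (3.00, 15.00),
    "claude-3-opus-20240229": (15.00, 75.00),
}

provider = os.environ.get("LLM_PROVIDER", "anthropic")
model = (
    os.environ.get("OPENAI_MODEL", "gpt-4-turbo-preview")
    if provider == "openai"
    else os.environ.get("ANTHROPIC_MODEL", "claude-3-5-haiku-20241022")
)

def estimate_cost_usd(model_name, input_tokens, output_tokens):
    input_price, output_price = PRICES_PER_M[model_name]
    return (input_tokens / 1e6) * input_price + (output_tokens / 1e6) * output_price

# e.g. 120k input + 8k output tokens on Sonnet:
# 0.12 * 3 + 0.008 * 15 = 0.36 + 0.12 = $0.48
print(f"${estimate_cost_usd('claude-3-5-sonnet-20241022', 120_000, 8_000):.2f}")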

Dataset Collection

Collect real PRs from GitHub for evaluation:

# Configure GitHub token in .env
# GITHUB_TOKEN=ghp_your_token_here

# Collect balanced dataset
poetry run python eval/dataset/collect_dataset.py collect \
  --repos 5 \
  --prs-per-repo 5 \
  --balanced

See eval/dataset/README.md for detailed instructions.

Evaluation

Run evaluation on collected dataset:

# Evaluate using stored reviews (recommended)
poetry run python -m app.cli evaluate \
  --system multi_agent \
  --use-stored

# Evaluate specific PRs
poetry run python -m app.cli evaluate \
  --system multi_agent \
  --pr-ids "14468,2779" \
  --use-stored

# Re-run reviews and evaluate
poetry run python -m app.cli evaluate \
  --system single_agent \
  --rerun \
  --repo-path /path/to/repo

# Compare systems
poetry run python -m app.cli compare \
  ./eval/results/evaluation_single_agent.json \
  ./eval/results/evaluation_multi_agent.json \
  --latex results.tex

Research Goals

Evaluate whether multi-agent code review with tool integration achieves:

  • Higher actionability (more patches/clear fixes)
  • Lower noise (fewer false positives)
  • Better coverage (detect more critical issues)

Compared to single-agent LLM baselines.

Design Principles

  • SOLID: Single responsibility, dependency injection, clear abstractions
  • DRY: Shared base classes, reusable components
  • Evidence-Based: Every finding must cite tool output or code reference
  • Reproducible: Deterministic settings, versioned prompts, pinned tools
  • Type-Safe: Enum-based language and provider selection
  • Cost-Aware: Real-time token tracking and cost estimation

Development

# Run tests
poetry run pytest

# Lint
poetry run ruff check .

# Format
poetry run ruff format .

Contributing

See CONTRIBUTING.md for contribution guidelines.

License

MIT

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLAW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T03:20:49.111Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
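
A Python rendering of the retry policy above, as a sketch (the helper and requests usage are illustrative; only the attempt count, backoffs, and retryable conditions come from the guide):

import time
import requests

def get_with_retry(url, max_attempts=3, backoff_ms=(500, 1500, 3500)):
    # Retry on HTTP 429, HTTP 503, or a network timeout, per the policy above.
    response = None
    for attempt in range(max_attempts):
        try:
            response = requests.get(url, timeout=10)
            if response.status_code not in (429, 503):
                return response
        except requests.Timeout:
            response = None  # NETWORK_TIMEOUT is retryable
        if attempt < max_attempts - 1:
            time.sleep(backoff_ms[attempt] / 1000)
    return response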

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "crewai",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "multi-agent",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "label": "Vendor",
    "value": "Redrussianarmy",
    "category": "vendor",
    "href": "https://github.com/redrussianarmy/code-review-agentic-framework",
    "sourceUrl": "https://github.com/redrussianarmy/code-review-agentic-framework",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-16T06:46:45.434Z",
    "isPublic": true,
    "metadata": {}
  },
  {
    "factKey": "protocols",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "category": "compatibility",
    "href": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-16T06:46:45.434Z",
    "isPublic": true,
    "metadata": {}
  },
  {
    "factKey": "traction",
    "label": "Adoption signal",
    "value": "5 GitHub stars",
    "category": "adoption",
    "href": "https://github.com/redrussianarmy/code-review-agentic-framework",
    "sourceUrl": "https://github.com/redrussianarmy/code-review-agentic-framework",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-16T06:46:45.434Z",
    "isPublic": true,
    "metadata": {}
  },
  {
    "factKey": "docs_crawl",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "category": "integration",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true,
    "metadata": {}
  },
  {
    "factKey": "handshake_status",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "category": "security",
    "href": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/crewai-hakanbogan-code-review-agentic-framework/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true,
    "metadata": {}
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true,
    "metadata": {}
  }
]
