Crawler Summary
The Cyber Swiss Army Knife for encryption, encoding, compression and data analysis. CyberChef MCP Server: this project provides a **Model Context Protocol (MCP)** server interface for **CyberChef**, the "Cyber Swiss Army Knife" created by GCHQ. By running this server, you enable AI assistants (like Claude, Cursor AI, and others) to natively utilize CyberChef's extensive library of 463 data manipulation operations—including encryption, encoding, compression, and forensic analysis—as executable tools. Capability contract not published. No trust telemetry is available yet. 2 GitHub stars reported by the source. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
CyberChef is best for cipher, encoding, and decoding workflows where MCP compatibility matters.
Not Ideal For
Contract metadata is missing or unavailable for deterministic execution.
Evidence Sources Checked
editorial-content, GITHUB MCP, runtime-metrics, public facts pack
Public facts
4
Change events
0
Artifacts
0
Freshness
Feb 25, 2026
Trust score
Unknown
Compatibility
MCP
Freshness
Feb 25, 2026
Vendor
GitHub
Artifacts
0
Benchmarks
0
Last release
10.19.4
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 2 GitHub stars reported by the source. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/doublegate/CyberChef-MCP.git
Setup complexity is MEDIUM. Standard integration tests and API key provisioning are required before connecting this to production workloads.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
GitHub
Protocol compatibility
MCP
Adoption signal
2 GitHub stars
Handshake status
UNKNOWN
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
bash
# Docker Hub provides health scores and supply chain attestations
docker pull doublegate/cyberchef-mcp:latest
docker tag doublegate/cyberchef-mcp:latest cyberchef-mcp
docker run -i --rm cyberchef-mcp
bash
docker pull ghcr.io/doublegate/cyberchef-mcp_v1:latest
docker tag ghcr.io/doublegate/cyberchef-mcp_v1:latest cyberchef-mcp
docker run -i --rm cyberchef-mcp
bash
# Download from GitHub Releases
wget https://github.com/doublegate/CyberChef-MCP/releases/download/v1.9.0/cyberchef-mcp-v1.9.0-docker-image.tar.gz
docker load < cyberchef-mcp-v1.9.0-docker-image.tar.gz
bash
docker tag ghcr.io/doublegate/cyberchef-mcp_v1:v1.9.0 cyberchef-mcp
bash
docker run -i --rm cyberchef-mcp
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB MCP
Editorial quality
ready
This project provides a Model Context Protocol (MCP) server interface for CyberChef, the "Cyber Swiss Army Knife" created by GCHQ.
By running this server, you enable AI assistants (like Claude, Cursor AI, and others) to natively utilize CyberChef's extensive library of 463 data manipulation operations—including encryption, encoding, compression, and forensic analysis—as executable tools.
Latest Release: v1.9.0 | Release Notes | Security Policy | Security Fixes Report

CyberChef is a simple, intuitive web app for carrying out all manner of "cyber" operations within a web browser. It was originally conceived and built by GCHQ.
This fork wraps the core CyberChef Node.js API into an MCP server, bridging the gap between natural language AI intent and deterministic data processing.
This project maintains a selective sync relationship with the upstream GCHQ/CyberChef repository:
- Synced from upstream (src/core/operations/*.mjs) via automated workflows
- Maintained locally (src/node/mcp-server.mjs, tests, workflows)
See Upstream Sync Guide for details on the synchronization process.

The server exposes CyberChef operations as MCP tools:
- cyberchef_bake: The "Omni-tool". Executes a full CyberChef recipe (a chain of operations) on an input. Ideal for complex, multi-step transformations (e.g., "Decode Base64, then Gunzip, then prettify JSON").
- cyberchef_to_base64 / cyberchef_from_base64
- cyberchef_aes_decrypt
- cyberchef_sha2
- cyberchef_yara_rules
- cyberchef_search: A utility tool to help the AI discover available operations and their descriptions.
- cyberchef_recipe_create / cyberchef_recipe_get / cyberchef_recipe_list
- cyberchef_recipe_update / cyberchef_recipe_delete / cyberchef_recipe_execute
- cyberchef_recipe_export / cyberchef_recipe_import
- cyberchef_recipe_validate / cyberchef_recipe_test
- cyberchef_batch: Execute multiple operations in parallel or sequential mode
- cyberchef_telemetry_export: Privacy-first usage analytics (opt-in)
- cyberchef_cache_stats / cyberchef_cache_clear: Cache inspection and management
- cyberchef_quota_info: Resource quota and usage tracking
- cyberchef_migration_preview: Analyze recipes for v2.0.0 compatibility with two modes:
  - analyze mode: Check recipes for breaking changes with detailed diagnostics
  - transform mode: Automatically convert recipes to v2.0.0 format
- cyberchef_deprecation_stats: Track deprecated API usage statistics
- cyberchef_worker_stats: Monitor worker pool utilization, active/completed tasks, and pool configuration (workers are enabled via the CYBERCHEF_ENABLE_WORKERS=true environment variable)
- Set CYBERCHEF_TRANSPORT=http for browser and remote clients.
- Progress reporting: notifications/progress via the MCP SDK progress token mechanism for real-time status updates during long-running tasks.
- Input validation via zod.
Option 1: Pull from Docker Hub (Online, Recommended)
# Docker Hub provides health scores and supply chain attestations
docker pull doublegate/cyberchef-mcp:latest
docker tag doublegate/cyberchef-mcp:latest cyberchef-mcp
docker run -i --rm cyberchef-mcp
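For orientation, the kind of multi-step recipe cyberchef_bake executes (e.g. "Decode Base64, then Gunzip") maps onto an ordinary shell pipeline. A minimal sketch with standard Unix tools; the payload is invented for illustration:

```shell
# Build an illustrative payload: JSON, gzipped, then Base64-encoded
payload=$(printf '{"user":"alice","role":"admin"}' | gzip -c | base64 | tr -d '\n')

# Equivalent of a cyberchef_bake recipe "From Base64 -> Gunzip"
printf '%s' "$payload" | base64 -d | gunzip -c
# -> {"user":"alice","role":"admin"}
```

The MCP tool performs the same deterministic chain, but the AI assistant selects and orders the operations from natural language.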
Option 1b: Pull from GitHub Container Registry (Alternative)
docker pull ghcr.io/doublegate/cyberchef-mcp_v1:latest
docker tag ghcr.io/doublegate/cyberchef-mcp_v1:latest cyberchef-mcp
docker run -i --rm cyberchef-mcp
Option 2: Download Pre-built Image (Offline Installation)
For environments without direct GHCR access, download the pre-built Docker image tarball from the latest release:
Download the tarball (approximately 90MB compressed):
# Download from GitHub Releases
wget https://github.com/doublegate/CyberChef-MCP/releases/download/v1.9.0/cyberchef-mcp-v1.9.0-docker-image.tar.gz
Load the image into Docker:
docker load < cyberchef-mcp-v1.9.0-docker-image.tar.gz
Tag for easier usage:
docker tag ghcr.io/doublegate/cyberchef-mcp_v1:v1.9.0 cyberchef-mcp
Run the server:
docker run -i --rm cyberchef-mcp
Option 3: Build from Source
Clone the Repository:
git clone https://github.com/doublegate/CyberChef-MCP.git
cd CyberChef-MCP
Build the Docker Image:
docker build -f Dockerfile.mcp -t cyberchef-mcp .
Run the Server (Interactive Mode): This command starts the server and listens on stdin. This is what your MCP client will run.
docker run -i --rm cyberchef-mcp
Optional: Run with Enhanced Security (Read-Only Filesystem): For maximum security in production deployments:
docker run -i --rm --read-only --tmpfs /tmp:rw,noexec,nosuid,size=100m cyberchef-mcp
Server name: CyberChef
Command: docker
Args: run -i --rm cyberchef-mcp
Add to your configuration file (typically ~/.config/claude/config.json):
{
"mcpServers": {
"cyberchef": {
"command": "docker",
"args": ["run", "-i", "--rm", "cyberchef-mcp"]
}
}
}
Add to your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%/Claude/claude_desktop_config.json
{
"mcpServers": {
"cyberchef": {
"command": "docker",
"args": ["run", "-i", "--rm", "cyberchef-mcp"]
}
}
}
After adding the configuration, restart Claude Desktop. The CyberChef tools will appear in the available tools panel.
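A malformed config file is a common reason the tools never appear. Before restarting, one way to sanity-check the JSON; this sketch writes to a throwaway path so it stays self-contained (adjust the path to your real config file):

```shell
# Write a sample config to a scratch path and confirm it parses as JSON
cat > /tmp/claude_config_check.json <<'EOF'
{
  "mcpServers": {
    "cyberchef": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "cyberchef-mcp"]
    }
  }
}
EOF
python3 -m json.tool < /tmp/claude_config_check.json > /dev/null && echo "config OK"
```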
Version 1.4.0 introduces comprehensive performance optimizations and configurable resource limits. All features can be tuned via environment variables for your deployment needs.
LRU Cache for Operation Results
Automatic Streaming for Large Inputs
Resource Limits
Memory Monitoring
All features are configurable via environment variables:
# Logging (v1.5.0+)
LOG_LEVEL=info # Logging level: debug, info, warn, error, fatal
# Retry Logic (v1.5.0+)
CYBERCHEF_MAX_RETRIES=3 # Maximum retry attempts for transient failures
CYBERCHEF_INITIAL_BACKOFF=1000 # Initial backoff delay in milliseconds
CYBERCHEF_MAX_BACKOFF=10000 # Maximum backoff delay in milliseconds
CYBERCHEF_BACKOFF_MULTIPLIER=2 # Backoff multiplier for exponential backoff
# Streaming (v1.5.0+)
CYBERCHEF_STREAM_CHUNK_SIZE=1048576 # Chunk size for streaming (1MB)
CYBERCHEF_STREAM_PROGRESS_INTERVAL=10485760 # Progress reporting interval (10MB)
# Recipe Management (v1.6.0+)
CYBERCHEF_RECIPE_STORAGE=./recipes.json # Storage file path
CYBERCHEF_RECIPE_MAX_COUNT=10000 # Maximum number of recipes
CYBERCHEF_RECIPE_MAX_OPERATIONS=100 # Max operations per recipe
CYBERCHEF_RECIPE_MAX_DEPTH=5 # Max nesting depth
# Batch Processing (v1.7.0+)
CYBERCHEF_BATCH_MAX_SIZE=100 # Maximum operations per batch
CYBERCHEF_BATCH_ENABLED=true # Enable/disable batch processing
# Telemetry & Analytics (v1.7.0+)
CYBERCHEF_TELEMETRY_ENABLED=false # Privacy-first: disabled by default
# Rate Limiting (v1.7.0+)
CYBERCHEF_RATE_LIMIT_ENABLED=false # Disabled by default
CYBERCHEF_RATE_LIMIT_REQUESTS=100 # Max requests per window
CYBERCHEF_RATE_LIMIT_WINDOW=60000 # Time window in milliseconds
# Cache Management (v1.7.0+)
CYBERCHEF_CACHE_ENABLED=true # Enable/disable caching
# Resource Quotas (v1.7.0+)
CYBERCHEF_MAX_CONCURRENT_OPS=10 # Maximum concurrent operations
# Deprecation & Migration (v1.8.0+)
V2_COMPATIBILITY_MODE=false # Enable v2.0.0 behavior preview (elevates warnings to errors)
CYBERCHEF_SUPPRESS_DEPRECATIONS=false # Suppress deprecation warnings
# Transport (v1.9.0+)
CYBERCHEF_TRANSPORT=stdio # Transport type: stdio or http
CYBERCHEF_HTTP_PORT=3000 # HTTP transport port
CYBERCHEF_HTTP_HOST=127.0.0.1 # HTTP transport bind address
# Worker Thread Pool (v1.9.0+)
CYBERCHEF_WORKER_MIN_THREADS=1 # Minimum worker threads
CYBERCHEF_WORKER_MAX_THREADS=4 # Maximum worker threads
CYBERCHEF_WORKER_IDLE_TIMEOUT=30000 # Worker idle timeout in milliseconds
CYBERCHEF_WORKER_MIN_INPUT_SIZE=1024 # Minimum input size for worker routing (bytes)
# Performance (v1.4.0+)
CYBERCHEF_MAX_INPUT_SIZE=104857600 # Maximum input size (100MB)
CYBERCHEF_OPERATION_TIMEOUT=30000 # Operation timeout in milliseconds (30s)
CYBERCHEF_STREAMING_THRESHOLD=10485760 # Streaming threshold (10MB)
CYBERCHEF_ENABLE_STREAMING=true # Enable streaming for large operations
CYBERCHEF_ENABLE_WORKERS=false # Enable worker thread pool (disabled by default)
CYBERCHEF_CACHE_MAX_SIZE=104857600 # Cache maximum size (100MB)
CYBERCHEF_CACHE_MAX_ITEMS=1000 # Cache maximum items
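Several of these limits are raw byte counts. A small helper makes the arithmetic explicit and confirms the defaults quoted above (the function name is ours, not part of the server):

```shell
# Convert megabytes to bytes, matching the byte-count env vars above
mb_to_bytes() { echo $(( $1 * 1024 * 1024 )); }

echo "CYBERCHEF_MAX_INPUT_SIZE=$(mb_to_bytes 100)"     # 104857600 (100MB)
echo "CYBERCHEF_STREAMING_THRESHOLD=$(mb_to_bytes 10)" # 10485760 (10MB)
```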
High-Throughput Server (Large Files)
docker run -i --rm --memory=4g \
-e CYBERCHEF_MAX_INPUT_SIZE=524288000 \
-e CYBERCHEF_STREAMING_THRESHOLD=52428800 \
-e CYBERCHEF_CACHE_MAX_SIZE=524288000 \
-e CYBERCHEF_OPERATION_TIMEOUT=120000 \
ghcr.io/doublegate/cyberchef-mcp_v1:latest
Low-Memory Environment
docker run -i --rm --memory=512m \
-e CYBERCHEF_MAX_INPUT_SIZE=10485760 \
-e CYBERCHEF_STREAMING_THRESHOLD=5242880 \
-e CYBERCHEF_CACHE_MAX_SIZE=10485760 \
-e CYBERCHEF_CACHE_MAX_ITEMS=100 \
ghcr.io/doublegate/cyberchef-mcp_v1:latest
Claude Desktop with Custom Limits
{
"mcpServers": {
"cyberchef": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-e", "CYBERCHEF_MAX_INPUT_SIZE=209715200",
"-e", "CYBERCHEF_CACHE_MAX_SIZE=209715200",
"ghcr.io/doublegate/cyberchef-mcp_v1:latest"
]
}
}
}
Debug Logging for Troubleshooting (v1.5.0+)
docker run -i --rm \
-e LOG_LEVEL=debug \
-e CYBERCHEF_MAX_RETRIES=5 \
ghcr.io/doublegate/cyberchef-mcp_v1:latest
Worker Thread Pool for CPU-Intensive Operations (v1.9.0+)
docker run -i --rm \
-e CYBERCHEF_ENABLE_WORKERS=true \
-e CYBERCHEF_WORKER_MAX_THREADS=8 \
-e CYBERCHEF_WORKER_IDLE_TIMEOUT=60000 \
ghcr.io/doublegate/cyberchef-mcp_v1:latest
HTTP Transport for Browser/Remote Clients (v1.9.0+)
docker run --rm -p 3000:3000 \
-e CYBERCHEF_TRANSPORT=http \
-e CYBERCHEF_HTTP_PORT=3000 \
-e CYBERCHEF_HTTP_HOST=0.0.0.0 \
ghcr.io/doublegate/cyberchef-mcp_v1:latest
For detailed performance tuning guidance, see the Performance Tuning Guide.
Run the benchmark suite to measure performance on your hardware:
# Install dependencies
npm install
# Generate required configuration
npx grunt configTests
# Run benchmarks
npm run benchmark
The benchmark suite tests 20+ operations across multiple input sizes (1KB, 10KB, 100KB) in several categories.
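If you want to probe individual operations by hand, fixtures at those same sizes are easy to generate; the filenames here are illustrative, not part of the benchmark suite:

```shell
# Create 1KB, 10KB, and 100KB ASCII fixtures matching the benchmark input sizes
for kb in 1 10 100; do
  head -c $(( kb * 1024 )) /dev/zero | tr '\0' 'A' > "bench_input_${kb}kb.txt"
done
wc -c bench_input_*.txt
```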
This project implements comprehensive security hardening with continuous improvements:
- Multi-stage Docker builds: a -dev variant for compilation, a distroless runtime for production
- Read-only filesystem support: docker run --read-only with a tmpfs mount for /tmp (docker run -i --rm --read-only --tmpfs /tmp:rw,noexec,nosuid,size=100m cyberchef-mcp)
- Vulnerability scans fail the pipeline with exit-code: '1' in CI/CD
- Replaced Math.random() with crypto.randomBytes()
- SBOM generation via the docker sbom command
- docker scout quickview and docker sbom commands to inspect attestations locally
- --read-only flag for immutable deployments
- Secure randomness via crypto.randomBytes() or crypto.getRandomValues()
# Recommended: Run with maximum security options (Chainguard distroless)
docker run -i --rm \
--read-only \
--tmpfs /tmp:rw,noexec,nosuid,size=100m \
--cap-drop=ALL \
--security-opt=no-new-privileges \
cyberchef-mcp
# Note: Chainguard distroless already runs as non-root (UID 65532)
# --read-only requires tmpfs mount for /tmp directory
For detailed information, see:
CyberChef MCP Server has a comprehensive development roadmap spanning 19 releases across 6 phases through August 2027.
| Phase | Releases | Timeline | Focus | Status |
|-------|----------|----------|-------|--------|
| Phase 1: Foundation | v1.2.0 - v1.4.6 | Q4 2025 - Q1 2026 | Security hardening, upstream sync, performance | Completed |
| Phase 2: Enhancement | v1.5.0 - v1.7.3 | Q2 2026 | Streaming, recipe management, batch processing | Completed |
| Phase 3: Maturity | v1.8.0 - v2.0.0 | Q3 2026 | API stabilization, external tool integration, v2.0.0 | v1.9.0 Released |
| Phase 4: Expansion | v2.1.0 - v2.3.0 | Q4 2026 | Multi-modal, advanced transports, plugins | Planned |
| Phase 5: Enterprise | v2.4.0 - v2.6.0 | Q1 2027 | OAuth 2.1, RBAC, Kubernetes, observability | Planned |
| Phase 6: Evolution | v2.7.0 - v3.0.0 | Q2-Q3 2027 | Edge deployment, AI-native features, v3.0.0 | Planned |
v2.0.0 Planning: Comprehensive external project integration planning is now complete with 30 planning documents covering 80-120 new MCP tools from 8 security tool projects (Ciphey, cryptii, xortool, RsaCtfTool, John the Ripper, pwntools, katana, cyberchef-recipes). See External Project Integration for details.
See the Full Roadmap for detailed release plans and timelines.
Detailed documentation is organized in the docs/ directory.
If you want to modify the server code without Docker:
npm install
npx grunt configTests
npm run mcp
This project uses GitHub Actions to ensure stability and security:
Core Development Workflows:
- core-ci.yml: Tests the underlying CyberChef logic and configuration generation on Node.js v22
- mcp-docker-build.yml: Builds, verifies, and security scans the cyberchef-mcp Docker image
- pull_requests.yml: Automated testing and validation for pull requests
- performance-benchmarks.yml: Automated performance regression testing on code changes (v1.4.0+)
Code Quality & Coverage:
- codecov.yml: coverage flags, thresholds, PR commenting
Security & Release Workflows:
- security-scan.yml: Trivy vulnerability scanning, SBOM generation, weekly scheduled scans
- codeql.yml: Automated security scanning for code vulnerabilities (CodeQL v4)
- mcp-release.yml: Publishes Docker image to GHCR with SBOM attachment on version tags (v*), automatically creates GitHub releases
Upstream Sync Automation (v1.3.0+):
- upstream-monitor.yml: Monitors GCHQ/CyberChef for new releases weekly (Sundays at noon UTC), creates GitHub issues for review
- upstream-sync.yml: Selective file synchronization workflow; copies only src/core/operations/*.mjs files, prevents restoration of deleted web UI components, creates a PR for review
- rollback.yml: Emergency rollback mechanism with state comparison and ref-proj guidance
All workflows use the latest CodeQL Action v4 for security scanning and SARIF upload.
# Run all tests (requires Node.js 22+)
npm test
# Run MCP validation test suite (689 tests with Vitest)
npm run test:mcp
# Run MCP tests with coverage report
npm run test:coverage
# Run performance benchmarks (v1.4.0+)
npm run benchmark
# Test Node.js consumer compatibility
npm run testnodeconsumer
# Test UI (requires production build first)
npm run build
npm run testui
# Lint code
npm run lint
Test Coverage: The MCP server maintains comprehensive test coverage across 19 test suites.
Contributions to the MCP adapter are welcome! We appreciate:
- Feature branches (git checkout -b feature/amazing-feature)
- Conventional commit messages (feat:, fix:, docs:, etc.)
For contributions to the core CyberChef operations, please credit the original GCHQ repository.
If you find this project useful, consider supporting its development:
CyberChef is released under the Apache 2.0 Licence and is covered by Crown Copyright.
This MCP server adapter maintains the same Apache 2.0 license.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/snapshot"
curl -s "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/contract"
curl -s "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/trust"
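The invocation guide further down this page specifies a retry policy (max 3 attempts, 500/1500/3500 ms backoff, retry on HTTP 429/503 and network timeouts). A sketch of that schedule as a shell wrapper around any command, including the curl calls above; the function is ours, not part of the API:

```shell
# Retry a command up to 3 attempts, sleeping 0.5s then 1.5s between tries
# (the 3500 ms step would apply only if a fourth attempt were allowed)
retry_with_backoff() {
  for delay in 0 0.5 1.5; do
    sleep "$delay"
    "$@" && return 0
  done
  return 1
}

# Usage:
# retry_with_backoff curl -sf "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/snapshot"
```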
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_MCP",
"generatedAt": "2026-04-17T03:42:31.821Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "cipher",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "cypher",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "encode",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "decode",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "encrypt",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "decrypt",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "base64",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "xor",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "charset",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "hex",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "encoding",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "format",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "cybersecurity",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "data manipulation",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "data analysis",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:MCP|unknown|profile capability:cipher|supported|profile capability:cypher|supported|profile capability:encode|supported|profile capability:decode|supported|profile capability:encrypt|supported|profile capability:decrypt|supported|profile capability:base64|supported|profile capability:xor|supported|profile capability:charset|supported|profile capability:hex|supported|profile capability:encoding|supported|profile capability:format|supported|profile capability:cybersecurity|supported|profile capability:data manipulation|supported|profile capability:data analysis|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Github",
"href": "https://gchq.github.io/CyberChef",
"sourceUrl": "https://gchq.github.io/CyberChef",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T03:15:26.081Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP",
"href": "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T03:15:26.081Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "2 GitHub stars",
"href": "https://github.com/doublegate/CyberChef-MCP",
"sourceUrl": "https://github.com/doublegate/CyberChef-MCP",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T03:15:26.081Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-doublegate-cyberchef-mcp/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[]