Crawler Summary
Rampify MCP Server - Claude SEO Checker. An MCP server for real-time SEO audits in Claude Code and Cursor: check site health, validate meta tags, integrate Google Search Console, and get AI-powered recommendations directly in your editor. A Claude SEO checker is an MCP (Model Context Protocol) server that adds SEO validation capabilities to Claude Code and Cursor.
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Freshness
Last checked 2/25/2026
Best For
@rampify/mcp-server is best for claude-seo-checker, mcp-seo, seo-checker workflows where MCP compatibility matters.
Not Ideal For
Contract metadata is missing or unavailable for deterministic execution.
Evidence Sources Checked
editorial-content, GITHUB MCP, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Feb 25, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Trust score
Unknown
Compatibility
MCP
Freshness
Feb 25, 2026
Vendor
Rampify Dev
Artifacts
0
Benchmarks
0
Last release
0.1.9
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 2/25/2026.
Setup snapshot
git clone https://github.com/rampify-dev/rampify-mcp.git
Setup complexity is MEDIUM. Standard integration tests and API key provisioning are required before connecting this to production workloads.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Rampify Dev
Protocol compatibility
MCP
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
bash
npm install -g @rampify/mcp-server
bash
cd /path/to/your/project
claude mcp add -s local -t stdio \
  --env BACKEND_API_URL=https://www.rampify.dev \
  --env API_KEY=sk_live_your_api_key_here \
  --env SEO_CLIENT_DOMAIN=your-domain.com \
  rampify -- npx -y @rampify/mcp-server
# Reload your IDE window
bash
claude mcp add --scope user rampify npx \
  -y @rampify/mcp-server \
  --env BACKEND_API_URL=https://www.rampify.dev \
  --env API_KEY=sk_live_your_api_key_here
# Reload your IDE window
jsonc
{
"mcpServers": {
"rampify": {
"command": "npx",
"args": [
"-y",
"@rampify/mcp-server"
],
"env": {
// Always use production API
"BACKEND_API_URL": "https://www.rampify.dev",
// Get your API key from https://www.rampify.dev/settings/api-keys
"API_KEY": "sk_live_your_api_key_here",
// Optional: Set default domain for this project
"SEO_CLIENT_DOMAIN": "your-domain.com"
}
}
}
}
text
"What SEO tools are available?" "What can you do for SEO?" "List all SEO intelligence tools"
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB MCP
Editorial quality
ready
Claude SEO Checker - MCP server for real-time SEO audits in Claude Code and Cursor. Check site health, validate meta tags, integrate Google Search Console, and get AI-powered recommendations. Rampify MCP Server - Claude SEO Checker $1 $1 $1 **Turn Claude Code into a powerful SEO checker.** Real-time site audits, Google Search Console integration, and AI-powered recommendations directly in your editor (Cursor, Claude Code). **$1** What is a Claude SEO Checker? A **Claude SEO checker** is an MCP (Model Context Protocol) server that adds SEO validation capabilities to Claude Code and Cursor. Unlike **AI rank
Turn Claude Code into a powerful SEO checker. Real-time site audits, Google Search Console integration, and AI-powered recommendations directly in your editor (Cursor, Claude Code).
A Claude SEO checker is an MCP (Model Context Protocol) server that adds SEO validation capabilities to Claude Code and Cursor. Unlike AI rank trackers that check if your site appears in Claude's AI responses, Rampify analyzes your website's technical SEO, validates meta tags, detects issues, and provides actionable fix recommendations—all from your terminal or IDE.
Not an AI rank tracker. This is a developer tool that brings SEO intelligence into your coding workflow.
| Feature | Claude SEO Checker (Rampify) | AI Rank Trackers |
|---------|------------------------------|------------------|
| What it checks | YOUR website's SEO (meta tags, schema, performance) | If your site appears in Claude AI responses |
| Use case | Fix SEO issues before deployment | Track AI visibility |
| Where it works | Your IDE (Claude Code, Cursor) | Separate dashboard |
| Target audience | Developers building sites | Marketers tracking AI rank |
| Data source | Your site + Google Search Console | Claude AI responses |
Keywords: claude seo checker, mcp seo server, seo tools for claude code, cursor seo tools, ai seo checker, claude code seo
Bring Google Search Console data, SEO insights, and AI-powered recommendations directly into your editor. No context switching, no delays.
npm install -g @rampify/mcp-server
The global installation makes the rampify-mcp command available system-wide.
Before configuring the MCP server, get your API key:
Your key starts with sk_live_....
Recommended: Configure the MCP server per-project so each project knows its domain:
cd /path/to/your/project
claude mcp add -s local -t stdio \
--env BACKEND_API_URL=https://www.rampify.dev \
--env API_KEY=sk_live_your_api_key_here \
--env SEO_CLIENT_DOMAIN=your-domain.com \
rampify -- npx -y @rampify/mcp-server
# Reload your IDE window
Now you can use MCP tools without specifying domain:
get_page_seo - Automatically uses your project's domain
get_issues - Automatically uses your project's domain
crawl_site - Automatically uses your project's domain
For global access across all projects (must specify domain in each request):
claude mcp add --scope user rampify npx \
-y @rampify/mcp-server \
--env BACKEND_API_URL=https://www.rampify.dev \
--env API_KEY=sk_live_your_api_key_here
# Reload your IDE window
Add to your Cursor settings UI or ~/.cursor/config.json:
{
"mcpServers": {
"rampify": {
"command": "npx",
"args": [
"-y",
"@rampify/mcp-server"
],
"env": {
// Always use production API
"BACKEND_API_URL": "https://www.rampify.dev",
// Get your API key from https://www.rampify.dev/settings/api-keys
"API_KEY": "sk_live_your_api_key_here",
// Optional: Set default domain for this project
"SEO_CLIENT_DOMAIN": "your-domain.com"
}
}
}
}
Add to your Claude Code MCP settings:
{
"mcpServers": {
"rampify": {
"command": "npx",
"args": [
"-y",
"@rampify/mcp-server"
],
"env": {
// Always use production API
"BACKEND_API_URL": "https://www.rampify.dev",
// Get your API key from https://www.rampify.dev/settings/api-keys
"API_KEY": "sk_live_your_api_key_here",
// Optional: Set default domain for this project
"SEO_CLIENT_DOMAIN": "your-domain.com"
}
}
}
}
BACKEND_API_URL (required): Rampify API endpoint - always use https://www.rampify.dev
API_KEY (required): Your API key from the Rampify dashboard (starts with sk_live_...)
SEO_CLIENT_DOMAIN (optional): Default domain for this project (e.g., yoursite.com)
CACHE_TTL (optional): Cache duration in seconds (default: 3600)
LOG_LEVEL (optional): debug, info, warn, or error (default: info)
Ask Claude directly:
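As a sketch of how these variables might be consumed, here is an illustrative loader. The variable names and defaults come from the list above; the loader itself is an assumption, not the server's actual code:

```typescript
// Illustrative config loader for the environment variables listed above.
// Variable names match the docs; this is a sketch, not Rampify's real loader.
interface RampifyConfig {
  backendApiUrl: string;
  apiKey: string;
  seoClientDomain?: string;
  cacheTtlSeconds: number;
  logLevel: "debug" | "info" | "warn" | "error";
}

function loadConfig(env: Record<string, string | undefined>): RampifyConfig {
  const backendApiUrl = env.BACKEND_API_URL;
  const apiKey = env.API_KEY;
  if (!backendApiUrl || !apiKey) {
    // Both are marked required in the docs above.
    throw new Error("BACKEND_API_URL and API_KEY are required");
  }
  const levels = ["debug", "info", "warn", "error"] as const;
  const logLevel = levels.includes(env.LOG_LEVEL as any)
    ? (env.LOG_LEVEL as RampifyConfig["logLevel"])
    : "info"; // documented default
  return {
    backendApiUrl,
    apiKey,
    seoClientDomain: env.SEO_CLIENT_DOMAIN,
    cacheTtlSeconds: Number(env.CACHE_TTL ?? 3600), // documented default: 3600s
    logLevel,
  };
}
```

Failing fast on the two required variables keeps a misconfigured server from silently hitting the wrong backend.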
"What SEO tools are available?"
"What can you do for SEO?"
"List all SEO intelligence tools"
Claude will show you all available tools with descriptions.
Recommended: Use natural language (Claude will pick the right tool)
"What SEO issues does my site have?" → Calls get_issues
"Check this page's SEO" → Calls get_page_seo
"Crawl my site" → Calls crawl_site
Alternative: Call tools directly (if you know the exact name)
get_issues({ domain: "example.com" })
get_page_seo({ domain: "example.com", url_path: "/blog/post" })
crawl_site({ domain: "example.com" })
After Deployment:
1. "Crawl my site" (refresh data)
2. "Show me the issues" (review problems)
3. "Check this page's SEO" (verify specific pages)
Before Deployment:
1. "Check SEO of localhost:3000/new-page" (test locally)
2. Fix issues in editor
3. "Re-check SEO" (verify fixes)
4. Deploy when clean!
Regular Monitoring:
1. "What's my site's health score?"
2. "Show critical issues only"
3. Fix high-priority items
4. "Crawl my site" (refresh)
Content Planning:
1. "What should I write next?" (get GSC insights)
2. Review top performing pages and queries
3. Check query opportunities (CTR, rankings, gaps)
4. Create content targeting recommended topics
| Tool | Purpose | When to Use |
|------|---------|-------------|
| get_page_seo | Get SEO data for a specific page | Analyzing individual pages, checking performance |
| get_issues | Get all SEO issues with health score | Site-wide audits, finding problems |
| get_gsc_insights | Get GSC performance data with content recommendations | Discovering what to write next, finding ranking opportunities |
| generate_meta | Generate optimized meta tags | Fixing title/description issues, improving CTR |
| generate_schema | Auto-generate structured data | Adding schema.org JSON-LD to pages |
| crawl_site | Trigger fresh crawl | After deployments, to refresh data |
get_page_seo
Get comprehensive SEO data and insights for a specific page. Works with both production sites AND local dev servers!
Parameters:
domain (optional): Site domain (e.g., "example.com" or "localhost:3000"). Uses SEO_CLIENT_DOMAIN env var if not provided.
url_path (optional): Page URL path (e.g., "/blog/post")
file_path (optional): Local file path (will be resolved to URL)
content (optional): Current file content
Examples:
Production Site:
Ask Claude: "What's the SEO status of this page?" (while editing a file)
# Uses SEO_CLIENT_DOMAIN from env var
Or explicitly:
get_page_seo({ domain: "example.com", url_path: "/blog/post" })
Local Development Server:
Ask Claude: "Audit the local version of this page"
get_page_seo({ domain: "localhost:3000", url_path: "/blog/new-post" })
# Or set default to local:
SEO_CLIENT_DOMAIN=localhost:3000
# Now all queries default to local dev server
Response includes:
source: production_database, local_dev_server, or direct_content
Test pages BEFORE deployment:
Start your dev server:
npm run dev # Usually runs on localhost:3000
Query local pages:
Ask Claude: "Check SEO of localhost:3000/blog/draft-post"
Fix issues in your editor, then re-check:
Ask Claude: "Re-check SEO for this page on localhost"
Deploy when clean!
What gets analyzed locally:
Response format:
{
// Indicates data source: local_dev_server, production_database, or direct_content
"source": "local_dev_server",
// Exact URL that was analyzed
"fetched_from": "http://localhost:3000/blog/new-post",
"url": "http://localhost:3000/blog/new-post",
// Array of detected issues with severity and recommendations
"issues": [...],
// AI-generated summary and recommendations
"ai_summary": "**Local Development Analysis**..."
}
get_issues
Get SEO issues for entire site with health score. Returns a comprehensive report of all detected problems.
Parameters:
domain (optional): Site domain (uses SEO_CLIENT_DOMAIN if not provided)
filters (optional):
severity: Array of severity levels to include (['critical', 'warning', 'info'])
issue_types: Array of specific issue types
limit: Max issues to return (1-100, default: 50)
Examples:
Ask Claude: "What SEO issues does my site have?"
# Uses SEO_CLIENT_DOMAIN from env var
Ask Claude: "Show me only critical SEO issues"
# AI will filter by severity: critical
Ask Claude: "Check SEO issues for example.com"
get_issues({ domain: "example.com" })
Response includes:
Use cases:
get_gsc_insights (NEW)
Get Google Search Console performance data with AI-powered content recommendations. Discover what to write next based on real search data.
Parameters:
domain (optional): Site domain (uses SEO_CLIENT_DOMAIN if not provided)
period (optional): Time period for analysis - 7d, 28d, or 90d (default: 28d)
include_recommendations (optional): Include AI-powered content recommendations (default: true)
Examples:
Ask Claude: "What should I write next?"
# Uses SEO_CLIENT_DOMAIN, analyzes 28-day period
Ask Claude: "Show me my top performing pages from last week"
get_gsc_insights({ period: "7d" })
Ask Claude: "What queries am I ranking for?"
get_gsc_insights({ domain: "example.com", period: "28d" })
What it provides:
1. Performance Summary
2. Top Performing Pages
3. Query Opportunities (4 types automatically detected)
4. AI-Powered Content Recommendations
5. Query Clustering
Response includes:
{
"period": {
"start": "2025-10-27",
"end": "2025-11-24",
"days": 28
},
"summary": {
// Overall performance metrics for the time period
"total_clicks": 1247,
"total_impressions": 45382,
"avg_position": 12.3,
"avg_ctr": 0.027
},
"top_pages": [
{
// Top performing page with its metrics
"url": "/blog/context-driven-development",
"clicks": 324,
"impressions": 8920,
"avg_position": 3.2,
"ctr": 0.036,
"top_queries": [
{
// What query this page ranks for
"query": "context driven development",
"clicks": 156,
"position": 1.2
}
]
}
],
"opportunities": [
{
// Query opportunity with actionable recommendation
"query": "seo tools for developers",
"impressions": 3450,
"clicks": 12,
"position": 5.2,
"ctr": 0.003,
// Type: improve_ctr, improve_ranking, cannibalization, or keyword_gap
"opportunity_type": ["improve_ctr"],
"recommendation": "Improve CTR for 'seo tools for developers' - getting 3,450 impressions but only 12 clicks (0.3% CTR). Optimize meta title/description."
}
],
"content_recommendations": [
{
// AI-powered content suggestions based on real data
"title": "Optimize meta tags for high-impression queries",
"description": "You're appearing in search results but users aren't clicking...",
// Priority: high, medium, or low
"priority": "high",
// What data this is based on
"based_on": "high_impression_low_ctr",
// Target queries for this recommendation
"queries": ["seo tools for developers", "nextjs seo best practices"]
}
],
"meta": {
// Additional context about the data
"total_queries": 247,
"total_pages_with_data": 18,
"data_freshness": "GSC data has 2-3 days delay"
}
}
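As a sketch of how a client might post-process this response, the snippet below filters the opportunities array by the documented opportunity_type values and recomputes CTR. Field names follow the sample payload above; the helper functions themselves are illustrative, not part of the API:

```typescript
// Sketch: post-processing the get_gsc_insights response shown above.
// Field names (impressions, clicks, opportunity_type) follow the sample
// payload; the helpers are illustrative client-side code.
interface Opportunity {
  query: string;
  impressions: number;
  clicks: number;
  position: number;
  ctr: number;
  opportunity_type: string[];
}

// CTR is clicks divided by impressions (0.003 ≈ 0.3% in the sample above).
const ctr = (clicks: number, impressions: number): number =>
  impressions === 0 ? 0 : clicks / impressions;

// Surface the highest-impression "improve_ctr" opportunities first.
function topCtrOpportunities(opps: Opportunity[], limit = 3): Opportunity[] {
  return opps
    .filter((o) => o.opportunity_type.includes("improve_ctr"))
    .sort((a, b) => b.impressions - a.impressions)
    .slice(0, limit);
}
```

For the sample row above ("seo tools for developers", 12 clicks over 3,450 impressions), ctr returns about 0.0035, matching the 0.3% quoted in the recommendation text.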
Use cases:
Content Strategy:
User: "What should I write next?"
→ Get top 3 content recommendations with target queries
→ See which topics have proven search interest
→ Discover keyword gaps to expand existing content
Performance Optimization:
User: "Which pages should I optimize?"
→ Find high-impression, low-CTR pages
→ Get specific meta tag improvement suggestions
→ See pages stuck on page 2 (quick wins)
Keyword Research:
User: "What queries am I ranking for?"
→ See all queries with impressions/clicks data
→ Identify cannibalization issues
→ Find related queries to expand content
Topic Authority:
User: "What topics should I create comprehensive guides on?"
→ Get query clusters (related searches)
→ See opportunities for consolidating authority
→ Discover emerging topic trends
When to use:
Requirements:
Pro tips:
Use generate_meta to optimize high-opportunity pages
crawl_site
Trigger a fresh site crawl and analysis. This is an active operation that fetches and analyzes all pages.
Parameters:
domain (optional): Site domain (uses SEO_CLIENT_DOMAIN if not provided)
Examples:
Ask Claude: "Crawl my site after deploying changes"
# Uses SEO_CLIENT_DOMAIN from env var
Ask Claude: "Analyze example.com"
crawl_site({ domain: "example.com" })
What it does:
get_issues or get_page_seo shows fresh data
Response includes:
When to use:
get_issues to ensure fresh data
Note: This is the only tool that actively crawls your site. get_issues and get_page_seo just fetch existing data.
generate_schema
Auto-generate structured data (schema.org JSON-LD) for any page. Detects page type and generates appropriate schema with validation.
Parameters:
domain (optional): Site domain (uses SEO_CLIENT_DOMAIN if not provided)
url_path (required): Page URL path (e.g., "/blog/post")
schema_type (optional): Specific schema type or "auto" to detect (default: "auto")
Supported schema types:
Article / BlogPosting - Blog posts, articles, news
Product - Product pages, e-commerce
Organization - About pages, company info
FAQPage - FAQ pages with Q&A
BreadcrumbList - Auto-added for navigation
Examples:
Ask Claude: "Generate schema for /blog/indexnow-faster-indexing"
# Auto-detects Article schema
Ask Claude: "Generate Product schema for /products/widget"
generate_schema({ url_path: "/products/widget", schema_type: "Product" })
Ask Claude: "Add structured data to this page"
# If editing a file, Claude will detect the URL and generate schema
What it does:
Response includes:
Use cases:
get_issues
Example output:
{
// Automatically detected from URL patterns and content
"detected_page_type": "Article",
// What schemas make sense for this page type
"recommended_schemas": ["Article", "BreadcrumbList"],
"schemas": [
{
"type": "Article",
// Ready-to-use JSON-LD structured data
"json_ld": { ... },
"validation": {
"valid": false,
// Things to fix before deploying
"warnings": ["Replace placeholder values with actual data"]
}
}
],
"implementation": {
// Framework-specific instructions
"where_to_add": "In your page component's metadata",
"code_snippet": "// Next.js code here",
"instructions": "1. Add code to page.tsx..."
}
}
Pro tip: After generating schema, test it with Google Rich Results Test
generate_meta (Enhanced with Client Profile Context)
Generate optimized meta tags (title, description, Open Graph tags) for a page. Now uses your client profile to generate highly personalized, business-aware meta tags that align with your target audience, brand voice, and competitive positioning.
Parameters:
domain (optional): Site domain (uses SEO_CLIENT_DOMAIN if not provided)
url_path (required): Page URL path (e.g., "/blog" or "/blog/post")
include_og_tags (optional): Include Open Graph tags for social sharing (default: true)
framework (optional): Framework format for code snippet - nextjs, html, astro, or remix (default: "nextjs")
NEW: Client Profile Integration
The tool automatically fetches your client profile and uses context like:
Examples:
Ask Claude: "Generate better meta tags for /blog"
# Auto-analyzes content and generates optimized title/description
Ask Claude: "Fix the title too short issue on /blog/post"
generate_meta({ url_path: "/blog/post" })
Ask Claude: "Create meta tags without OG tags for /about"
generate_meta({ url_path: "/about", include_og_tags: false })
Ask Claude: "Generate HTML meta tags for /products/widget"
generate_meta({ url_path: "/products/widget", framework: "html" })
What it does:
Response includes:
Use cases:
Real-World Impact: Before vs. After
Without Profile Context (Generic):
Title: Project Management Software | Company
Description: Manage your projects efficiently with our powerful collaboration platform. Streamline workflows and boost productivity.
With Profile Context (Target audience: developers, Differentiators: "real-time collaboration, 50% faster"):
Title: Real-Time Dev Collaboration | 50% Faster | Company
Description: Built for developers: API-first project management with real-time sync. Ship 50% faster than competitors. Try free for 30 days →
Profile Warnings System:
If your profile is incomplete, you'll get helpful warnings:
{
"profile_warnings": [
// WARNING: Target audience not set - recommendations will be generic
"Target audience not set - recommendations will be generic. Add this in your business profile for better results.",
// WARNING: No target keywords set - can't optimize for ranking goals
"No target keywords set - can't optimize for ranking goals. Add keywords in your business profile.",
// TIP: Add your differentiators to make meta descriptions more compelling
"Add your differentiators in the business profile to make meta descriptions more compelling.",
// TIP: Set your brand voice to ensure consistent tone
"Set your brand voice in the business profile to ensure consistent tone."
]
}
Or if no profile exists at all:
{
"profile_warnings": [
// No client profile found - recommendations will be generic
"No client profile found. Fill out your profile at /clients/{id}/profile for personalized recommendations."
]
}
Example workflow:
1. User: "What SEO issues does my site have?"
→ get_issues shows "Title too short on /blog"
2. User: "Fix the title issue on /blog"
→ generate_meta analyzes /blog page
→ Fetches client profile for context
→ Shows warnings if profile incomplete
→ Claude generates optimized, personalized title
→ Returns Next.js code snippet to add to page
3. User copies code to app/blog/page.tsx
4. User: "Re-check SEO for /blog"
→ get_page_seo confirms title is now optimal
Setting Up Your Profile:
To get the most value from generate_meta:
/clients/{your-client-id}/profile in the dashboard
SEO Best Practices (Built-in):
Framework-specific output:
Next.js (App Router):
export const metadata = {
title: "Your Optimized Title | Brand",
description: "Your compelling meta description...",
openGraph: {
title: "Your Optimized Title",
description: "Your compelling meta description...",
images: [{ url: "/path/to/image.jpg" }],
},
};
HTML:
<title>Your Optimized Title | Brand</title>
<meta name="description" content="Your compelling meta description...">
<meta property="og:title" content="Your Optimized Title">
<meta property="og:description" content="Your compelling meta description...">
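A quick length sanity check can catch truncation before these tags ship. The 60/160 character ceilings below are common SEO heuristics assumed for illustration, not limits documented by Rampify:

```typescript
// Sketch: a pre-deploy length check for generated meta tags. The 60/160
// character ceilings are widely used SEO heuristics assumed here; they are
// not limits published by Rampify.
interface MetaCheck {
  ok: boolean;
  warnings: string[];
}

function checkMeta(title: string, description: string): MetaCheck {
  const warnings: string[] = [];
  if (title.length === 0) warnings.push("title is empty");
  if (title.length > 60) warnings.push(`title is ${title.length} chars (> 60)`);
  if (description.length === 0) warnings.push("description is empty");
  if (description.length > 160)
    warnings.push(`description is ${description.length} chars (> 160)`);
  return { ok: warnings.length === 0, warnings };
}
```

Running this over the output of generate_meta before pasting it into a page component is a cheap guard against over-long titles being cut off in search results.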
Pro tips:
npm run watch
This will recompile TypeScript on every change.
# In one terminal, start your backend
cd /path/to/rampify
npm run dev
# In another terminal, build and run MCP server
cd packages/mcp-server
npm run dev
The MCP server will connect to your local backend at http://localhost:3000.
Set LOG_LEVEL=debug in your .env file to see detailed logs:
LOG_LEVEL=debug npm run dev
MCP Server (packages/mcp-server)
├── src/
│ ├── index.ts # MCP server entry point
│ ├── config.ts # Configuration loader
│ ├── tools/ # MCP tool implementations
│ │ ├── get-seo-context.ts
│ │ ├── scan-site.ts
│ │ └── index.ts # Tool registry
│ ├── services/ # Business logic
│ │ ├── api-client.ts # Backend API client
│ │ ├── cache.ts # Caching layer
│ │ └── url-resolver.ts # File path → URL mapping
│ ├── utils/
│ │ └── logger.ts # Logging utility
│ └── types/ # TypeScript types
│ ├── seo.ts
│ └── api.ts
└── build/ # Compiled JavaScript (generated)
The MCP server caches responses for 1 hour (configurable via CACHE_TTL) to improve performance.
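The TTL behavior described above can be sketched as follows. A Map-based store with an injectable clock is an assumption for illustration; the server's real caching layer (services/cache.ts) may differ:

```typescript
// Sketch of TTL caching as described above: entries expire after
// ttlSeconds (default 3600, matching CACHE_TTL). Illustrative only.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private ttlSeconds = 3600,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  set(key: string, value: V): void {
    this.store.set(key, {
      value,
      expiresAt: this.now() + this.ttlSeconds * 1000,
    });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }
}
```

Lazy eviction on read keeps the sketch simple; a production cache might also sweep expired entries on a timer.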
Cache is cleared automatically when:
Solution: Add the site to your dashboard first at http://localhost:3000
Checklist:
npm run dev in root directory
BACKEND_API_URL correct in .env?
LOG_LEVEL=debug
Checklist:
npm run build
Common causes:
Solution:
Make sure your dev server is running (npm run dev)
Specify the port: localhost:3000 (not just localhost)
Example error:
Could not connect to local dev server at http://localhost:3000/blog/post.
Make sure your dev server is running (e.g., npm run dev).
How to tell which source you're using:
Every response includes explicit source and fetched_from fields:
{
// Source: local_dev_server, production_database, or direct_content
"source": "local_dev_server",
// Exact URL that was fetched and analyzed
"fetched_from": "http://localhost:3000/page",
// ... rest of response
}
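A client can narrow on that documented source field before acting on a response. The three values come from the docs above; the handler strings are placeholders:

```typescript
// Sketch: narrowing the documented `source` field of a response.
// The three union members come from the docs above; the returned
// descriptions are illustrative.
type SeoSource = "local_dev_server" | "production_database" | "direct_content";

interface PageSeoResponse {
  source: SeoSource;
  fetched_from: string;
}

function describeSource(res: PageSeoResponse): string {
  switch (res.source) {
    case "local_dev_server":
      return `analyzed live dev server at ${res.fetched_from}`;
    case "production_database":
      return `served from stored crawl data for ${res.fetched_from}`;
    case "direct_content":
      return `analyzed file content mapped to ${res.fetched_from}`;
  }
}
```

Exhaustive switching on the union means the compiler flags any new source value the API might add later.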
Pro tip: Set SEO_CLIENT_DOMAIN per project to avoid specifying domain every time:
SEO_CLIENT_DOMAIN=localhost:3000 (local development)
SEO_CLIENT_DOMAIN=yoursite.com (production)
Available tools:
get_page_seo - Get SEO data for a specific page
get_issues - Get all site issues with health score
get_gsc_insights - Get GSC performance data with content recommendations (NEW)
generate_meta - AI-powered title and meta description generation
generate_schema - Auto-generate structured data (Article, Product, etc.)
crawl_site - Trigger fresh site crawl
suggest_internal_links - Internal linking recommendations
check_before_deploy - Pre-deployment SEO validation
optimize_blog_post - Deep optimization for blog content
optimize_landing_page - Conversion-focused SEO
Need help?
MIT
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/snapshot"
curl -s "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/contract"
curl -s "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/trust"
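For programmatic access, the snippet below wraps the snapshot endpoint with the retry policy this record publishes further down (3 attempts, 500/1500/3500 ms backoff, retry on HTTP 429/503). The endpoint URL is copied from the curl examples above; the error handling is illustrative:

```typescript
// Sketch: calling the snapshot endpoint with the published retry policy
// (maxAttempts 3, backoffMs [500, 1500, 3500], retry on 429/503).
// Requires Node 18+ for the global fetch.
const BACKOFF_MS = [500, 1500, 3500];

function backoffFor(attempt: number): number {
  // attempt is 0-based; clamp to the last published delay
  return BACKOFF_MS[Math.min(attempt, BACKOFF_MS.length - 1)];
}

async function fetchSnapshot(agentId: string): Promise<unknown> {
  const url = `https://xpersona.co/api/v1/agents/${agentId}/snapshot`;
  for (let attempt = 0; attempt < 3; attempt++) {
    const res = await fetch(url);
    if (res.ok) return res.json();
    if (res.status !== 429 && res.status !== 503) {
      throw new Error(`non-retryable HTTP ${res.status}`);
    }
    // Back off before the next attempt, per the published schedule.
    await new Promise((r) => setTimeout(r, backoffFor(attempt)));
  }
  throw new Error("retries exhausted");
}
```

Usage would be fetchSnapshot("mcp-rampify-dev-rampify-mcp"); network timeouts, also listed as retryable, are omitted here for brevity.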
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
83
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
80
A Model Context Protocol (MCP) server for GitLab
Traction
No public download signal
Freshness
Updated 2d ago
Rank
74
Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Rank
72
An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)
Traction
No public download signal
Freshness
Updated 2d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"MCP"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_MCP",
"generatedAt": "2026-04-17T04:48:40.886Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "MCP",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "claude-seo-checker",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "mcp-seo",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "seo-checker",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "claude-code",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "cursor",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "mcp",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "seo",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "ai-seo",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "developer-tools",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "google-search-console",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "seo-audit",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "meta-tags",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "cli",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:MCP|unknown|profile capability:claude-seo-checker|supported|profile capability:mcp-seo|supported|profile capability:seo-checker|supported|profile capability:claude-code|supported|profile capability:cursor|supported|profile capability:mcp|supported|profile capability:seo|supported|profile capability:ai-seo|supported|profile capability:developer-tools|supported|profile capability:google-search-console|supported|profile capability:seo-audit|supported|profile capability:meta-tags|supported|profile capability:cli|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Rampify Dev",
"href": "https://github.com/rampify-dev/rampify-mcp#readme",
"sourceUrl": "https://github.com/rampify-dev/rampify-mcp#readme",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-25T03:11:18.269Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "MCP",
"href": "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-25T03:11:18.269Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/mcp-rampify-dev-rampify-mcp/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]
Sponsored
Ads related to @rampify/mcp-server and adjacent AI workflows.