Crawler Summary

dev-performance answer-first brief

Generates personal developer productivity and influence reports from Jira, GitHub, Confluence, and Google Calendar data. Use when the user asks about their performance, productivity, influence, metrics, weekly report, monthly report, quarterly report, yearly report, promotion preparation, improvement tracking, goal tracking, or self-calibration. Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Freshness

Last checked 4/14/2026

Best For

dev-performance is best for general automation workflows where MCP compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 94/100

dev-performance

Generates personal developer productivity and influence reports from Jira, GitHub, Confluence, and Google Calendar data.

MCP · self-declared

Public facts

4

Change events

1

Artifacts

0

Freshness

Apr 14, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Trust evidence available

Trust score

Unknown

Compatibility

MCP

Freshness

Apr 14, 2026

Vendor

Sergeatx

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Setup snapshot

git clone https://github.com/SergeATX/cursor-dev-performance-skill.git

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Sergeatx

profile · medium
Observed Apr 14, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

MCP

contract · medium
Observed Apr 14, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

0

Snippets

0

Languages

typescript

Parameters

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready

Generates personal developer productivity and influence reports from Jira, GitHub, Confluence, and Google Calendar data. See the full README below.

Full README

name: dev-performance
description: Generates personal developer productivity and influence reports from Jira, GitHub, Confluence, and Google Calendar data. Use when the user asks about their performance, productivity, influence, metrics, weekly report, monthly report, quarterly report, yearly report, promotion preparation, improvement tracking, goal tracking, or self-calibration.

Developer Performance Reporting

When Triggered

The user asks about their personal performance for a time period. Examples:

  • "How did I do last week?"
  • "Monthly performance report for January"
  • "Show me my quarterly stats"
  • "Prepare my promotion evidence for Q4"
  • "Show my improvement over the last 6 weeks"
  • "Compare my last two quarters"
  • "What's my influence radius this month?"

Step 1: Load Configuration

Read config.md to get:

  • Jira cloud ID, account ID, project keys
  • GitHub username, org, and repos to query
  • Google Calendar ID(s) and work hours (if configured)

Step 2: Determine Time Range

Parse the user's request into a concrete date range:

| Request | Start Date | End Date |
|---------|-----------|----------|
| "last week" | Previous Monday | Previous Sunday |
| "this week" | Current Monday | Today |
| "last month" | 1st of previous month | Last day of previous month |
| "this month" | 1st of current month | Today |
| "last quarter" | 1st of previous quarter | Last day of previous quarter |
| "Q{N} {YEAR}" | 1st of that quarter | Last day of that quarter |
| "last year" | Jan 1 of previous year | Dec 31 of previous year |
| "{YEAR}" (e.g. "2025") | Jan 1 of that year | Dec 31 of that year |
| "last N weeks" | N weeks ago | Today |
| "last N months" | N months ago | Today |
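As a sketch, a few rows of the table above can be resolved to concrete dates like this (function name hypothetical; weeks run Monday-Sunday as the table specifies):

```python
from datetime import date, timedelta

def parse_range(request: str, today: date) -> tuple[date, date]:
    """Resolve a relative phrase from the table into (start, end) dates.
    Only a handful of phrases are sketched here."""
    monday = today - timedelta(days=today.weekday())
    if request == "last week":
        return monday - timedelta(days=7), monday - timedelta(days=1)
    if request == "this week":
        return monday, today
    if request == "last month":
        last_prev = today.replace(day=1) - timedelta(days=1)
        return last_prev.replace(day=1), last_prev
    if request == "this month":
        return today.replace(day=1), today
    raise ValueError(f"unsupported phrase: {request!r}")
```

Quarter, year, and "last N" phrases follow the same pattern with different anchors.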

Step 3: Collect Jira Data

Read queries/jira-queries.md for exact JQL templates.

Use the user-jira MCP server with searchJiraIssuesUsingJql tool. Execute these queries:

  1. Completed tickets -- substituting account ID and date range
  2. Currently in progress -- for WIP count
  3. Tickets I created -- for initiative score
  4. Re-opened tickets -- for quality signal

For each completed ticket, compute lead time from created to resolutiondate.

For monthly+ reports, also compute cycle time: use getJiraIssue with expand=changelog on each completed ticket. Parse changelog.histories for status transitions to find time spent in active work (In Progress / Code Review → Closed).

Request fields: summary, status, issuetype, priority, project, created, resolutiondate

Step 4: Collect GitHub Data

Read queries/github-queries.md for exact CLI commands.

Run gh CLI commands for each repo in config:

  1. PRs merged in period -- with additions/deletions for PR size
  2. Reviews given in period -- with author for influence radius
  3. PRs opened in period -- for open/merge ratio

Compute: average PR size, time-to-merge, review-to-PR ratio, distinct authors reviewed.

For monthly+ reports, also collect:

  4. Review turnaround -- via gh api repos/{REPO}/pulls/{PR}/reviews for response times
  5. CI pass rate -- via gh api repos/{REPO}/commits/{SHA}/check-runs on first commit of each PR
  6. Revert frequency -- search for PRs with "revert" or "hotfix" in title
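The merged-PR summary metrics can be computed from the gh CLI's JSON output. A sketch assuming `gh pr list --state merged --json number,additions,deletions,createdAt,mergedAt` (those are real `--json` fields; the helper name is hypothetical):

```python
import json
import statistics
from datetime import datetime

def _iso(ts: str) -> datetime:
    # gh emits timestamps like 2026-04-01T10:00:00Z
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def pr_metrics(gh_json: str) -> dict:
    """Average PR size (additions + deletions) and median
    time-to-merge in hours from gh's merged-PR list."""
    prs = json.loads(gh_json)
    if not prs:
        return {"count": 0, "avg_size": 0, "median_merge_hours": None}
    sizes = [p["additions"] + p["deletions"] for p in prs]
    hours = [(_iso(p["mergedAt"]) - _iso(p["createdAt"])).total_seconds() / 3600
             for p in prs]
    return {
        "count": len(prs),
        "avg_size": sum(sizes) / len(sizes),
        "median_merge_hours": statistics.median(hours),
    }
```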

Step 5: Collect Confluence Data (for monthly+ reports)

Read queries/confluence-queries.md for CQL templates.

Use the user-jira MCP server with searchConfluenceUsingCql tool:

  1. Pages created by me in period -- knowledge creation
  2. Pages contributed to in period -- knowledge maintenance

Compute knowledge distribution score: (pages_created * 2) + pages_updated

Step 6: Collect Google Calendar Data

Read queries/calendar-queries.md for query patterns and classification rules.

Use the google-calendar MCP server with list-events tool. For each calendar ID in config:

  1. List events in the date range -- exclude declined and cancelled events, exclude all-day events
  2. Classify meetings by type: 1:1, team standup, team meeting, cross-team, interview, external, incident, other
  3. Compute meeting load: total meetings, total meeting hours, average meeting hours/day
  4. Compute focus time: identify blocks of 2+ consecutive hours during work hours with no meetings
  5. Compute collaboration breadth: count distinct attendee emails (excluding self)
  6. Meeting type distribution: count and hours per meeting type
  7. After-hours meetings: count events outside configured work hours (default 9:00-18:00, Mon-Fri)

If the Google Calendar MCP is not configured, skip this step and note "Calendar data not available -- configure Google Calendar MCP for meeting/focus metrics" in the report.
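The focus-time rule in step 6.4 (blocks of 2+ consecutive meeting-free hours within work hours) can be sketched as follows, assuming meetings fall within the work window; names are hypothetical:

```python
from datetime import datetime

def focus_hours(day_start: datetime, day_end: datetime,
                meetings: list[tuple[datetime, datetime]]) -> float:
    """Sum the hours of gaps >= 2h between meetings inside the
    work window. Overlapping events are absorbed by the running
    cursor, so they are not double-counted."""
    total = 0.0
    cursor = day_start
    for start, end in sorted(meetings):
        gap = (min(start, day_end) - cursor).total_seconds() / 3600
        if gap >= 2:
            total += gap
        cursor = max(cursor, end)
    tail = (day_end - cursor).total_seconds() / 3600
    if tail >= 2:
        total += tail
    return total
```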

Step 7: Collect Slack Data (future -- pending admin approval)

Skip this step. Slack integration is deferred until the official Slack MCP app is approved by workspace admins. When available, queries will be defined in queries/slack-queries.md.

Step 8: Compute Derived Metrics

| Metric | Formula |
|--------|---------|
| Bug-to-feature ratio | bugs / (stories + tasks) |
| Review-to-PR ratio | reviews given / PRs opened |
| Influence radius | distinct people across Jira comments + GitHub reviews |
| Initiative score | self-created tickets + self-initiated PRs |
| Quality signals | re-opens + reverts (lower is better) |
| CI discipline | PRs with clean first push / total PRs (percentage) |
| Review responsiveness | median time from PR creation to my first review |
| Knowledge distribution | (confluence pages created * 2) + pages updated |
| Meeting load | total meeting hours / working days in period |
| Focus ratio | focus hours / total available work hours |
| Collaboration breadth | distinct people from meetings + reviews + Jira comments |
| Meeting-to-delivery ratio | total meeting hours / tickets completed |
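Several of these formulas are ratios that can divide by zero in a quiet period. A hedged sketch (names hypothetical) that returns None, to be reported as "N/A", in that case:

```python
def derived_metrics(bugs: int, stories: int, tasks: int,
                    reviews_given: int, prs_opened: int,
                    meeting_hours: float, working_days: int,
                    focus_h: float, avail_h: float) -> dict:
    """A few of the table's ratio formulas, with zero-denominator
    guards so empty periods surface as None rather than crashing."""
    def ratio(a, b):
        return a / b if b else None
    return {
        "bug_to_feature": ratio(bugs, stories + tasks),
        "review_to_pr": ratio(reviews_given, prs_opened),
        "meeting_load": ratio(meeting_hours, working_days),
        "focus_ratio": ratio(focus_h, avail_h),
    }
```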

Step 9: Format Report

Read the appropriate template from reports/:

Fill in all placeholders with collected data. Include observations section with 1-3 sentences noting anything unusual.

For quarterly reports, also include:

  • Month-over-month breakdown tables showing trends
  • Confluence pages created/updated
  • Promotion-ready narrative section (write in third person, evidence-backed)
  • Quarter-over-quarter comparison if previous data exists

For goal/improvement tracking (when user mentions improvement goals, targets, or growth tracking):

  • Include the Goal Tracking Mode section from the quarterly template
  • Show week-over-week progress against specific improvement targets
  • Highlight improvement trajectory with direction indicators

Data Validation & Sanity Checks

Before presenting the report, validate the data:

  1. Zero-result check: If all Jira queries return zero results, verify the account ID and date range are correct. Prompt the user: "No Jira activity found for {date range}. Is your account ID correct in config.md?"
  2. Outlier detection: Flag any cycle time >30 business days or time-to-merge >2 weeks as outliers. Include them in calculations but call them out.
  3. Negative values: If any computed delta (cycle time, lead time, turnaround) is negative, the data is inconsistent. Report "N/A -- data inconsistency" for that metric.
  4. Date range sanity: If the requested period is in the future or >1 year ago, confirm with the user before querying.
  5. GitHub rate limits: If gh CLI returns an error mentioning rate limits, tell the user and suggest waiting or reducing the repo list.
  6. Cross-source consistency: If Jira shows 15 tickets completed but GitHub shows 0 PRs merged, note this in observations -- it may indicate non-code work (config changes, documentation, ops) or a config issue.
  7. Large result sets: For quarterly/yearly reports, if any query returns >100 results, report totals but limit detail tables to the 20 most significant items (highest priority, largest PRs).
  8. Calendar sanity: If calendar shows >12 hours of meetings in a single day, flag as a likely data issue (overlapping events or misconfigured calendar). If focus ratio is 0% for a full week, note it as unusual.

Performance Optimization

  • Weekly reports: Skip cycle time, review turnaround, CI pass rate, and revert queries. These are monthly+ metrics. This cuts query time by ~60%.
  • Monthly reports: Run all queries. For per-PR detail queries (review turnaround, CI pass rate), sample 15 most recent PRs if >15 exist.
  • Quarterly reports: Run monthly queries for each month separately, then aggregate. This gives month-over-month breakdown for free. For per-PR queries, sample 15 per month (45 max).
  • Yearly reports: Run quarterly aggregates. For per-PR queries, sample 10 per quarter (40 max). Limit detail tables to top 20.
  • Skip inactive repos: If a repo has 0 merged PRs for the period, skip all follow-up queries for that repo.
  • Reuse data: PR createdAt from the merged-PRs query should be reused for review turnaround calculations -- don't re-fetch.
  • Calendar queries are lightweight: A single list-events call covers a full month. Include calendar data at all report levels (weekly through yearly).

Important Notes

  • All data is for the configured user only -- this is a personal performance tool
  • When a query returns no results, report zero, don't skip the metric
  • When a metric can't be computed (missing data), note it as "N/A -- requires {missing piece}"
  • For time-to-merge and cycle time, report in hours for periods <7 days, business days for longer periods
  • Present numbers honestly -- don't editorialize about whether they're "good" or "bad" unless the user asks for assessment
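The hours-versus-business-days convention in the notes above can be sketched as (the 8-hour business day is an assumed conversion, not stated in the source):

```python
def format_duration(hours: float, period_days: int) -> str:
    """Report deltas in hours for periods under 7 days, and in
    business days (assumed 8h each) for longer periods."""
    if period_days < 7:
        return f"{hours:.1f}h"
    return f"{hours / 8:.1f} business days"
```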

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

MCP: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITLAB_AI_CATALOG · gitlab-mcp

Rank

83

A Model Context Protocol (MCP) server for GitLab

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_PUBLIC_PROJECTS · gitlab-mcp

Rank

80

A Model Context Protocol (MCP) server for GitLab

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_AI_CATALOG · rmcp-openapi

Rank

74

Expose OpenAPI definition endpoints as MCP tools using the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)

Traction

No public download signal

Freshness

Updated 2d ago

MCP
GITLAB_AI_CATALOG · rmcp-actix-web

Rank

72

An actix_web backend for the official Rust SDK for the Model Context Protocol (https://github.com/modelcontextprotocol/rust-sdk)

Traction

No public download signal

Freshness

Updated 2d ago

MCP
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "MCP"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-16T23:29:15.020Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "MCP",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    }
  ],
  "flattenedTokens": "protocol:MCP|unknown|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Sergeatx",
    "href": "https://github.com/SergeATX/cursor-dev-performance-skill",
    "sourceUrl": "https://github.com/SergeATX/cursor-dev-performance-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-14T22:26:09.205Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "MCP",
    "href": "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-14T22:26:09.205Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/sergeatx-cursor-dev-performance-skill/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
