Xpersona Agent
Data Analyst
Data visualization, report generation, SQL queries, and spreadsheet automation. Transform your AI agent into a data-savvy analyst that turns raw data into actionable insights.
Owner: oyi77
Tags: latest:1.0.0
Latest version: v1.0.0 | 2026-02-06
clawhub skill install kn7cpmgq5bpf1mp69bpd7n9as180nssd:data-analyst
Overall rank
#62
Adoption
7.4K downloads
Trust
Unknown
Freshness
Last checked Feb 28, 2026
Best For
Data Analyst is best for general automation workflows where documented compatibility matters.
Not Ideal For
Workflows that require deterministic execution backed by a published capability contract; that contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, CLAWHUB, runtime-metrics, public facts pack
Overview
Verified: editorial-content
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Executive Summary
Data visualization, report generation, SQL queries, and spreadsheet automation. Transform your AI agent into a data-savvy analyst that turns raw data into actionable insights. Capability contract not published. No trust telemetry is available yet. 7.4K downloads reported by the source. Last updated April 15, 2026.
Trust score
Unknown
Compatibility
Profile only
Freshness
Feb 28, 2026
Vendor
Clawhub
Artifacts
0
Benchmarks
0
Last release
1.0.0
Install & run
Setup Snapshot
clawhub skill install kn7cpmgq5bpf1mp69bpd7n9as180nssd:data-analyst
1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Evidence & Timeline
Verified: editorial-content
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Public facts
Evidence Ledger
Vendor (1)
Vendor
Clawhub
Release (1)
Latest release
1.0.0
Adoption (1)
Adoption signal
7.4K downloads
Security (1)
Handshake status
UNKNOWN
Artifacts & Docs
Self-declared: CLAWHUB
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Captured outputs
Artifacts Archive
Extracted files
2
Examples
6
Snippets
0
Languages
Unknown
Executable Examples
markdown
### Data Sources
- Primary DB: [Connection string or description]
- Spreadsheets: [Google Sheets URL / local path]
- Data warehouse: [BigQuery/Snowflake/etc.]
bash
./scripts/data-init.sh
sql
-- Row count
SELECT COUNT(*) FROM table_name;
-- Sample data
SELECT * FROM table_name LIMIT 10;
-- Column statistics
SELECT
column_name,
COUNT(*) as count,
COUNT(DISTINCT column_name) as unique_values,
MIN(column_name) as min_val,
MAX(column_name) as max_val
FROM table_name
GROUP BY column_name;
sql
-- Daily aggregation
SELECT
DATE(created_at) as date,
COUNT(*) as daily_count,
SUM(amount) as daily_total
FROM transactions
GROUP BY DATE(created_at)
ORDER BY date DESC;
-- Month-over-month comparison
SELECT
DATE_TRUNC('month', created_at) as month,
COUNT(*) as count,
LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at)) as prev_month,
(COUNT(*) - LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at))) /
NULLIF(LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at)), 0) * 100 as growth_pct
FROM transactions
GROUP BY DATE_TRUNC('month', created_at)
ORDER BY month;
sql
-- User cohort by signup month
SELECT
DATE_TRUNC('month', u.created_at) as cohort_month,
DATE_TRUNC('month', o.created_at) as activity_month,
COUNT(DISTINCT u.id) as users
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY cohort_month, activity_month
ORDER BY cohort_month, activity_month;
sql
-- Conversion funnel
WITH funnel AS (
SELECT
COUNT(DISTINCT CASE WHEN event = 'page_view' THEN user_id END) as views,
COUNT(DISTINCT CASE WHEN event = 'signup' THEN user_id END) as signups,
COUNT(DISTINCT CASE WHEN event = 'purchase' THEN user_id END) as purchases
FROM events
WHERE date >= CURRENT_DATE - INTERVAL '30 days'
)
SELECT
views,
signups,
ROUND(signups * 100.0 / NULLIF(views, 0), 2) as signup_rate,
purchases,
ROUND(purchases * 100.0 / NULLIF(signups, 0), 2) as purchase_rate
FROM funnel;
Extracted Files
SKILL.md
---
name: data-analyst
version: 1.0.0
description: "Data visualization, report generation, SQL queries, and spreadsheet automation. Transform your AI agent into a data-savvy analyst that turns raw data into actionable insights."
author: openclaw
---
# Data Analyst Skill
**Turn your AI agent into a data analysis powerhouse.**
Query databases, analyze spreadsheets, create visualizations, and generate insights that drive decisions.
---
## What This Skill Does
✅ **SQL Queries** – Write and execute queries against databases
✅ **Spreadsheet Analysis** – Process CSV, Excel, Google Sheets data
✅ **Data Visualization** – Create charts, graphs, and dashboards
✅ **Report Generation** – Automated reports with insights
✅ **Data Cleaning** – Handle missing data, outliers, formatting
✅ **Statistical Analysis** – Descriptive stats, trends, correlations
---
## Quick Start
1. Configure your data sources in `TOOLS.md`:
```markdown
### Data Sources
- Primary DB: [Connection string or description]
- Spreadsheets: [Google Sheets URL / local path]
- Data warehouse: [BigQuery/Snowflake/etc.]
```
2. Set up your workspace:
```bash
./scripts/data-init.sh
```
3. Start analyzing!
---
## SQL Query Patterns
### Common Query Templates
**Basic Data Exploration**
```sql
-- Row count
SELECT COUNT(*) FROM table_name;
-- Sample data
SELECT * FROM table_name LIMIT 10;
-- Column statistics
SELECT
column_name,
COUNT(*) as count,
COUNT(DISTINCT column_name) as unique_values,
MIN(column_name) as min_val,
MAX(column_name) as max_val
FROM table_name
GROUP BY column_name;
```
**Time-Based Analysis**
```sql
-- Daily aggregation
SELECT
DATE(created_at) as date,
COUNT(*) as daily_count,
SUM(amount) as daily_total
FROM transactions
GROUP BY DATE(created_at)
ORDER BY date DESC;
-- Month-over-month comparison
SELECT
DATE_TRUNC('month', created_at) as month,
COUNT(*) as count,
LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at)) as prev_month,
(COUNT(*) - LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at))) /
NULLIF(LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at)), 0) * 100 as growth_pct
FROM transactions
GROUP BY DATE_TRUNC('month', created_at)
ORDER BY month;
```
**Cohort Analysis**
```sql
-- User cohort by signup month
SELECT
DATE_TRUNC('month', u.created_at) as cohort_month,
DATE_TRUNC('month', o.created_at) as activity_month,
COUNT(DISTINCT u.id) as users
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY cohort_month, activity_month
ORDER BY cohort_month, activity_month;
```
**Funnel Analysis**
```sql
-- Conversion funnel
WITH funnel AS (
SELECT
COUNT(DISTINCT CASE WHEN event = 'page_view' THEN user_id END) as views,
COUNT(DISTINCT CASE WHEN event = 'signup' THEN user_id END) as signups,
COUNT(DISTINCT CASE WHEN event = 'purchase' THEN user_id END) as purchases
FROM events
WHERE date >= CURRENT_DATE - INTERVAL '30 days'
_meta.json
{
"ownerId": "kn7cpmgq5bpf1mp69bpd7n9as180nssd",
"slug": "data-analyst",
"version": "1.0.0",
"publishedAt": 1770410537278
}
Editorial read
Docs & README
Docs source
CLAWHUB
Editorial quality
ready
Data visualization, report generation, SQL queries, and spreadsheet automation. Transform your AI agent into a data-savvy analyst that turns raw data into actionable insights.
Full README
Skill: Data Analyst
Owner: oyi77
Summary: Data visualization, report generation, SQL queries, and spreadsheet automation. Transform your AI agent into a data-savvy analyst that turns raw data into actionable insights.
Tags: latest:1.0.0
Version history:
v1.0.0 | 2026-02-06T20:42:17.278Z | auto
Initial release of the Data Analyst skill:
- Provides SQL query patterns for common analyses, including cohort and funnel analysis.
- Enables spreadsheet processing and data cleaning techniques.
- Offers Python code samples for data analysis and visualization.
- Includes guides for chart selection and terminal-friendly ASCII charts.
- Delivers templates and checklists for data audits and report generation.
Archive index:
Archive v1.0.0: 4 files, 9709 bytes
Files: scripts/data-init.sh (5761b), scripts/query.sh (3299b), SKILL.md (13992b), _meta.json (131b)
File v1.0.0: SKILL.md
name: data-analyst
version: 1.0.0
description: "Data visualization, report generation, SQL queries, and spreadsheet automation. Transform your AI agent into a data-savvy analyst that turns raw data into actionable insights."
author: openclaw
Data Analyst Skill
Turn your AI agent into a data analysis powerhouse.
Query databases, analyze spreadsheets, create visualizations, and generate insights that drive decisions.
What This Skill Does
✅ SQL Queries – Write and execute queries against databases
✅ Spreadsheet Analysis – Process CSV, Excel, Google Sheets data
✅ Data Visualization – Create charts, graphs, and dashboards
✅ Report Generation – Automated reports with insights
✅ Data Cleaning – Handle missing data, outliers, formatting
✅ Statistical Analysis – Descriptive stats, trends, correlations
Quick Start
1. Configure your data sources in TOOLS.md:
### Data Sources
- Primary DB: [Connection string or description]
- Spreadsheets: [Google Sheets URL / local path]
- Data warehouse: [BigQuery/Snowflake/etc.]
2. Set up your workspace:
./scripts/data-init.sh
3. Start analyzing!
SQL Query Patterns
Common Query Templates
Basic Data Exploration
-- Row count
SELECT COUNT(*) FROM table_name;
-- Sample data
SELECT * FROM table_name LIMIT 10;
-- Column statistics
SELECT
column_name,
COUNT(*) as count,
COUNT(DISTINCT column_name) as unique_values,
MIN(column_name) as min_val,
MAX(column_name) as max_val
FROM table_name
GROUP BY column_name;
Time-Based Analysis
-- Daily aggregation
SELECT
DATE(created_at) as date,
COUNT(*) as daily_count,
SUM(amount) as daily_total
FROM transactions
GROUP BY DATE(created_at)
ORDER BY date DESC;
-- Month-over-month comparison
SELECT
DATE_TRUNC('month', created_at) as month,
COUNT(*) as count,
LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at)) as prev_month,
(COUNT(*) - LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at))) /
NULLIF(LAG(COUNT(*)) OVER (ORDER BY DATE_TRUNC('month', created_at)), 0) * 100 as growth_pct
FROM transactions
GROUP BY DATE_TRUNC('month', created_at)
ORDER BY month;
Cohort Analysis
-- User cohort by signup month
SELECT
DATE_TRUNC('month', u.created_at) as cohort_month,
DATE_TRUNC('month', o.created_at) as activity_month,
COUNT(DISTINCT u.id) as users
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY cohort_month, activity_month
ORDER BY cohort_month, activity_month;
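The query above returns one row per (cohort_month, activity_month) pair. A minimal pandas sketch, assuming that result has been exported to a CSV with those column names (illustrative only, not one of the skill's bundled scripts), pivots it into the usual retention matrix:
```python
import pandas as pd

# Hypothetical export of the cohort query result above.
cohorts = pd.read_csv("cohorts.csv", parse_dates=["cohort_month", "activity_month"])

# Rows = signup cohort, columns = activity month, values = active users.
matrix = cohorts.pivot_table(index="cohort_month",
                             columns="activity_month",
                             values="users",
                             aggfunc="sum")

# Cohort size = users active in their own signup month; divide to get retention rates.
cohort_size = (cohorts[cohorts["cohort_month"] == cohorts["activity_month"]]
               .set_index("cohort_month")["users"])
retention = matrix.div(cohort_size, axis=0).round(2)
print(retention)
```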
Funnel Analysis
-- Conversion funnel
WITH funnel AS (
SELECT
COUNT(DISTINCT CASE WHEN event = 'page_view' THEN user_id END) as views,
COUNT(DISTINCT CASE WHEN event = 'signup' THEN user_id END) as signups,
COUNT(DISTINCT CASE WHEN event = 'purchase' THEN user_id END) as purchases
FROM events
WHERE date >= CURRENT_DATE - INTERVAL '30 days'
)
SELECT
views,
signups,
ROUND(signups * 100.0 / NULLIF(views, 0), 2) as signup_rate,
purchases,
ROUND(purchases * 100.0 / NULLIF(signups, 0), 2) as purchase_rate
FROM funnel;
Data Cleaning
Common Data Quality Issues
| Issue | Detection | Solution |
|-------|-----------|----------|
| Missing values | IS NULL or empty string | Impute, drop, or flag |
| Duplicates | GROUP BY with HAVING COUNT(*) > 1 | Deduplicate with rules |
| Outliers | Z-score > 3 or IQR method | Investigate, cap, or exclude |
| Inconsistent formats | Sample and pattern match | Standardize with transforms |
| Invalid values | Range checks, referential integrity | Validate and correct |
Data Cleaning SQL Patterns
-- Find duplicates
SELECT email, COUNT(*)
FROM users
GROUP BY email
HAVING COUNT(*) > 1;
-- Find nulls
SELECT
COUNT(*) as total,
SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END) as null_emails,
SUM(CASE WHEN name IS NULL THEN 1 ELSE 0 END) as null_names
FROM users;
-- Standardize text
UPDATE products
SET category = LOWER(TRIM(category));
-- Remove outliers (IQR method)
WITH stats AS (
SELECT
PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY value) as q1,
PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY value) as q3
FROM data
)
SELECT * FROM data, stats
WHERE value BETWEEN q1 - 1.5*(q3-q1) AND q3 + 1.5*(q3-q1);
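When the table is already loaded into a DataFrame, the same checks take a few lines of pandas. A minimal sketch covering duplicate detection, null counts, and the z-score outlier rule from the table above; the file and column names are illustrative:
```python
import pandas as pd

df = pd.read_csv("users.csv")  # hypothetical input

# Duplicates: mirrors the GROUP BY email ... HAVING COUNT(*) > 1 query.
dupes = df[df.duplicated(subset="email", keep=False)]

# Nulls: count per column.
null_counts = df.isna().sum()

# Outliers: flag rows where the z-score of a numeric column exceeds 3.
z = (df["amount"] - df["amount"].mean()) / df["amount"].std()
outliers = df[z.abs() > 3]

print(f"duplicate rows: {len(dupes)}, outlier rows: {len(outliers)}")
print(null_counts)
```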
Data Cleaning Checklist
# Data Quality Audit: [Dataset]
## Row-Level Checks
- [ ] Total row count: [X]
- [ ] Duplicate rows: [X]
- [ ] Rows with any null: [X]
## Column-Level Checks
| Column | Type | Nulls | Unique | Min | Max | Issues |
|--------|------|-------|--------|-----|-----|--------|
| [col] | [type] | [n] | [n] | [v] | [v] | [notes] |
## Data Lineage
- Source: [Where data came from]
- Last updated: [Date]
- Known issues: [List]
## Cleaning Actions Taken
1. [Action and reason]
2. [Action and reason]
Spreadsheet Analysis
CSV/Excel Processing with Python
import pandas as pd
# Load data
df = pd.read_csv('data.csv') # or pd.read_excel('data.xlsx')
# Basic exploration
print(df.shape) # (rows, columns)
print(df.info()) # Column types and nulls
print(df.describe()) # Numeric statistics
# Data cleaning
df = df.drop_duplicates()
df['date'] = pd.to_datetime(df['date'])
df['amount'] = df['amount'].fillna(0)
# Analysis
summary = df.groupby('category').agg({
'amount': ['sum', 'mean', 'count'],
'quantity': 'sum'
}).round(2)
# Export
summary.to_csv('analysis_output.csv')
Common Pandas Operations
# Filtering
filtered = df[df['status'] == 'active']
filtered = df[df['amount'] > 1000]
filtered = df[df['date'].between('2024-01-01', '2024-12-31')]
# Aggregation
by_category = df.groupby('category')['amount'].sum()
pivot = df.pivot_table(values='amount', index='month', columns='category', aggfunc='sum')
# Window functions
df['running_total'] = df['amount'].cumsum()
df['pct_change'] = df['amount'].pct_change()
df['rolling_avg'] = df['amount'].rolling(window=7).mean()
# Merging
merged = pd.merge(df1, df2, on='id', how='left')
Data Visualization
Chart Selection Guide
| Data Type | Best Chart | Use When |
|-----------|------------|----------|
| Trend over time | Line chart | Showing patterns/changes over time |
| Category comparison | Bar chart | Comparing discrete categories |
| Part of whole | Pie/Donut | Showing proportions (≤5 categories) |
| Distribution | Histogram | Understanding data spread |
| Correlation | Scatter plot | Relationship between two variables |
| Many categories | Horizontal bar | Ranking or comparing many items |
| Geographic | Map | Location-based data |
Python Visualization with Matplotlib/Seaborn
import matplotlib.pyplot as plt
import seaborn as sns
# Set style
plt.style.use('seaborn-v0_8-whitegrid')
sns.set_palette("husl")
# Line chart (trends)
plt.figure(figsize=(10, 6))
plt.plot(df['date'], df['value'], marker='o')
plt.title('Trend Over Time')
plt.xlabel('Date')
plt.ylabel('Value')
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig('trend.png', dpi=150)
# Bar chart (comparisons)
plt.figure(figsize=(10, 6))
sns.barplot(data=df, x='category', y='amount')
plt.title('Amount by Category')
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig('comparison.png', dpi=150)
# Heatmap (correlations)
plt.figure(figsize=(10, 8))
sns.heatmap(df.corr(), annot=True, cmap='coolwarm', center=0)
plt.title('Correlation Matrix')
plt.tight_layout()
plt.savefig('correlation.png', dpi=150)
ASCII Charts (Quick Terminal Visualization)
When you can't generate images, use ASCII:
Revenue by Month (in $K)
========================
Jan: ████████████████ 160
Feb: ██████████████████ 180
Mar: ████████████████████████ 240
Apr: ██████████████████████ 220
May: ██████████████████████████ 260
Jun: ████████████████████████████ 280
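Charts like this are easy to script. A minimal sketch (a hypothetical helper, not one of the skill's bundled scripts) that prints the same style of bar chart from (label, value) pairs, assuming one block character per 10 units:
```python
def ascii_bar_chart(rows, unit=10, bar_char="█"):
    """Print a terminal bar chart; rows is a list of (label, value) pairs."""
    width = max(len(label) for label, _ in rows)
    for label, value in rows:
        bar = bar_char * int(round(value / unit))
        print(f"{label.ljust(width)}: {bar} {value}")

if __name__ == "__main__":
    revenue = [("Jan", 160), ("Feb", 180), ("Mar", 240),
               ("Apr", 220), ("May", 260), ("Jun", 280)]
    print("Revenue by Month (in $K)")
    print("=" * 24)
    ascii_bar_chart(revenue)
```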
Report Generation
Standard Report Template
# [Report Name]
**Period:** [Date range]
**Generated:** [Date]
**Author:** [Agent/Human]
## Executive Summary
[2-3 sentences with key findings]
## Key Metrics
| Metric | Current | Previous | Change |
|--------|---------|----------|--------|
| [Metric] | [Value] | [Value] | [+/-X%] |
## Detailed Analysis
### [Section 1]
[Analysis with supporting data]
### [Section 2]
[Analysis with supporting data]
## Visualizations
[Insert charts]
## Insights
1. **[Insight]**: [Supporting evidence]
2. **[Insight]**: [Supporting evidence]
## Recommendations
1. [Actionable recommendation]
2. [Actionable recommendation]
## Methodology
- Data source: [Source]
- Date range: [Range]
- Filters applied: [Filters]
- Known limitations: [Limitations]
## Appendix
[Supporting data tables]
Automated Report Script
#!/bin/bash
# generate-report.sh
# Pull latest data
python scripts/extract_data.py --output data/latest.csv
# Run analysis
python scripts/analyze.py --input data/latest.csv --output reports/
# Generate report
python scripts/format_report.py --template weekly --output reports/weekly-$(date +%Y-%m-%d).md
echo "Report generated: reports/weekly-$(date +%Y-%m-%d).md"
Statistical Analysis
Descriptive Statistics
| Statistic | What It Tells You | Use Case |
|-----------|-------------------|----------|
| Mean | Average value | Central tendency |
| Median | Middle value | Robust to outliers |
| Mode | Most common | Categorical data |
| Std Dev | Spread around mean | Variability |
| Min/Max | Range | Data boundaries |
| Percentiles | Distribution shape | Benchmarking |
Quick Stats with Python
# Full descriptive statistics
stats = df['amount'].describe()
print(stats)
# Additional stats
print(f"Median: {df['amount'].median()}")
print(f"Mode: {df['amount'].mode()[0]}")
print(f"Skewness: {df['amount'].skew()}")
print(f"Kurtosis: {df['amount'].kurtosis()}")
# Correlation
correlation = df['sales'].corr(df['marketing_spend'])
print(f"Correlation: {correlation:.3f}")
Statistical Tests Quick Reference
| Test | Use Case | Python |
|------|----------|--------|
| T-test | Compare two means | scipy.stats.ttest_ind(a, b) |
| Chi-square | Categorical independence | scipy.stats.chi2_contingency(table) |
| ANOVA | Compare 3+ means | scipy.stats.f_oneway(a, b, c) |
| Pearson | Linear correlation | scipy.stats.pearsonr(x, y) |
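As a worked example of the first row above, a small sketch that runs an independent-samples t-test with scipy.stats; the group and amount values are made-up illustration data:
```python
import pandas as pd
from scipy import stats

# Hypothetical A/B split of order amounts.
df = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5,
    "amount": [102, 98, 110, 95, 105, 120, 118, 125, 130, 122],
})

a = df.loc[df["group"] == "A", "amount"]
b = df.loc[df["group"] == "B", "amount"]

# Are the two group means different?
result = stats.ttest_ind(a, b)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```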
Analysis Workflow
Standard Analysis Process
1. Define the Question
   - What are we trying to answer?
   - What decisions will this inform?
2. Understand the Data
   - What data is available?
   - What's the structure and quality?
3. Clean and Prepare
   - Handle missing values
   - Fix data types
   - Remove duplicates
4. Explore
   - Descriptive statistics
   - Initial visualizations
   - Identify patterns
5. Analyze
   - Deep dive into findings
   - Statistical tests if needed
   - Validate hypotheses
6. Communicate
   - Clear visualizations
   - Actionable insights
   - Recommendations
Analysis Request Template
# Analysis Request
## Question
[What are we trying to answer?]
## Context
[Why does this matter? What decision will it inform?]
## Data Available
- [Dataset 1]: [Description]
- [Dataset 2]: [Description]
## Expected Output
- [Deliverable 1]
- [Deliverable 2]
## Timeline
[When is this needed?]
## Notes
[Any constraints or considerations]
Scripts
data-init.sh
Initialize your data analysis workspace.
query.sh
Quick SQL query execution.
# Run query from file
./scripts/query.sh --file queries/daily-report.sql
# Run inline query
./scripts/query.sh "SELECT COUNT(*) FROM users"
# Save output to file
./scripts/query.sh --file queries/export.sql --output data/export.csv
analyze.py
Python analysis toolkit.
# Basic analysis
python scripts/analyze.py --input data/sales.csv
# With specific analysis type
python scripts/analyze.py --input data/sales.csv --type cohort
# Generate report
python scripts/analyze.py --input data/sales.csv --report weekly
Integration Tips
With Other Skills
| Skill | Integration |
|-------|-------------|
| Marketing | Analyze campaign performance, content metrics |
| Sales | Pipeline analytics, conversion analysis |
| Business Dev | Market research data, competitor analysis |
Common Data Sources
- Databases: PostgreSQL, MySQL, SQLite
- Warehouses: BigQuery, Snowflake, Redshift
- Spreadsheets: Google Sheets, Excel, CSV
- APIs: REST endpoints, GraphQL
- Files: JSON, Parquet, XML
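A minimal sketch of pulling two of the source types above into pandas, using a local SQLite database and a CSV file as stand-ins (paths, table, and column names are illustrative):
```python
import sqlite3
import pandas as pd

# Spreadsheet-style source: CSV straight into a DataFrame.
sales = pd.read_csv("data/sales.csv", parse_dates=["date"])

# Database source: query SQLite; swap in a SQLAlchemy engine for PostgreSQL/MySQL.
with sqlite3.connect("data/warehouse.db") as conn:
    users = pd.read_sql_query("SELECT id, email, created_at FROM users", conn)

print(sales.head())
print(users.head())
```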
Best Practices
- Start with the question – Know what you're trying to answer
- Validate your data – Garbage in = garbage out
- Document everything – Queries, assumptions, decisions
- Visualize appropriately – Right chart for right data
- Show your work – Methodology matters
- Lead with insights – Not just data dumps
- Make it actionable – "So what?" → "Now what?"
- Version your queries – Track changes over time
Common Mistakes
❌ Confirmation bias – Looking for data to support a conclusion
❌ Correlation ≠ causation – Be careful with claims
❌ Cherry-picking – Using only favorable data
❌ Ignoring outliers – Investigate before removing
❌ Over-complicating – Simple analysis often wins
❌ No context – Numbers without comparison are meaningless
License
License: MIT – use freely, modify, distribute.
"The goal is to turn data into information, and information into insight." – Carly Fiorina
File v1.0.0: _meta.json
{ "ownerId": "kn7cpmgq5bpf1mp69bpd7n9as180nssd", "slug": "data-analyst", "version": "1.0.0", "publishedAt": 1770410537278 }
API & Reliability
Missing: CLAWHUB
Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract & API
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/trust"
Operational fit
Reliability & Benchmarks
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Machine Appendix
Missing: CLAWHUB
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": []
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "CLAWHUB",
"generatedAt": "2026-04-17T03:05:45.828Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [],
"flattenedTokens": ""
}
Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Clawhub",
"href": "https://clawhub.ai/oyi77/data-analyst",
"sourceUrl": "https://clawhub.ai/oyi77/data-analyst",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "7.4K downloads",
"href": "https://clawhub.ai/oyi77/data-analyst",
"sourceUrl": "https://clawhub.ai/oyi77/data-analyst",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "latest_release",
"category": "release",
"label": "Latest release",
"value": "1.0.0",
"href": "https://clawhub.ai/oyi77/data-analyst",
"sourceUrl": "https://clawhub.ai/oyi77/data-analyst",
"sourceType": "release",
"confidence": "medium",
"observedAt": "2026-02-06T20:42:17.278Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-oyi77-data-analyst/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "release",
"title": "Release 1.0.0",
"description": "Initial release of the Data Analyst skill: - Provides SQL query patterns for common analyses, including cohort and funnel analysis. - Enables spreadsheet processing and data cleaning techniques. - Offers Python code samples for data analysis and visualization. - Includes guides for chart selection and terminal-friendly ASCII charts. - Delivers templates and checklists for data audits and report generation.",
"href": "https://clawhub.ai/oyi77/data-analyst",
"sourceUrl": "https://clawhub.ai/oyi77/data-analyst",
"sourceType": "release",
"confidence": "medium",
"observedAt": "2026-02-06T20:42:17.278Z",
"isPublic": true
}
]