Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Xpersona Agent
Skill: Agent Analytics
Owner: dannyshmueli
Summary: Simple website analytics your AI agent controls end-to-end. Track page views, events, funnels, retention, and A/B experiments across all your projects.
Tags: analytics:1.0.1, latest:3.7.0, tracking:1.0.1, web:1.0.1
Latest version: v3.7.0 (2026-02-24): Added security & trust section addressing npx supply chain and input safety.
clawhub skill install kn7caxjvqk9fengp67p290smnn800sv9:agent-analytics
Overall rank
#62
Adoption
1.1K downloads
Trust
Unknown
Freshness
Last checked Mar 1, 2026
Best For
Agent Analytics is best for general automation workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require deterministic execution, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, CLAWHUB, runtime-metrics, public facts pack
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Overview
Simple website analytics your AI agent controls end-to-end. Track page views, events, funnels, retention, and A/B experiments across all your projects. Capability contract not published. No trust telemetry is available yet. 1.1K downloads reported by the source. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Mar 1, 2026
Vendor
Clawhub
Artifacts
0
Benchmarks
0
Last release
3.7.0
Install & run
clawhub skill install kn7caxjvqk9fengp67p290smnn800sv9:agent-analytics
Setup complexity is classified as HIGH. You must provision dedicated cloud infrastructure or an isolated VM. Do not run this directly on your local workstation.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Public facts
Vendor
Clawhub
Protocol compatibility
OpenClaw
Latest release
3.7.0
Adoption signal
1.1K downloads
Handshake status
UNKNOWN
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Captured outputs
Extracted files
2
Examples
6
Snippets
0
Languages
Unknown
bash
# 1. Login (one time — uses your API key)
npx @agent-analytics/cli login --token aak_YOUR_API_KEY

# 2. Create the project (returns a project write token)
npx @agent-analytics/cli create my-site --domain https://mysite.com

# 3. Add the snippet (Step 1 below) using the returned token

# 4. Deploy, click around, verify:
npx @agent-analytics/cli events my-site
bash
npx @agent-analytics/cli properties-received PROJECT_NAME
html
<a href="..." onclick="window.aa?.track('EVENT_NAME', {id: 'ELEMENT_ID'})">
bash
npx @agent-analytics/cli experiments create my-site \
  --name signup_cta --variants control,new_cta --goal signup
html
<h1 data-aa-experiment="signup_cta" data-aa-variant-new_cta="Start Free Trial">Sign Up</h1>
bash
npx @agent-analytics/cli experiments get exp_abc123
SKILL.md
---
name: agent-analytics
description: "Simple website analytics your AI agent controls end-to-end. Track page views, events, funnels, retention, and A/B experiments across all your projects. Use when: adding website tracking, checking site traffic, setting up conversion funnels, running A/B experiments, or replacing Mixpanel / Plausible / PostHog with something lightweight and agent-operated. No dashboard needed."
version: 3.7.0
author: dannyshmueli
repository: https://github.com/Agent-Analytics/agent-analytics-cli
homepage: https://agentanalytics.sh
tags:
- analytics
- tracking
- web
- events
- experiments
- live
- website-tracking
- page-views
- funnels
- retention
- ab-testing
- simple-analytics
- privacy
- agent-first
- plausible-alternative
- mixpanel-alternative
- growth
metadata: {"openclaw":{"requires":{"env":["AGENT_ANALYTICS_API_KEY"],"anyBins":["npx"]},"primaryEnv":"AGENT_ANALYTICS_API_KEY"}}
---
# Agent Analytics — Website analytics your AI agent fully operates
Simple, privacy-first website analytics and growth toolkit that your AI agent controls end-to-end. Track page views, custom events, conversion funnels, user retention, and A/B experiments across all your projects — then talk to your analytics in natural language. No dashboards. Your agent creates projects, adds tracking code, queries traffic data, builds funnels, runs experiments, and tells you what to optimize next. A lightweight Plausible/Mixpanel/PostHog alternative built for the AI agent era.
## Security & trust
- **Open source**: Full source at [github.com/Agent-Analytics/agent-analytics-cli](https://github.com/Agent-Analytics/agent-analytics-cli) — inspect every command before running
- **Read-only by default**: The CLI only reads analytics data. Write operations (creating projects, experiments) require explicit user-provided API keys
- **No arbitrary code execution**: All CLI commands use structured flags (`--days`, `--property`, `--steps`). No eval, no shell interpolation, no dynamic code generation
- **Scoped permissions**: The API key controls access. The CLI never requests filesystem, network, or system-level permissions beyond HTTP calls to `api.agentanalytics.sh`
- **Published on npm**: [@agent-analytics/cli](https://www.npmjs.com/package/@agent-analytics/cli) — versioned, auditable, standard npm supply chain
## Philosophy
You are NOT Mixpanel. Don't track everything. Track only what answers: **"Is this project alive and growing?"**
For a typical site, that's 3-5 custom events max on top of automatic page views.
## First-time setup
**Get an API key:** Sign up at [agentanalytics.sh](https://agentanalytics.sh) and generate a key from the dashboard. Alternatively, self-host the open-source version from [GitHub](https://github.com/Agent-Analytics/agent-analytics).
If the project doesn't have tracking yet:
```bash
# 1. Login (one time — uses your API key)
npx @agent-analytics/cli login --token aak_YOUR_API_KEY

# 2. Create the project (returns a project write token)
npx @agent-analytics/cli create my-site --domain https://mysite.com

# 3. Add the snippet (Step 1 below) using the returned token

# 4. Deploy, click around, verify:
npx @agent-analytics/cli events my-site
```

_meta.json:

```json
{
  "ownerId": "kn7caxjvqk9fengp67p290smnn800sv9",
  "slug": "agent-analytics",
  "version": "3.7.0",
  "publishedAt": 1771918546785
}
```

Editorial read
Docs source
CLAWHUB
Editorial quality
ready
Skill: Agent Analytics
Owner: dannyshmueli
Summary: Simple website analytics your AI agent controls end-to-end. Track page views, events, funnels, retention, and A/B experiments across all your projects. Use w...
Tags: analytics:1.0.1, latest:3.7.0, tracking:1.0.1, web:1.0.1
Version history:
v3.7.0 | 2026-02-24T07:35:46.785Z | user
Added security & trust section addressing npx supply chain and input safety. Improved description and tags for discoverability.
v3.6.0 | 2026-02-24T07:17:19.881Z | user
Optimized description for vector search discoverability. Added REST API as primary query path. Added tags for funnels, retention, ab-testing.
v3.5.0 | 2026-02-24T07:15:35.196Z | user
Improved discoverability: added REST API as primary path (no npx dependency), expanded description with use-case triggers and competitor keywords, added tags for funnels/retention/ab-testing/privacy
v3.4.0 | 2026-02-23T19:23:52.030Z | user
Add ad-hoc query section with contains filter, country group_by, session_count. Expand which-endpoint table with query examples. Add --filter and --group-by to key flags.
v0.3.1 | 2026-02-22T19:24:49.803Z | user
Add URL param variant forcing docs
v3.3.0 | 2026-02-21T18:28:50.267Z | user
Remove external URLs flagged by scanner, trim verbose HTML/JS examples, condense experiment variant docs
v3.2.0 | 2026-02-21T17:54:07.260Z | user
Add funnel breakdown (--breakdown flag) and retention analysis
v3.1.1 | 2026-02-21T16:26:34.351Z | user
Clarify max 8 steps for funnel CLI flag
v3.1.0 | 2026-02-21T15:53:04.518Z | user
Add funnel analysis: CLI command, API guide, endpoint routing table, and interpretation guide for drop-off analysis
v3.0.0 | 2026-02-21T13:29:14.566Z | user
Add live real-time TUI dashboard, full CLI reference with all commands (live, query, sessions, properties, whoami, revoke-key), drop curl examples in favor of CLI, refresh branding
v2.6.0 | 2026-02-20T15:01:20.906Z | user
Add growth playbook: teaches agents HOW to grow
v2.5.2 | 2026-02-20T14:13:39.076Z | user
Re-publish to clear pending virus scan
v2.5.1 | 2026-02-19T07:20:14.822Z | user
Improve listing description: lead with autonomous optimization loop and open-source headless positioning
v2.5.0 | 2026-02-19T07:04:37.972Z | user
Add declarative experiments (data-aa-experiment HTML attributes) and anti-flicker snippet as recommended A/B testing approach
v2.4.0 | 2026-02-18T22:16:17.698Z | user
Add A/B experiment support
v2.3.0 | 2026-02-15T22:16:10.673Z | user
Improved analysis guidance: decision tree for which endpoint to call, response-to-narrative examples for all 5 endpoints, multi-project overview pattern, engagement interpretation rules
v2.2.0 | 2026-02-15T22:11:21.366Z | user
Add pre-computed analytics endpoints: insights, breakdown, pages, sessions-dist, heatmap
v2.1.0 | 2026-02-13T20:07:06.979Z | user
Update all CLI references to @agent-analytics/cli scoped package
v2.0.0 | 2026-02-13T19:37:18.273Z | user
Standardize on create as primary command (init kept as alias)
v1.3.0 | 2026-02-12T21:27:24.461Z | user
Teach agents to analyze data: period-over-period comparison, derived metrics, anomaly detection, target output format, companion skill instructions
v0.1.5 | 2026-02-12T21:13:09.519Z | user
Rename AGENT_ANALYTICS_KEY env var to AGENT_ANALYTICS_API_KEY
v1.2.0 | 2026-02-12T20:46:38.909Z | user
Fix metadata.openclaw namespace so scanner detects declared env vars/binaries, add primaryEnv, add write token security note
v1.1.2 | 2026-02-12T20:42:10.038Z | user
Fix display name
v1.1.1 | 2026-02-12T20:40:10.032Z | user
Add where to get API key, recommend chart-image and table-image-generator companion skills
v1.1.0 | 2026-02-12T19:56:36.665Z | user
Ship only SKILL.md, declare required env vars and binaries, fix privacy wording
v1.0.1 | 2026-02-10T08:39:33.397Z | user
Improved description and messaging — more marketable, leads with agent-native value prop
v1.0.0 | 2026-02-10T08:36:01.997Z | user
Initial release — ClawHub-ready SKILL.md
Archive index:
Archive v3.7.0: 2 files, 11760 bytes
Files: _meta.json (134b), SKILL.md (29337b)
File v3.7.0:SKILL.md
name: agent-analytics
description: "Simple website analytics your AI agent controls end-to-end. Track page views, events, funnels, retention, and A/B experiments across all your projects. Use when: adding website tracking, checking site traffic, setting up conversion funnels, running A/B experiments, or replacing Mixpanel / Plausible / PostHog with something lightweight and agent-operated. No dashboard needed."
version: 3.7.0
author: dannyshmueli
repository: https://github.com/Agent-Analytics/agent-analytics-cli
homepage: https://agentanalytics.sh
tags:
Simple, privacy-first website analytics and growth toolkit that your AI agent controls end-to-end. Track page views, custom events, conversion funnels, user retention, and A/B experiments across all your projects — then talk to your analytics in natural language. No dashboards. Your agent creates projects, adds tracking code, queries traffic data, builds funnels, runs experiments, and tells you what to optimize next. A lightweight Plausible/Mixpanel/PostHog alternative built for the AI agent era.
All CLI commands use structured flags (--days, --property, --steps): no eval, no shell interpolation, no dynamic code generation. The CLI's only network access is HTTP calls to api.agentanalytics.sh.

You are NOT Mixpanel. Don't track everything. Track only what answers: "Is this project alive and growing?"
For a typical site, that's 3-5 custom events max on top of automatic page views.
Get an API key: Sign up at agentanalytics.sh and generate a key from the dashboard. Alternatively, self-host the open-source version from GitHub.
If the project doesn't have tracking yet:
# 1. Login (one time — uses your API key)
npx @agent-analytics/cli login --token aak_YOUR_API_KEY
# 2. Create the project (returns a project write token)
npx @agent-analytics/cli create my-site --domain https://mysite.com
# 3. Add the snippet (Step 1 below) using the returned token
# 4. Deploy, click around, verify:
npx @agent-analytics/cli events my-site
The create command returns a project write token — use it as data-token in the snippet below. This is separate from your API key (which is for reading/querying).
The create command returns a tracking snippet with your project token — add it before </body>. It auto-tracks page_view events with path, referrer, browser, OS, device, screen size, and UTM params. You do NOT need to add custom page_view events.
If tracking is already set up, check what events and property keys are already in use so you match the naming:
npx @agent-analytics/cli properties-received PROJECT_NAME
This shows which property keys each event type uses (e.g. cta_click → id, signup → method). Match existing naming before adding new events.
Use onclick handlers on the elements that matter:
<a href="..." onclick="window.aa?.track('EVENT_NAME', {id: 'ELEMENT_ID'})">
The ?. operator ensures no error if the tracker hasn't loaded yet.
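As a sanity check, the guard's behavior can be shown in isolation. This is a standalone sketch: `win` is a stand-in for the browser's `window` object, and the `track` stub is hypothetical, not the real tracker.

```javascript
// Sketch of the optional-chaining guard used by the tracking snippet.
// `win` stands in for the browser's window; the `track` stub is hypothetical.
const win = {}; // tracker script not loaded yet, so win.aa is undefined

// win.aa?.track(...) short-circuits to undefined instead of throwing
const before = win.aa?.track('cta_click', { id: 'hero_get_started' });

// once the tracker has loaded, the same call goes through
win.aa = { track: (name, props) => `${name}:${props.id}` };
const after = win.aa?.track('cta_click', { id: 'hero_get_started' });

console.log(before); // undefined
console.log(after);  // cta_click:hero_get_started
```

Without the `?.`, the first call would throw `TypeError: Cannot read properties of undefined` and break the page.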
Pick the ones that apply. Most sites need 2-4:
| Event | When to fire | Properties |
|-------|-------------|------------|
| cta_click | User clicks a call-to-action button | id (which button) |
| signup | User creates an account | method (github/google/email) |
| login | User returns and logs in | method |
| feature_used | User engages with a core feature | feature (which one) |
| checkout | User starts a payment flow | plan (free/pro/etc) |
| error | Something went wrong visibly | message, page |
cta_click: only buttons that indicate conversion intent.

Naming conventions:
- snake_case: hero_get_started, not heroGetStarted
- The id property identifies WHICH element: short, descriptive
- Pattern section_action: hero_signup, pricing_pro, nav_dashboard

Experiments let you test which variant of a page element converts better. The full lifecycle is API-driven — no dashboard UI needed.
npx @agent-analytics/cli experiments create my-site \
--name signup_cta --variants control,new_cta --goal signup
Declarative (recommended): Use data-aa-experiment and data-aa-variant-{key} HTML attributes. Original content is the control. The tracker swaps text for assigned variants automatically.
<h1 data-aa-experiment="signup_cta" data-aa-variant-new_cta="Start Free Trial">Sign Up</h1>
Programmatic (complex cases): Use window.aa?.experiment(name, variants) — deterministic, same user always gets same variant.
Exposure events ($experiment_exposure) are tracked automatically once per session. Track the goal event normally: window.aa?.track('signup', {method: 'github'}).
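For intuition, "deterministic, same user always gets same variant" is commonly implemented by hashing a stable user id into a variant index. The sketch below illustrates that idea only; it is not necessarily the tracker's actual algorithm.

```javascript
// Illustrative sketch of deterministic variant assignment via hashing.
// Not the tracker's actual algorithm, just the general technique.
function assignVariant(userId, experiment, variants) {
  let h = 0;
  const key = `${experiment}:${userId}`;
  for (const ch of key) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return variants[h % variants.length];
}

const v1 = assignVariant('user_42', 'signup_cta', ['control', 'new_cta']);
const v2 = assignVariant('user_42', 'signup_cta', ['control', 'new_cta']);
console.log(v1 === v2); // true: the same user always lands in the same variant
```

Because assignment depends only on the (experiment, user) pair, no per-user state needs to be stored to keep variants stable across sessions.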
npx @agent-analytics/cli experiments get exp_abc123
Returns Bayesian probability_best, lift, and a recommendation. The system needs ~100 exposures per variant before results are significant.
# Pause (stops assigning new users)
npx @agent-analytics/cli experiments pause exp_abc123
# Resume
npx @agent-analytics/cli experiments resume exp_abc123
# Complete with a winner
npx @agent-analytics/cli experiments complete exp_abc123 --winner new_cta
# Delete
npx @agent-analytics/cli experiments delete exp_abc123
Experiment guidelines:
- Name experiments descriptively: signup_cta, pricing_layout, hero_copy
- Pick a goal_event that maps to a business outcome (signup, purchase, not page_view)
- Wait for sufficient_data: true before picking a winner
- Finish with experiments complete <id> --winner new_cta

After adding tracking, verify it works:
# Option A: Browser console on your site:
window.aa.track('test_event', {source: 'manual_test'})
# Option B: Click around, then check:
npx @agent-analytics/cli events PROJECT_NAME
# Events appear within seconds.
All commands use npx @agent-analytics/cli. Your agent uses the CLI directly — no curl needed.
# Setup
npx @agent-analytics/cli login --token aak_YOUR_KEY # Save API key (one time)
npx @agent-analytics/cli projects # List all projects
npx @agent-analytics/cli create my-site --domain https://mysite.com # Create project
# Real-time
npx @agent-analytics/cli live # Live terminal dashboard across ALL projects
npx @agent-analytics/cli live my-site # Live view for one project
# Analytics
npx @agent-analytics/cli stats my-site --days 7 # Overview: events, users, daily trends
npx @agent-analytics/cli insights my-site --period 7d # Period-over-period comparison
npx @agent-analytics/cli breakdown my-site --property path --event page_view --limit 10 # Top pages/referrers/UTM
npx @agent-analytics/cli pages my-site --type entry # Landing page performance & bounce rates
npx @agent-analytics/cli sessions-dist my-site # Session engagement histogram
npx @agent-analytics/cli heatmap my-site # Peak hours & busiest days
npx @agent-analytics/cli events my-site --days 30 # Raw event log
npx @agent-analytics/cli sessions my-site # Individual session records
npx @agent-analytics/cli properties my-site # Discover event names & property keys
npx @agent-analytics/cli properties-received my-site # Property keys per event type (sampled)
npx @agent-analytics/cli query my-site --metrics event_count,unique_users --group-by date # Flexible query
npx @agent-analytics/cli funnel my-site --steps "page_view,signup,purchase" # Funnel drop-off analysis
npx @agent-analytics/cli funnel my-site --steps "page_view,signup" --breakdown country # Funnel segmented by country
npx @agent-analytics/cli retention my-site --period week --cohorts 8 # Cohort retention analysis
# A/B experiments (pro)
npx @agent-analytics/cli experiments list my-site
npx @agent-analytics/cli experiments create my-site --name signup_cta --variants control,new_cta --goal signup
npx @agent-analytics/cli experiments get exp_abc123
npx @agent-analytics/cli experiments complete exp_abc123 --winner new_cta
# Account
npx @agent-analytics/cli whoami # Show account & tier
npx @agent-analytics/cli revoke-key # Rotate API key
Key flags:
- --days <N> — lookback window (default: 7; for stats, events)
- --limit <N> — max rows returned (default: 100)
- --since <date> — ISO date cutoff (properties-received only)
- --period <P> — comparison period: 1d, 7d, 14d, 30d, 90d (insights) or cohort grouping: day, week, month (retention)
- --property <key> — property key to group by (breakdown, required)
- --event <name> — filter by event name (breakdown) or first-seen event filter (retention)
- --returning-event <name> — what counts as "returned" (retention, defaults to same as --event)
- --cohorts <N> — number of cohort periods, 1-30 (retention, default: 8)
- --type <T> — page type: entry, exit, both (pages only, default: entry)
- --steps <csv> — comma-separated event names, 2-8 steps max (funnel, required)
- --window <N> — conversion window in hours (funnel, default: 168) or live time window in seconds (live, default: 60)
- --count-by <field> — user_id or session_id (funnel only)
- --breakdown <key> — segment funnel by a property (e.g. country, variant), extracted from step 1 events (funnel only)
- --breakdown-limit <N> — max breakdown groups, 1-50 (funnel, default: 10)
- --interval <N> — live refresh in seconds (default: 5)

The live command: npx @agent-analytics/cli live opens a real-time TUI dashboard that refreshes every 5 seconds. It shows active visitors, sessions, and events/min across all your projects, plus top pages and recent events. Note: this is an interactive terminal UI that clears the screen on each refresh, so it works best when run directly in a terminal rather than captured as output.
Match the user's question to the right call(s):
| User asks | Call | Why |
|-----------|------|-----|
| "How's my site doing?" | insights + breakdown + pages (parallel) | Full weekly picture in one turn |
| "Is anyone visiting right now?" | live | Real-time visitors, sessions, events across all projects |
| "Is anyone visiting?" | insights --period 7d | Quick alive-or-dead check |
| "What are my top pages?" | breakdown --property path --event page_view | Ranked page list with unique users |
| "Where's my traffic coming from?" | breakdown --property referrer --event page_view | Referrer sources |
| "Which landing page is best?" | pages --type entry | Bounce rate + session depth per page |
| "Are people actually engaging?" | sessions-dist | Bounce vs engaged split |
| "When should I deploy/post?" | heatmap | Find low-traffic windows or peak hours |
| "Give me a summary of all projects" | live or loop: projects then insights per project | Multi-project overview |
| "Which CTA converts better?" | experiments create + implement + experiments get <id> | Full A/B test lifecycle |
| "Where do users drop off?" | funnel --steps "page_view,signup,purchase" | Step-by-step conversion with drop-off rates |
| "Which variant converts better through the funnel?" | funnel --steps "page_view,signup" --breakdown variant | Funnel segmented by experiment variant |
| "Are users coming back?" | retention --period week --cohorts 8 | Cohort retention: % returning per period |
For any "how is X doing" question, always call insights first — it's the single most useful endpoint. For real-time "who's on the site right now", use live.
Don't return raw numbers. Interpret them. Here's how to turn each endpoint's response into something useful.
/insights → The headline

API returns metrics with current, previous, change, change_pct, and a trend field.
How to interpret:
- change_pct > 10 → "Growing" — call it out positively
- change_pct between -10 and 10 → "Stable" — mention it's steady
- change_pct < -10 → "Declining" — flag it, suggest investigating
- bounce_rate current vs previous → say "improved" (went down) or "worsened" (went up)
- avg_duration → convert ms to seconds: Math.round(value / 1000)

Example output:
This week vs last: 173 events (+22%), 98 users (+18%).
Bounce rate: 87% (up from 82% — getting worse).
Average session: 24s. Trend: growing.
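The thresholds above are mechanical enough to sketch directly. Helper names here are ours, not part of the CLI or API; the field semantics follow the /insights response shape described above.

```javascript
// Sketch of the /insights interpretation rules above.
function describeTrend(changePct) {
  if (changePct > 10) return 'Growing';   // call it out positively
  if (changePct < -10) return 'Declining'; // flag it, suggest investigating
  return 'Stable';                         // between -10 and 10
}

function formatDuration(ms) {
  return `${Math.round(ms / 1000)}s`; // convert ms to seconds, as recommended
}

console.log(describeTrend(22));     // Growing
console.log(describeTrend(-3));     // Stable
console.log(describeTrend(-15));    // Declining
console.log(formatDuration(24480)); // 24s
```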
/breakdown → The ranking

API returns values: [{ value, count, unique_users }] sorted by count DESC.
How to interpret:
- Check unique_users too — 100 events from 2 users is very different from 100 events from 80 users
- Use total_with_property / total_events to note coverage: "155 of 155 page views have a path"

Example output:
Top pages: / (98 views, 75 users), /pricing (33 views, 25 users), /docs (19 views, 4 users).
The /docs page has high repeat visits (19 views, 4 users) — power users.
/pages → Landing page quality

API returns entry_pages: [{ page, sessions, bounces, bounce_rate, avg_duration, avg_events }].
How to interpret:
- bounce_rate > 0.7 → "high bounce, needs work above the fold"
- bounce_rate < 0.3 → "strong landing page"
- avg_duration → convert ms to seconds; < 10s is concerning, > 60s is great
- avg_events → pages/session; 1.0 means everyone bounces, 3+ means good engagement

Example output:
Best landing page: /pricing — 14% bounce, 62s avg session, 4.1 pages/visit.
Worst: /blog/launch — 52% bounce, 18s avg. Consider a stronger CTA above the fold.
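The landing-page rules above can likewise be sketched. The function name and input shape are assumed from the entry_pages fields described; this is an illustration, not the CLI's code.

```javascript
// Sketch of the landing-page quality thresholds above.
// Input mirrors an entry_pages item: { bounce_rate, avg_duration (ms) }.
function classifyLanding(page) {
  const notes = [];
  if (page.bounce_rate > 0.7) notes.push('high bounce, needs work above the fold');
  else if (page.bounce_rate < 0.3) notes.push('strong landing page');
  const secs = page.avg_duration / 1000; // ms to seconds
  if (secs < 10) notes.push('very short sessions');
  else if (secs > 60) notes.push('great session length');
  return notes;
}

console.log(classifyLanding({ bounce_rate: 0.14, avg_duration: 62000 }));
// [ 'strong landing page', 'great session length' ]
```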
/sessions/distribution → Engagement shape

API returns distribution: [{ bucket, sessions, pct }], engaged_pct, median_bucket.
How to interpret:
- engaged_pct is the key number — sessions ≥30s as a percentage of total
- engaged_pct < 10% → "Most visitors leave immediately — focus on first impressions"
- engaged_pct 10-30% → "Moderate engagement, room to improve"
- engaged_pct > 30% → "Good engagement"

Example output:
88% of sessions bounce instantly (0s). Only 6% stay longer than 30s.
The few who do engage stay 3-10 minutes — the content works, but first impressions don't.
/heatmap → Timing

API returns heatmap: [{ day, day_name, hour, events, users }], peak, busiest_day, busiest_hour.
How to interpret:
- peak is the single busiest slot — mention day + hour + timezone caveat (times are UTC)
- busiest_day → "Schedule blog posts/launches on this day"
- busiest_hour → "This is when your audience is online"

Example output:
Peak: Friday at 11 PM UTC (35 events, 33 users). Busiest day overall: Sunday.
Traffic is heaviest on weekends — your audience browses on personal time.
Deploy on weekday mornings for minimal disruption.
/funnel → Where users drop off

CLI: funnel my-site --steps "page_view,signup,purchase". API: POST /funnel with JSON body.
API returns steps: [{ step, event, users, conversion_rate, drop_off_rate, avg_time_to_next_ms }] and overall_conversion_rate.
How to interpret:
- conversion_rate is step-to-step (step 2 users / step 1 users)
- drop_off_rate is 1 - conversion_rate at each step
- The step with the highest drop_off_rate is the bottleneck — focus optimization there
- avg_time_to_next_ms shows how long users take between steps (convert to hours/minutes)
- overall_conversion_rate is end-to-end (last step users / first step users)

Options:
- --steps "event1,event2,event3" — 2-8 step events (required)
- --window <hours> — max time from step 1 to last step (default: 168 = 7 days)
- --since <days> — lookback period, e.g. 30d (default: 30d)
- --count-by <field> — user_id (default) or session_id
- --breakdown <property> — segment funnel by a property (e.g. country, variant). Property is extracted from step 1 events. Returns overall + per-group results.
- --breakdown-limit <N> — max groups returned (default: 10, max: 50). Groups ordered by step 1 users descending.

Breakdown use case — A/B experiments: funnel my-site --steps "page_view,signup" --breakdown variant shows which experiment variant converts better through the funnel.
API-only: per-step filters — each step can have a filters array with { property, op, value } (ops: eq, neq, contains). Example: filter step 1 to path=/pricing to see conversions from the pricing page specifically.
Example output:
page_view → signup → purchase
500 users → 80 (16%) → 12 (15%) — 2.4% overall
Biggest drop-off: page_view → signup (84%). Focus on signup CTA visibility.
Avg time to signup: 4.2 hours. Avg time to purchase: 2.1 days.
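The funnel arithmetic above (step-to-step conversion, drop-off, overall conversion) can be reproduced from raw per-step user counts. The function name is ours; the rates match the definitions given.

```javascript
// Sketch of the funnel math: step-to-step conversion, drop-off, overall.
function funnelStats(steps) {
  // steps: [{ event, users }] in funnel order
  const out = steps.map((s, i) => {
    const prev = i === 0 ? s.users : steps[i - 1].users;
    const conversion_rate = i === 0 ? 1 : s.users / prev; // step N / step N-1
    return { ...s, conversion_rate, drop_off_rate: 1 - conversion_rate };
  });
  // overall is end-to-end: last step users / first step users
  const overall = steps[steps.length - 1].users / steps[0].users;
  return { steps: out, overall_conversion_rate: overall };
}

const r = funnelStats([
  { event: 'page_view', users: 500 },
  { event: 'signup', users: 80 },
  { event: 'purchase', users: 12 },
]);
console.log(r.steps[1].conversion_rate);  // 0.16  (80 / 500)
console.log(r.overall_conversion_rate);   // 0.024 (12 / 500)
```

This matches the example: 500 → 80 (16%) → 12 (15%), 2.4% overall, with the biggest drop-off at page_view → signup.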
/retention → Are users coming back?

CLI: retention my-site --period week --cohorts 8. API: GET /retention?project=X&period=week&cohorts=8.
By default uses session-based retention — a user is "retained" if they have any return visit (session) in a subsequent period. Pass --event to switch to event-based retention.
API returns cohorts: [{ date, users, retained: [...], rates: [...] }], average_rates: [...], and users_analyzed.
How to interpret:
- rates[0] is always 1.0 (100% — the cohort itself)
- rates[1] = % who came back the next period — this is the critical number
- average_rates is weighted by cohort size — larger cohorts count more

Options:
- --period <P> — day, week, month (default: week)
- --cohorts <N> — number of cohort periods, 1-30 (default: 8)
- --event <name> — first-seen event filter (e.g. signup). Switches to event-based retention
- --returning-event <name> — what counts as "returned" (defaults to same as --event)

Event-based retention: Set --event signup --returning-event purchase to answer "of users who signed up, what % made a purchase in subsequent weeks?"
Example output:
Cohort W0 (2026-01-27): 142 users → W1: 45% → W2: 39% → W3: 32%
Cohort W0 (2026-02-03): 128 users → W1: 42% → W2: 36%
Weighted avg: W1 = 44%, W2 = 37%, W3 = 32%
Week-1 retention of 44% is strong — nearly half of new users return.
Slight decline in recent cohorts — investigate onboarding changes.
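The "average_rates is weighted by cohort size" rule can be sketched as follows. The function name is ours; the input mirrors the cohorts shape described above.

```javascript
// Sketch of size-weighted average retention rates across cohorts.
function weightedAverageRates(cohorts) {
  // cohorts: [{ users, rates: [...] }]; newer cohorts have shorter rates arrays
  const maxLen = Math.max(...cohorts.map((c) => c.rates.length));
  const avgs = [];
  for (let i = 0; i < maxLen; i++) {
    let num = 0, den = 0;
    for (const c of cohorts) {
      if (i < c.rates.length) { num += c.rates[i] * c.users; den += c.users; }
    }
    avgs.push(num / den); // larger cohorts count more
  }
  return avgs;
}

const avg = weightedAverageRates([
  { users: 142, rates: [1, 0.45, 0.39] },
  { users: 128, rates: [1, 0.42] },
]);
// Weighted W1 = (0.45 * 142 + 0.42 * 128) / 270 ≈ 0.436
console.log(avg.map((x) => x.toFixed(3)).join(' '));
```

Note that each period only averages over cohorts old enough to have data for it, which is why recent periods can swing more.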
Call insights, breakdown --property path --event page_view, and pages --type entry in parallel, then synthesize into one response:
Weekly Report — my-site (Feb 8–15 vs Feb 1–8)
Events: 1,200 (+22% ↑) Users: 450 (+18% ↑) Bounce: 42% (improved from 48%)
Top pages: /home (523), /pricing (187), /docs (94)
Best landing: /pricing — 14% bounce, 62s avg. Worst: /blog — 52% bounce.
Trend: Growing.
For a quick real-time check, use live — it shows all projects in one view with active visitors, sessions, and events/min.
For a historical summary, call projects to list all projects, then call insights --period 7d for each. Present one line per project:
my-site 142 views (+23% ↑) 12 signups healthy
side-project 38 views (-8% ↓) 0 signups quiet
api-docs 0 views (—) — ⚠ inactive since Feb 1
Use arrows: ↑ up, ↓ down, — flat. Flag anything that needs attention.
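The one-line-per-project format above can be sketched as a small formatter. Field names (views, change_pct, signups) are illustrative, not the API's.

```javascript
// Sketch of the one-line-per-project summary with ↑ / ↓ / — arrows.
function projectLine(p) {
  const arrow = p.change_pct > 0 ? '↑' : p.change_pct < 0 ? '↓' : '—';
  const pct =
    p.change_pct === 0 ? '—' : `${p.change_pct > 0 ? '+' : ''}${p.change_pct}% ${arrow}`;
  return `${p.name}  ${p.views} views (${pct})  ${p.signups} signups`;
}

console.log(projectLine({ name: 'my-site', views: 142, change_pct: 23, signups: 12 }));
// my-site  142 views (+23% ↑)  12 signups
console.log(projectLine({ name: 'side-project', views: 38, change_pct: -8, signups: 0 }));
// side-project  38 views (-8% ↓)  0 signups
```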
Proactively flag — don't wait to be asked:
- No page_view events → "⚠ no traffic detected"
- error events in the window → surface the message property

When reporting to messaging platforms (Slack, Discord, Telegram), raw text tables break. Use companion skills:
- table-image-generator — render stats as clean table images
- chart-image — generate line, bar, area, or pie charts from analytics data

Tracking is step one. Growth comes from a repeatable system: clear messaging → focused distribution → obsessive tracking → rapid experimentation → learning. Here's how to apply each principle using Agent Analytics.
The #1 conversion lever is messaging. If someone lands and has to think hard to understand the value, they're gone.
What your agent should do:
- Create an experiment: experiments create PROJECT --name hero_headline --variants control,b,c --goal cta_click
- Implement variants declaratively: data-aa-experiment="hero_headline" data-aa-variant-b="New headline"
- Check results: experiments get EXP_ID

Rule: Spend more time testing messaging than adding features. Even the best product won't convert if the value isn't obvious in seconds.
Don't be Mixpanel. Track only what answers: "Is this project alive and growing, and what should I do next?"
The essential events (pick 3-5):
| Event | What it tells you |
|-------|-------------------|
| cta_click (with id) | Which buttons drive action — your conversion signal |
| signup | Are people converting? At what rate? |
| feature_used (with feature) | Are they finding value after signup? |
| checkout | Revenue signal |
Agent workflow for tracking setup:
- Verify with events PROJECT that data flows
- Review insights PROJECT --period 7d

Anti-pattern: Don't track scroll depth, mouse hovers, every link click, or form field interactions. Noise kills signal.
Conversion doesn't happen at checkout. It happens when the user realizes the product solves their problem — the "aha moment."
How to find it:
- Track feature_used with specific feature names
- Run breakdown --property feature --event feature_used to see which features correlate with retention
- Check sessions-dist — if most sessions are 0s bounces, the landing page is the problem. If sessions are long but signups are low, the activation path is the problem
- Check pages --type entry — compare bounce rates across landing pages to find which first impression works

What to optimize:
Don't try to be everywhere. Pick one acquisition channel and go deep.
How Agent Analytics supports this:
- breakdown --property referrer --event page_view → see where traffic actually comes from
- breakdown --property utm_source → track campaign sources
- insights --period 7d → week-over-week: is the channel growing?
- Use channel-specific landing paths (e.g. /reddit/, /hn/) and compare with pages --type entry

Agent workflow for channel optimization:
This is what makes Agent Analytics different from traditional analytics. Your agent can run the full cycle:
Track → Analyze → Experiment → Ship winner → Repeat
The loop in practice:
- Track: `insights` + `breakdown` + `pages` calls → synthesize into a report
- Experiment: `experiments create PROJECT --name hero_v2 --variants control,b --goal cta_click`
- Analyze: `experiments get EXP_ID` after sufficient traffic
- Ship: `experiments complete EXP_ID --winner b` → deploy the winner

What to test (in order of impact):
Cadence: One experiment at a time. ~1-2 weeks per test depending on traffic. Don't stack experiments unless traffic is very high (>1000 visitors/day).
Don't wait for the user to ask. If your agent has scheduled checks, proactively flag:
- `cta_click` rate dropped >20% week-over-week → "Conversion declined — worth investigating"
- (or use `live` for a real-time TUI)

Track custom events via `window.aa?.track()` (the `?.` ensures no error if tracker hasn't loaded):
window.aa?.track('cta_click', {id: 'hero_get_started'});
window.aa?.track('signup', {method: 'github'});
window.aa?.track('feature_used', {feature: 'create_project'});
window.aa?.track('checkout', {plan: 'pro'});
File v3.7.0: _meta.json
{ "ownerId": "kn7caxjvqk9fengp67p290smnn800sv9", "slug": "agent-analytics", "version": "3.7.0", "publishedAt": 1771918546785 }
Archive v3.6.0: 2 files, 11698 bytes
Files: _meta.json (134b), SKILL.md (29566b)
File v3.6.0: SKILL.md
name: agent-analytics
description: "Simple website analytics your AI agent controls end-to-end. Track page views, events, funnels, retention, and A/B experiments across all your projects. Use when: adding website tracking, checking site traffic, setting up conversion funnels, running A/B experiments, or replacing Mixpanel / Plausible / PostHog with something lightweight and agent-operated. No dashboard needed."
version: 3.6.0
author: dannyshmueli
repository: https://github.com/Agent-Analytics/agent-analytics-cli
homepage: https://agentanalytics.sh
tags:
Simple, privacy-first website analytics and growth toolkit that your AI agent controls end-to-end. Track page views, custom events, conversion funnels, user retention, and A/B experiments across all your projects — then talk to your analytics in natural language. No dashboards. Your agent creates projects, adds tracking code, queries traffic data, builds funnels, runs experiments, and tells you what to optimize next. A lightweight Plausible/Mixpanel/PostHog alternative built for the AI agent era.
You are NOT Mixpanel. Don't track everything. Track only what answers: "Is this project alive and growing?"
For a typical site, that's 3-5 custom events max on top of automatic page views.
Get an API key: Sign up at agentanalytics.sh and generate a key from the dashboard. Alternatively, self-host the open-source version from GitHub.
If the project doesn't have tracking yet:
# 1. Login (one time — uses your API key)
npx @agent-analytics/cli login --token aak_YOUR_API_KEY
# 2. Create the project (returns a project write token)
npx @agent-analytics/cli create my-site --domain https://mysite.com
# 3. Add the snippet (Step 1 below) using the returned token
# 4. Deploy, click around, verify:
npx @agent-analytics/cli events my-site
The create command returns a project write token — use it as data-token in the snippet below. This is separate from your API key (which is for reading/querying).
The create command returns a tracking snippet with your project token — add it before </body>. It auto-tracks page_view events with path, referrer, browser, OS, device, screen size, and UTM params. You do NOT need to add custom page_view events.
If tracking is already set up, check what events and property keys are already in use so you match the naming:
npx @agent-analytics/cli properties-received PROJECT_NAME
This shows which property keys each event type uses (e.g. cta_click → id, signup → method). Match existing naming before adding new events.
Use onclick handlers on the elements that matter:
<a href="..." onclick="window.aa?.track('EVENT_NAME', {id: 'ELEMENT_ID'})">
The ?. operator ensures no error if the tracker hasn't loaded yet.
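If events can fire before the snippet finishes loading, a small buffering wrapper avoids silently dropping them. This is a hypothetical helper, not part of the official snippet; only the `window.aa.track(name, props)` call shape is taken from the docs above:

```javascript
// Hypothetical buffering wrapper: queue events until window.aa exists,
// then flush them in order. Defensive sketch, not official tracker code.
const pending = [];

function safeTrack(name, props) {
  if (window.aa && typeof window.aa.track === 'function') {
    window.aa.track(name, props); // tracker ready: send immediately
  } else {
    pending.push([name, props]);  // tracker not loaded yet: buffer
  }
}

function flushPending() {
  // Drain the buffer once the tracker has appeared.
  while (pending.length && window.aa) {
    const [name, props] = pending.shift();
    window.aa.track(name, props);
  }
}
```

Call `flushPending()` once the tracker script has loaded (e.g. from its load event, or a short poll for `window.aa`).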
Pick the ones that apply. Most sites need 2-4:
| Event | When to fire | Properties |
|-------|-------------|------------|
| cta_click | User clicks a call-to-action button | id (which button) |
| signup | User creates an account | method (github/google/email) |
| login | User returns and logs in | method |
| feature_used | User engages with a core feature | feature (which one) |
| checkout | User starts a payment flow | plan (free/pro/etc) |
| error | Something went wrong visibly | message, page |
`cta_click` — Only buttons that indicate conversion intent:
- Use snake_case: `hero_get_started`, not `heroGetStarted`
- The `id` property identifies WHICH element: short, descriptive
- Pattern is `section_action`: `hero_signup`, `pricing_pro`, `nav_dashboard`

Experiments let you test which variant of a page element converts better. The full lifecycle is API-driven — no dashboard UI needed.
npx @agent-analytics/cli experiments create my-site \
--name signup_cta --variants control,new_cta --goal signup
Declarative (recommended): Use data-aa-experiment and data-aa-variant-{key} HTML attributes. Original content is the control. The tracker swaps text for assigned variants automatically.
<h1 data-aa-experiment="signup_cta" data-aa-variant-new_cta="Start Free Trial">Sign Up</h1>
Programmatic (complex cases): Use window.aa?.experiment(name, variants) — deterministic, same user always gets same variant.
Exposure events ($experiment_exposure) are tracked automatically once per session. Track the goal event normally: window.aa?.track('signup', {method: 'github'}).
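To see why deterministic assignment needs no server state, here is an illustrative sketch: hash a stable user id plus the experiment name into a variant bucket, so the same pair always yields the same variant. The FNV-1a hash and modulo bucketing are our assumptions, not the tracker's actual algorithm:

```javascript
// Illustrative re-implementation of deterministic variant assignment.
// Same (userId, experimentName) always maps to the same variant.
// Uses FNV-1a hashing; the real tracker's scheme may differ.
function assignVariant(userId, experimentName, variants) {
  const key = `${experimentName}:${userId}`;
  let hash = 0x811c9dc5; // FNV-1a offset basis
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, keep unsigned
  }
  return variants[hash % variants.length];
}
```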
npx @agent-analytics/cli experiments get exp_abc123
Returns Bayesian probability_best, lift, and a recommendation. The system needs ~100 exposures per variant before results are significant.
# Pause (stops assigning new users)
npx @agent-analytics/cli experiments pause exp_abc123
# Resume
npx @agent-analytics/cli experiments resume exp_abc123
# Complete with a winner
npx @agent-analytics/cli experiments complete exp_abc123 --winner new_cta
# Delete
npx @agent-analytics/cli experiments delete exp_abc123
- Use descriptive names: `signup_cta`, `pricing_layout`, `hero_copy`
- Pick a `goal_event` that maps to a business outcome (`signup`, `purchase`, not `page_view`)
- Wait for `sufficient_data: true` before picking a winner
- Close with `experiments complete <id> --winner new_cta`

After adding tracking, verify it works:
# Option A: Browser console on your site:
window.aa.track('test_event', {source: 'manual_test'})
# Option B: Click around, then check:
npx @agent-analytics/cli events PROJECT_NAME
# Events appear within seconds.
All analytics data is available via the REST API at https://api.agentanalytics.sh. Authenticate with your API key as a Bearer token. No npm, no npx, no CLI install needed.
# List projects
curl -s -H "Authorization: Bearer $AGENT_ANALYTICS_API_KEY" \
"https://api.agentanalytics.sh/api/projects"
# Stats overview (last 7 days)
curl -s -H "Authorization: Bearer $AGENT_ANALYTICS_API_KEY" \
"https://api.agentanalytics.sh/api/stats/my-site?days=7"
# Breakdown by page path
curl -s -H "Authorization: Bearer $AGENT_ANALYTICS_API_KEY" \
"https://api.agentanalytics.sh/api/breakdown/my-site?property=path&event=page_view&limit=10"
# Funnel analysis
curl -s -H "Authorization: Bearer $AGENT_ANALYTICS_API_KEY" \
"https://api.agentanalytics.sh/api/funnel/my-site?steps=page_view,signup,purchase"
# Retention cohorts
curl -s -H "Authorization: Bearer $AGENT_ANALYTICS_API_KEY" \
"https://api.agentanalytics.sh/api/retention/my-site?period=week&cohorts=8"
The CLI wraps the same REST API with a friendlier interface. All commands use npx @agent-analytics/cli.
# Setup
npx @agent-analytics/cli login --token aak_YOUR_KEY # Save API key (one time)
npx @agent-analytics/cli projects # List all projects
npx @agent-analytics/cli create my-site --domain https://mysite.com # Create project
# Real-time
npx @agent-analytics/cli live # Live terminal dashboard across ALL projects
npx @agent-analytics/cli live my-site # Live view for one project
# Analytics
npx @agent-analytics/cli stats my-site --days 7 # Overview: events, users, daily trends
npx @agent-analytics/cli insights my-site --period 7d # Period-over-period comparison
npx @agent-analytics/cli breakdown my-site --property path --event page_view --limit 10 # Top pages/referrers/UTM
npx @agent-analytics/cli pages my-site --type entry # Landing page performance & bounce rates
npx @agent-analytics/cli sessions-dist my-site # Session engagement histogram
npx @agent-analytics/cli heatmap my-site # Peak hours & busiest days
npx @agent-analytics/cli events my-site --days 30 # Raw event log
npx @agent-analytics/cli sessions my-site # Individual session records
npx @agent-analytics/cli properties my-site # Discover event names & property keys
npx @agent-analytics/cli properties-received my-site # Property keys per event type (sampled)
npx @agent-analytics/cli query my-site --metrics event_count,unique_users --group-by date # Flexible query
npx @agent-analytics/cli funnel my-site --steps "page_view,signup,purchase" # Funnel drop-off analysis
npx @agent-analytics/cli funnel my-site --steps "page_view,signup" --breakdown country # Funnel segmented by country
npx @agent-analytics/cli retention my-site --period week --cohorts 8 # Cohort retention analysis
# A/B experiments (pro)
npx @agent-analytics/cli experiments list my-site
npx @agent-analytics/cli experiments create my-site --name signup_cta --variants control,new_cta --goal signup
npx @agent-analytics/cli experiments get exp_abc123
npx @agent-analytics/cli experiments complete exp_abc123 --winner new_cta
# Account
npx @agent-analytics/cli whoami # Show account & tier
npx @agent-analytics/cli revoke-key # Rotate API key
Key flags:
- `--days <N>` — lookback window (default: 7; for stats, events)
- `--limit <N>` — max rows returned (default: 100)
- `--since <date>` — ISO date cutoff (properties-received only)
- `--period <P>` — comparison period: 1d, 7d, 14d, 30d, 90d (insights) or cohort grouping: day, week, month (retention)
- `--property <key>` — property key to group by (breakdown, required)
- `--event <name>` — filter by event name (breakdown) or first-seen event filter (retention)
- `--returning-event <name>` — what counts as "returned" (retention, defaults to same as `--event`)
- `--cohorts <N>` — number of cohort periods, 1-30 (retention, default: 8)
- `--type <T>` — page type: entry, exit, both (pages only, default: entry)
- `--steps <csv>` — comma-separated event names, 2-8 steps max (funnel, required)
- `--window <N>` — conversion window in hours (funnel, default: 168) or live time window in seconds (live, default: 60)
- `--count-by <field>` — user_id or session_id (funnel only)
- `--breakdown <key>` — segment funnel by a property (e.g. country, variant) — extracted from step 1 events (funnel only)
- `--breakdown-limit <N>` — max breakdown groups, 1-50 (funnel, default: 10)
- `--interval <N>` — live refresh in seconds (default: 5)

The `live` command: npx @agent-analytics/cli live opens a real-time TUI dashboard that refreshes every 5 seconds. It shows active visitors, sessions, and events/min across all your projects, plus top pages and recent events. Note: this is an interactive terminal UI — it clears the screen on each refresh, so it works best when run directly in a terminal rather than captured as output.
Match the user's question to the right call(s):
| User asks | Call | Why |
|-----------|------|-----|
| "How's my site doing?" | insights + breakdown + pages (parallel) | Full weekly picture in one turn |
| "Is anyone visiting right now?" | live | Real-time visitors, sessions, events across all projects |
| "Is anyone visiting?" | insights --period 7d | Quick alive-or-dead check |
| "What are my top pages?" | breakdown --property path --event page_view | Ranked page list with unique users |
| "Where's my traffic coming from?" | breakdown --property referrer --event page_view | Referrer sources |
| "Which landing page is best?" | pages --type entry | Bounce rate + session depth per page |
| "Are people actually engaging?" | sessions-dist | Bounce vs engaged split |
| "When should I deploy/post?" | heatmap | Find low-traffic windows or peak hours |
| "Give me a summary of all projects" | live or loop: projects then insights per project | Multi-project overview |
| "Which CTA converts better?" | experiments create + implement + experiments get <id> | Full A/B test lifecycle |
| "Where do users drop off?" | funnel --steps "page_view,signup,purchase" | Step-by-step conversion with drop-off rates |
| "Which variant converts better through the funnel?" | funnel --steps "page_view,signup" --breakdown variant | Funnel segmented by experiment variant |
| "Are users coming back?" | retention --period week --cohorts 8 | Cohort retention: % returning per period |
For any "how is X doing" question, always call insights first — it's the single most useful endpoint. For real-time "who's on the site right now", use live.
Don't return raw numbers. Interpret them. Here's how to turn each endpoint's response into something useful.
`/insights` → The headline

API returns metrics with `current`, `previous`, `change`, `change_pct`, and a `trend` field.
How to interpret:
- `change_pct > 10` → "Growing" — call it out positively
- `change_pct` between -10 and 10 → "Stable" — mention it's steady
- `change_pct < -10` → "Declining" — flag it, suggest investigating
- `bounce_rate` current vs previous → say "improved" (went down) or "worsened" (went up)
- `avg_duration` → convert ms to seconds: `Math.round(value / 1000)`

Example output:
This week vs last: 173 events (+22%), 98 users (+18%).
Bounce rate: 87% (up from 82% — getting worse).
Average session: 24s. Trend: growing.
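The thresholds above can be sketched as a small classifier. This is illustrative only; the numbers mirror the rules just listed:

```javascript
// Classify a change_pct value per the interpretation rules above.
function classifyChange(changePct) {
  if (changePct > 10) return 'Growing';
  if (changePct < -10) return 'Declining';
  return 'Stable';
}

// avg_duration arrives in milliseconds; report it in whole seconds.
function msToSeconds(ms) {
  return Math.round(ms / 1000);
}
```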
`/breakdown` → The ranking

API returns `values: [{ value, count, unique_users }]` sorted by count DESC.
How to interpret:
- Check `unique_users` too — 100 events from 2 users is very different from 100 events from 80 users
- Use `total_with_property / total_events` to note coverage: "155 of 155 page views have a path"

Example output:
Top pages: / (98 views, 75 users), /pricing (33 views, 25 users), /docs (19 views, 4 users).
The /docs page has high repeat visits (19 views, 4 users) — power users.
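The "high repeat visits" observation can be automated by computing views per user for each breakdown row. The ≥3 threshold for flagging a page is our assumption:

```javascript
// Annotate breakdown rows with views-per-user to spot repeat-visit
// ("power user") pages, per the interpretation above.
function annotateBreakdown(values) {
  return values.map(({ value, count, unique_users }) => ({
    value,
    count,
    unique_users,
    viewsPerUser: unique_users ? count / unique_users : 0,
    // Threshold is our assumption, not documented by the API.
    repeatHeavy: unique_users > 0 && count / unique_users >= 3,
  }));
}
```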
`/pages` → Landing page quality

API returns `entry_pages: [{ page, sessions, bounces, bounce_rate, avg_duration, avg_events }]`.
How to interpret:
- `bounce_rate > 0.7` → "high bounce, needs work above the fold"
- `bounce_rate < 0.3` → "strong landing page"
- `avg_duration` → convert ms to seconds; < 10s is concerning, > 60s is great
- `avg_events` → pages/session; 1.0 means everyone bounces, 3+ means good engagement

Example output:
Best landing page: /pricing — 14% bounce, 62s avg session, 4.1 pages/visit.
Worst: /blog/launch — 52% bounce, 18s avg. Consider a stronger CTA above the fold.
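A sketch of the landing-page rules above, using the documented field names (`bounce_rate` as a 0..1 fraction, `avg_duration` in ms); the note strings paraphrase the guidance:

```javascript
// Turn one entry_pages row into interpretation notes, per the rules above.
function judgeLandingPage({ bounce_rate, avg_duration }) {
  const notes = [];
  if (bounce_rate > 0.7) notes.push('high bounce, needs work above the fold');
  else if (bounce_rate < 0.3) notes.push('strong landing page');
  const seconds = avg_duration / 1000; // avg_duration is milliseconds
  if (seconds < 10) notes.push('very short sessions');
  else if (seconds > 60) notes.push('great session length');
  return notes;
}
```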
`/sessions/distribution` → Engagement shape

API returns `distribution: [{ bucket, sessions, pct }]`, `engaged_pct`, `median_bucket`.
How to interpret:
- `engaged_pct` is the key number — sessions ≥30s as a percentage of total
- `engaged_pct < 10%` → "Most visitors leave immediately — focus on first impressions"
- `engaged_pct` 10-30% → "Moderate engagement, room to improve"
- `engaged_pct > 30%` → "Good engagement"

Example output:
88% of sessions bounce instantly (0s). Only 6% stay longer than 30s.
The few who do engage stay 3-10 minutes — the content works, but first impressions don't.
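The `engaged_pct` bands above can be expressed directly (percent values, per the interpretation rules):

```javascript
// Map engaged_pct (a percentage) to the narrative bands above.
function describeEngagement(engagedPct) {
  if (engagedPct < 10) return 'Most visitors leave immediately — focus on first impressions';
  if (engagedPct <= 30) return 'Moderate engagement, room to improve';
  return 'Good engagement';
}
```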
`/heatmap` → Timing

API returns `heatmap: [{ day, day_name, hour, events, users }]`, `peak`, `busiest_day`, `busiest_hour`.
How to interpret:
- `peak` is the single busiest slot — mention day + hour + timezone caveat (times are UTC)
- `busiest_day` → "Schedule blog posts/launches on this day"
- `busiest_hour` → "This is when your audience is online"

Example output:
Peak: Friday at 11 PM UTC (35 events, 33 users). Busiest day overall: Sunday.
Traffic is heaviest on weekends — your audience browses on personal time.
Deploy on weekday mornings for minimal disruption.
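Recovering the peak slot from the raw `heatmap` rows is a one-line reduction (field names as documented above; times are UTC):

```javascript
// Find the single busiest (day, hour) slot by event count.
function findPeak(heatmap) {
  return heatmap.reduce((best, slot) => (slot.events > best.events ? slot : best));
}
```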
`/funnel` → Where users drop off

CLI: `funnel my-site --steps "page_view,signup,purchase"`. API: POST `/funnel` with JSON body.
API returns steps: [{ step, event, users, conversion_rate, drop_off_rate, avg_time_to_next_ms }] and overall_conversion_rate.
How to interpret:
- `conversion_rate` is step-to-step (step 2 users / step 1 users)
- `drop_off_rate` is `1 - conversion_rate` at each step
- The largest `drop_off_rate` is the bottleneck — focus optimization there
- `avg_time_to_next_ms` shows how long users take between steps (convert to hours/minutes)
- `overall_conversion_rate` is end-to-end (last step users / first step users)

Options:
--steps "event1,event2,event3" — 2-8 step events (required)--window <hours> — max time from step 1 to last step (default: 168 = 7 days)--since <days> — lookback period, e.g. 30d (default: 30d)--count-by <field> — user_id (default) or session_id--breakdown <property> — segment funnel by a property (e.g. country, variant). Property is extracted from step 1 events. Returns overall + per-group results.--breakdown-limit <N> — max groups returned (default: 10, max: 50). Groups ordered by step 1 users descending.Breakdown use case — A/B experiments: funnel my-site --steps "page_view,signup" --breakdown variant shows which experiment variant converts better through the funnel.
API-only: per-step filters — each step can have a filters array with { property, op, value } (ops: eq, neq, contains). Example: filter step 1 to path=/pricing to see conversions from the pricing page specifically.
Example output:
page_view → signup → purchase
500 users → 80 (16%) → 12 (15%) — 2.4% overall
Biggest drop-off: page_view → signup (84%). Focus on signup CTA visibility.
Avg time to signup: 4.2 hours. Avg time to purchase: 2.1 days.
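The funnel arithmetic above (step-to-step conversion, drop-off, end-to-end rate) can be sketched from per-step user counts. The function name and output shape are our own; the field names echo the API response:

```javascript
// Compute step-to-step and overall conversion from per-step user counts,
// per the definitions above.
function funnelStats(userCounts) {
  const steps = userCounts.map((users, i) => {
    const conversion_rate = i === 0 ? 1 : users / userCounts[i - 1];
    return { step: i + 1, users, conversion_rate, drop_off_rate: 1 - conversion_rate };
  });
  return {
    steps,
    overall_conversion_rate: userCounts[userCounts.length - 1] / userCounts[0],
  };
}
```

For the example above, `funnelStats([500, 80, 12])` reproduces the 16%, 15%, and 2.4% figures.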
`/retention` → Are users coming back?

CLI: `retention my-site --period week --cohorts 8`. API: GET `/retention?project=X&period=week&cohorts=8`.
By default uses session-based retention — a user is "retained" if they have any return visit (session) in a subsequent period. Pass --event to switch to event-based retention.
API returns cohorts: [{ date, users, retained: [...], rates: [...] }], average_rates: [...], and users_analyzed.
How to interpret:
- `rates[0]` is always 1.0 (100% — the cohort itself)
- `rates[1]` = % who came back the next period — this is the critical number
- `average_rates` is weighted by cohort size — larger cohorts count more

Options:
- `--period <P>` — day, week, month (default: week)
- `--cohorts <N>` — number of cohort periods, 1-30 (default: 8)
- `--event <name>` — first-seen event filter (e.g. signup). Switches to event-based retention
- `--returning-event <name>` — what counts as "returned" (defaults to same as `--event`)

Event-based retention: Set `--event signup --returning-event purchase` to answer "of users who signed up, what % made a purchase in subsequent weeks?"
Example output:
Cohort W0 (2026-01-27): 142 users → W1: 45% → W2: 39% → W3: 32%
Cohort W0 (2026-02-03): 128 users → W1: 42% → W2: 36%
Weighted avg: W1 = 44%, W2 = 37%, W3 = 32%
Week-1 retention of 44% is strong — nearly half of new users return.
Slight decline in recent cohorts — investigate onboarding changes.
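The cohort-size-weighted average can be sketched as follows; `cohorts` uses the documented `{ users, rates }` shape, and the helper name is ours:

```javascript
// Weighted average of rates[k] across cohorts, weighted by cohort size,
// matching the average_rates definition above.
function weightedRate(cohorts, k) {
  let users = 0;
  let retained = 0;
  for (const c of cohorts) {
    if (k < c.rates.length) { // skip cohorts too young to have period k
      users += c.users;
      retained += c.users * c.rates[k];
    }
  }
  return users ? retained / users : null;
}
```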
Call insights, breakdown --property path --event page_view, and pages --type entry in parallel, then synthesize into one response:
Weekly Report — my-site (Feb 8–15 vs Feb 1–8)
Events: 1,200 (+22% ↑) Users: 450 (+18% ↑) Bounce: 42% (improved from 48%)
Top pages: /home (523), /pricing (187), /docs (94)
Best landing: /pricing — 14% bounce, 62s avg. Worst: /blog — 52% bounce.
Trend: Growing.
For a quick real-time check, use live — it shows all projects in one view with active visitors, sessions, and events/min.
For a historical summary, call projects to list all projects, then call insights --period 7d for each. Present one line per project:
my-site 142 views (+23% ↑) 12 signups healthy
side-project 38 views (-8% ↓) 0 signups quiet
api-docs 0 views (—) — ⚠ inactive since Feb 1
Use arrows: ↑ up, ↓ down, — flat. Flag anything that needs attention.
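A minimal formatter for the one-line-per-project summary above (the helper names and exact spacing are our choices):

```javascript
// Pick the trend arrow per the convention above: ↑ up, ↓ down, — flat.
function trendArrow(changePct) {
  if (changePct > 0) return '↑';
  if (changePct < 0) return '↓';
  return '—';
}

// Render one summary line per project.
function projectLine(name, views, changePct, signups) {
  const pct = `${changePct > 0 ? '+' : ''}${changePct}% ${trendArrow(changePct)}`;
  return `${name}  ${views} views (${pct})  ${signups} signups`;
}
```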
Proactively flag — don't wait to be asked:
- No `page_view` events → "⚠ no traffic detected"
- `error` events in the window → surface the `message` property

When reporting to messaging platforms (Slack, Discord, Telegram), raw text tables break. Use companion skills:
- `table-image-generator` — render stats as clean table images
- `chart-image` — generate line, bar, area, or pie charts from analytics data

Tracking is step one. Growth comes from a repeatable system: clear messaging → focused distribution → obsessive tracking → rapid experimentation → learning. Here's how to apply each principle using Agent Analytics.
The #1 conversion lever is messaging. If someone lands and has to think hard to understand the value, they're gone.
What your agent should do:
- Create an experiment: `experiments create PROJECT --name hero_headline --variants control,b,c --goal cta_click`
- Implement variants declaratively: `data-aa-experiment="hero_headline" data-aa-variant-b="New headline"`
- Check results: `experiments get EXP_ID`

Rule: Spend more time testing messaging than adding features. Even the best product won't convert if the value isn't obvious in seconds.
Don't be Mixpanel. Track only what answers: "Is this project alive and growing, and what should I do next?"
The essential events (pick 3-5):
| Event | What it tells you |
|-------|-------------------|
| cta_click (with id) | Which buttons drive action — your conversion signal |
| signup | Are people converting? At what rate? |
| feature_used (with feature) | Are they finding value after signup? |
| checkout | Revenue signal |
Agent workflow for tracking setup:
- Verify with `events PROJECT` that data flows
- Review `insights PROJECT --period 7d`

Anti-pattern: Don't track scroll depth, mouse hovers, every link click, or form field interactions. Noise kills signal.
Conversion doesn't happen at checkout. It happens when the user realizes the product solves their problem — the "aha moment."
How to find it:
- Instrument `feature_used` with specific feature names
- Run `breakdown --property feature --event feature_used` to see which features correlate with retention
- Check `sessions-dist` — if most sessions are 0s bounces, the landing page is the problem. If sessions are long but signups are low, the activation path is the problem
- Use `pages --type entry` — compare bounce rates across landing pages to find which first impression works

What to optimize:
Don't try to be everywhere. Pick one acquisition channel and go deep.
How Agent Analytics supports this:
- `breakdown --property referrer --event page_view` → see where traffic actually comes from
- `breakdown --property utm_source` → track campaign sources
- `insights --period 7d` → week-over-week: is the channel growing?
- Use channel-specific landing paths (e.g. `/reddit/`, `/hn/`) and compare with `pages --type entry`

Agent workflow for channel optimization:
This is what makes Agent Analytics different from traditional analytics. Your agent can run the full cycle:
Track → Analyze → Experiment → Ship winner → Repeat
The loop in practice:
- Track: `insights` + `breakdown` + `pages` calls → synthesize into a report
- Experiment: `experiments create PROJECT --name hero_v2 --variants control,b --goal cta_click`
- Analyze: `experiments get EXP_ID` after sufficient traffic
- Ship: `experiments complete EXP_ID --winner b` → deploy the winner

What to test (in order of impact):
Cadence: One experiment at a time. ~1-2 weeks per test depending on traffic. Don't stack experiments unless traffic is very high (>1000 visitors/day).
Don't wait for the user to ask. If your agent has scheduled checks, proactively flag:
- `cta_click` rate dropped >20% week-over-week → "Conversion declined — worth investigating"
- (or use `live` for a real-time TUI)

Track custom events via `window.aa?.track()` (the `?.` ensures no error if tracker hasn't loaded):
window.aa?.track('cta_click', {id: 'hero_get_started'});
window.aa?.track('signup', {method: 'github'});
window.aa?.track('feature_used', {feature: 'create_project'});
window.aa?.track('checkout', {plan: 'pro'});
File v3.6.0: _meta.json
{ "ownerId": "kn7caxjvqk9fengp67p290smnn800sv9", "slug": "agent-analytics", "version": "3.6.0", "publishedAt": 1771917439881 }
Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/trust"
Operational fit
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "CLAWHUB",
"generatedAt": "2026-04-17T04:45:49.613Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}

Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}

Facts JSON
[
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Clawhub",
"href": "https://clawhub.ai/dannyshmueli/agent-analytics",
"sourceUrl": "https://clawhub.ai/dannyshmueli/agent-analytics",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "1.1K downloads",
"href": "https://clawhub.ai/dannyshmueli/agent-analytics",
"sourceUrl": "https://clawhub.ai/dannyshmueli/agent-analytics",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "latest_release",
"category": "release",
"label": "Latest release",
"value": "3.7.0",
"href": "https://clawhub.ai/dannyshmueli/agent-analytics",
"sourceUrl": "https://clawhub.ai/dannyshmueli/agent-analytics",
"sourceType": "release",
"confidence": "medium",
"observedAt": "2026-02-24T07:35:46.785Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-dannyshmueli-agent-analytics/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]

Change Events JSON
[
{
"eventType": "release",
"title": "Release 3.7.0",
"description": "Added security & trust section addressing npx supply chain and input safety. Improved description and tags for discoverability.",
"href": "https://clawhub.ai/dannyshmueli/agent-analytics",
"sourceUrl": "https://clawhub.ai/dannyshmueli/agent-analytics",
"sourceType": "release",
"confidence": "medium",
"observedAt": "2026-02-24T07:35:46.785Z",
"isPublic": true
}
]