Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Xpersona Agent
Complete Voice of Customer system — collect, analyze, and operationalize customer feedback at scale. Covers NPS/CSAT/CES measurement, customer interview methodology, feedback taxonomy, feature request prioritization, sentiment analysis, closed-loop workflows, and VoC-driven product decisions. Use when building feedback systems, running customer interviews, measuring satisfaction, analyzing feature requests, reducing churn, or closing the feedback loop. Trigger on "customer feedback", "voice of customer", "NPS", "CSAT", "CES", "feature requests", "feedback system", "customer interviews", "satisfaction survey", "churn analysis".
clawhub skill install skills:1kalin:afrexai-voc-engine
Overall rank
#62
Adoption
No public adoption signal
Trust
Unknown
Freshness
Last checked Feb 25, 2026
Best For
afrexai-voc-engine is best for team, resolved, and experience capability workflows where OpenClaw compatibility matters.
Not Ideal For
Workflows that require deterministic execution: contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, CLAWHUB, runtime-metrics, public facts pack
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Overview
Complete Voice of Customer system — collect, analyze, and operationalize customer feedback at scale. Covers NPS/CSAT/CES measurement, customer interview methodology, feedback taxonomy, feature request prioritization, sentiment analysis, closed-loop workflows, and VoC-driven product decisions. Use when building feedback systems, running customer interviews, measuring satisfaction, analyzing feature requests, reducing churn, or closing the feedback loop. Trigger on "customer feedback", "voice of customer", "NPS", "CSAT", "CES", "feature requests", "feedback system", "customer interviews", "satisfaction survey", "churn analysis".
Capability contract not published. No trust telemetry is available yet. Last updated April 15, 2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Openclaw
Artifacts
0
Benchmarks
0
Last release
Unpublished
Install & run
clawhub skill install skills:1kalin:afrexai-voc-engine
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Public facts
Vendor
Openclaw
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Captured outputs
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
voc_program:
  company: "[Company Name]"
  product: "[Product/Service]"
  stage: "[Pre-PMF | Growth | Scale | Enterprise]"
  customer_count: "[approximate]"
  current_nps: "[score or unknown]"
  current_churn: "[monthly % or unknown]"
  primary_goal: "[reduce churn | improve NPS | prioritize roadmap | find PMF]"
  segments:
    - name: "[Enterprise / SMB / Consumer]"
      size: "[count]"
      arr_contribution: "[%]"
      priority: "[1-3]"
  existing_channels:
    - "[list current feedback collection methods]"
  gaps:
    - "[what feedback are you NOT collecting?]"
Editorial read
Docs source
CLAWHUB
Editorial quality
ready
Complete system for collecting, analyzing, and operationalizing customer feedback to drive product decisions, reduce churn, and increase expansion revenue.
voc_program:
  company: "[Company Name]"
  product: "[Product/Service]"
  stage: "[Pre-PMF | Growth | Scale | Enterprise]"
  customer_count: "[approximate]"
  current_nps: "[score or unknown]"
  current_churn: "[monthly % or unknown]"
  primary_goal: "[reduce churn | improve NPS | prioritize roadmap | find PMF]"
  segments:
    - name: "[Enterprise / SMB / Consumer]"
      size: "[count]"
      arr_contribution: "[%]"
      priority: "[1-3]"
  existing_channels:
    - "[list current feedback collection methods]"
  gaps:
    - "[what feedback are you NOT collecting?]"
Score your current program (1-5 per dimension):
| Dimension | 1 (Ad Hoc) | 3 (Structured) | 5 (Operationalized) |
|-----------|-----------|----------------|---------------------|
| Collection | Sporadic, reactive only | Multiple channels, some regularity | Automated, multi-channel, triggered |
| Analysis | Read individual comments | Tag and categorize | Quantified themes, trend tracking |
| Distribution | Stays with support team | Shared in meetings | Real-time dashboards, auto-routed |
| Action | Occasional fixes | Quarterly roadmap input | Closed-loop, feedback drives decisions |
| Measurement | No tracking | NPS or CSAT exists | Full metric suite, benchmarked |
Score interpretation:
Design feedback collection around the customer journey:
collection_channels:
  # ALWAYS-ON (continuous)
  in_app_widget:
    placement: "[specific screens/moments]"
    trigger: "User-initiated (feedback button)"
    format: "Open text + optional category selector"
    volume: "High"
    quality: "Medium (context-rich but brief)"
  support_tickets:
    source: "[Intercom / Zendesk / Help Scout / email]"
    tagging: "Auto-tag feedback themes (see taxonomy)"
    volume: "High"
    quality: "High (real problems, real context)"
  feature_request_board:
    tool: "[Canny / ProductBoard / custom]"
    voting: true
    status_updates: true # Close the loop!
    volume: "Medium"
    quality: "High (considered requests)"
  # PERIODIC (scheduled)
  nps_survey:
    frequency: "Quarterly"
    trigger: "Email to active users (>30 days tenure)"
    follow_up: "Open text for score explanation"
    segments: "[by plan tier, tenure, usage level]"
  csat_survey:
    trigger: "After key interactions (onboarding complete, support resolved, feature shipped)"
    format: "1-5 stars + optional comment"
  ces_survey:
    trigger: "After task completion (setup, first report, integration)"
    question: "How easy was it to [specific task]? (1-7)"
  # EVENT-DRIVEN (triggered)
  onboarding_check_in:
    trigger: "Day 7, Day 30, Day 90"
    format: "Short email survey (3 questions max)"
  cancellation_survey:
    trigger: "On churn/downgrade"
    format: "Required reason + optional comment"
    options:
      - "Too expensive"
      - "Missing features I need"
      - "Too complex / hard to use"
      - "Switched to competitor"
      - "No longer need this type of product"
      - "Poor support experience"
      - "Other: [free text]"
  win_loss_interview:
    trigger: "After closed-won or closed-lost deal"
    format: "15-min call or async survey"
  renewal_feedback:
    trigger: "30 days before renewal"
    format: "Health check + satisfaction + roadmap preview"
| Customer Base | Confidence 95%, ±5% | Confidence 90%, ±5% | Minimum Viable |
|--------------|---------------------|---------------------|----------------|
| 100 | 80 | 74 | 30 |
| 500 | 217 | 176 | 50 |
| 1,000 | 278 | 213 | 75 |
| 5,000 | 357 | 258 | 100 |
| 10,000+ | 370 | 264 | 150 |
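These sample sizes follow the standard finite-population formula. A short TypeScript sketch (illustrative, not part of the skill) reproduces the table to within a response or two, assuming maximum variability p = 0.5:

// Sample size with finite-population correction.
// zScore: 1.96 for 95% confidence, 1.645 for 90%; marginOfError as a fraction.
function sampleSize(population: number, zScore: number, marginOfError: number): number {
  const p = 0.5; // worst-case (maximum variability) assumption
  const n0 = (zScore ** 2 * p * (1 - p)) / marginOfError ** 2; // infinite-population size
  return Math.round(n0 / (1 + (n0 - 1) / population)); // finite-population correction
}

// sampleSize(1000, 1.96, 0.05)  → 278  (matches the 95%, ±5% column)
// sampleSize(1000, 1.645, 0.05) → 213  (matches the 90%, ±5% column)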
Expected response rates by channel:
| Type | When | Duration | Sample | Goal |
|------|------|----------|--------|------|
| Discovery | Pre-build, exploring problem | 45-60 min | 8-12 | Validate problem exists |
| Usability | Feature in development | 30-45 min | 5-7 | Test assumptions |
| Satisfaction | Ongoing, quarterly | 30 min | 5-8 per segment | Deep qualitative NPS |
| Churn | After cancellation | 15-20 min | All willing | Understand why they left |
| Win/Loss | After deal close | 20-30 min | 5+ per quarter | Sales process feedback |
| Advisory | Strategic, quarterly | 60 min | 3-5 power users | Co-create roadmap |
interview:
  type: "[Discovery / Satisfaction / Churn / Win-Loss]"
  participant: "[name, role, company, segment]"
  date: "[YYYY-MM-DD]"
  interviewer: "[name]"
  pre_interview:
    - Review account history (usage data, support tickets, NPS scores)
    - Check CRM for relationship context
    - Prepare 3 hypotheses to test
  warm_up: # 3-5 min
    - "Tell me about your role and what a typical week looks like."
    - "How long have you been using [product]?"
    - "[Acknowledge something specific about their usage]"
  context: # 5-10 min
    - "What problem were you trying to solve when you found us?"
    - "What were you using before? What worked/didn't?"
    - "Walk me through how [product] fits into your workflow."
  core_exploration: # 15-20 min
    - "Walk me through the last time you used [specific feature]."
    - "What's the most valuable thing [product] does for you?"
    - "What's the most frustrating thing?"
    - "Tell me about a time [product] didn't work the way you expected."
    - "What workaround have you built because we're missing something?"
  outcome_assessment: # 5-10 min
    - "How do you measure the value you get from [product]?"
    - "What would happen if you lost access tomorrow?"
    - "On a scale of 1-10, how likely would you recommend us? Why that number?"
  future_vision: # 5 min
    - "If you had a magic wand, what would [product] do that it doesn't today?"
    - "What's changing in your world that we should know about?"
  close: # 2 min
    - "Anything I didn't ask that I should have?"
    - "Would you be open to joining our advisory group / beta testing?"
    - Thank + follow-up timeline
  post_interview:
    key_quotes: []
    surprises: []
    hypotheses_confirmed: []
    hypotheses_invalidated: []
    action_items: []
The Mom Test (Rob Fitzpatrick):
Five Whys for Root Cause:
Silence technique: After they answer, count to 5 silently. They'll often add the most valuable insight in the silence.
Mirror technique: Repeat their last 3 words as a question. They'll elaborate without you leading.
After every 5 interviews in a batch:
synthesis:
  batch: "[Interview batch name]"
  dates: "[range]"
  participants: "[count, segments]"
  themes:
    - theme: "[Name]"
      frequency: "[X of Y participants mentioned]"
      severity: "[Critical / High / Medium / Low]"
      representative_quotes:
        - "[exact quote]" — [participant type]
        - "[exact quote]" — [participant type]
      implications: "[what this means for product/business]"
      recommended_action: "[specific next step]"
  surprises:
    - "[Things you didn't expect]"
  segments_divergence:
    - "[Where different segments disagreed]"
  confidence_level: "[High / Medium / Low]"
  next_steps: []
feedback_taxonomy:
  product:
    usability:
      - "Confusing UI / navigation"
      - "Feature hard to find"
      - "Too many steps to complete task"
      - "Mobile experience issues"
    functionality:
      - "Feature doesn't work as expected"
      - "Missing capability"
      - "Performance / speed"
      - "Integration issues"
    reliability:
      - "Bugs / errors"
      - "Data loss / corruption"
      - "Downtime / availability"
  experience:
    onboarding:
      - "Setup too complex"
      - "Documentation unclear"
      - "Time to value too long"
    support:
      - "Response time"
      - "Resolution quality"
      - "Self-service gaps"
    communication:
      - "Product updates unclear"
      - "Billing confusion"
      - "Status page / transparency"
  value:
    pricing:
      - "Too expensive"
      - "Wrong packaging / tiers"
      - "Hidden costs"
      - "Competitor cheaper"
    roi:
      - "Can't measure value"
      - "Not delivering promised results"
      - "Value decreased over time"
  strategic:
    market_fit:
      - "Wrong audience"
      - "Use case mismatch"
      - "Outgrew the product"
    competitive:
      - "Competitor has feature X"
      - "Switching to competitor"
      - "Industry trend we're missing"
For every piece of feedback, score:
| Dimension | Scale | Guide |
|-----------|-------|-------|
| Sentiment | -2 to +2 | -2=angry, -1=frustrated, 0=neutral, +1=positive, +2=delighted |
| Urgency | 1-5 | 1=nice-to-have, 3=important, 5=blocking/churning |
| Frequency | Count | How many unique customers mention this |
| Revenue at risk | $ | ARR of customers mentioning this theme |
| Effort to fix | S/M/L/XL | Engineering estimate |
When processing feedback, apply tags automatically:
IF contains("cancel", "churn", "leaving", "switching") → tag: churn_risk, urgency: 5
IF contains("bug", "error", "broken", "crash") → tag: bug_report, urgency: 4
IF contains("wish", "would be nice", "if only") → tag: feature_request, urgency: 2
IF contains("love", "amazing", "best") → tag: positive_signal, sentiment: +2
IF contains("competitor", "alternative", "other tool") → tag: competitive_intel
IF contains("price", "cost", "expensive", "cheaper") → tag: pricing_feedback
IF contains("confusing", "hard to", "can't figure") → tag: usability_issue, urgency: 3
Question: "How likely are you to recommend [product] to a colleague? (0-10)"
Scoring: NPS = % Promoters (scores 9-10) minus % Detractors (scores 0-6). Passives (7-8) count toward the total but not the score; the result ranges from -100 to +100.
Benchmarks by industry:
| Industry | Poor | Average | Good | Excellent |
|----------|------|---------|------|-----------|
| SaaS B2B | <20 | 20-40 | 40-60 | 60+ |
| SaaS B2C | <10 | 10-30 | 30-50 | 50+ |
| E-commerce | <20 | 20-40 | 40-60 | 60+ |
| Financial Services | <10 | 10-30 | 30-50 | 50+ |
| Healthcare Tech | <15 | 15-35 | 35-55 | 55+ |
NPS follow-up questions (by score):
NPS action rules:
Question: "How satisfied were you with [specific interaction]? (1-5)" Score: (Satisfied responses [4-5] / Total responses) × 100
Benchmarks: 75% = decent, 85% = good, 90%+ = excellent
When to use: After specific interactions (support, onboarding, feature launch)
Question: "How easy was it to [specific task]? (1-7, 1=very difficult, 7=very easy)" Score: Average of all responses
Benchmarks: <4 = high effort (problem), 4-5 = acceptable, 5+ = low effort (good)
Why CES matters: CES is the strongest predictor of future purchasing behavior. High-effort experiences drive 96% of disloyalty.
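Minimal TypeScript sketches of the three formulas as described above (function names are illustrative):

// NPS: % promoters (9-10) minus % detractors (0-6), on a -100..100 scale.
function nps(scores: number[]): number {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// CSAT: share of satisfied (4-5 star) responses, as a percentage.
function csat(stars: number[]): number {
  return Math.round((stars.filter((s) => s >= 4).length / stars.length) * 100);
}

// CES: plain average of 1-7 ease ratings; below 4 signals a high-effort experience.
function ces(ratings: number[]): number {
  return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
}

// nps([10, 9, 8, 6, 3]) → 0  (2 promoters, 2 detractors, 5 responses)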
Combine metrics for overall VoC health:
voc_health_score:
  nps:
    weight: 30
    score: "[0-100 normalized: (NPS + 100) / 2]"
  csat:
    weight: 20
    score: "[CSAT %]"
  ces:
    weight: 15
    score: "[(CES / 7) × 100]"
  feedback_volume:
    weight: 10
    score: "[trend: increasing = good]"
  response_rate:
    weight: 10
    score: "[survey response rate %]"
  closed_loop_rate:
    weight: 15
    score: "[% of feedback items with documented response/action]"
  total: "[weighted sum / 100]"
  grade: "[A: 80+, B: 65-79, C: 50-64, D: 35-49, F: <35]"
Score each feature request:
| Factor | Formula | Weight |
|--------|---------|--------|
| Reach | # customers requesting (or affected) per quarter | 25% |
| Impact | Score 0.5 (minimal), 1 (low), 2 (medium), 3 (high) | 25% |
| Confidence | % confidence in estimates (50-100%) | 20% |
| Effort | Person-weeks to build | 15% |
| Revenue Signal | ARR at risk or expansion ARR | 15% |
RICE-V Score = (Reach × Impact × Confidence × Revenue Signal) / Effort
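As a one-line function (units follow the factor table above; the absolute value only matters for ranking requests against each other):

// RICE-V: reach in customers/quarter, impact 0.5-3, confidence 0.5-1.0,
// revenueSignal in ARR dollars, effort in person-weeks.
function riceV(reach: number, impact: number, confidence: number,
               revenueSignal: number, effort: number): number {
  return (reach * impact * confidence * revenueSignal) / effort;
}

// riceV(40, 2, 0.8, 120_000, 6) → 1_280_000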
1. Is this blocking revenue? (churn risk from top accounts)
   → YES: Fast-track (Sprint 1)
   → NO: Continue ↓
2. Does it affect >20% of customers?
   → YES: High priority (next quarter)
   → NO: Continue ↓
3. Is it from a strategic segment we're targeting?
   → YES: Medium-high priority
   → NO: Continue ↓
4. Does it align with product vision?
   → YES: Backlog with planned quarter
   → NO: Decline with explanation (close the loop!)
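The same tree as a small TypeScript function (the FeatureRequest shape is an illustrative assumption):

interface FeatureRequest {
  blocksRevenue: boolean;          // churn risk from top accounts
  affectedCustomerShare: number;   // 0-1
  fromStrategicSegment: boolean;
  alignsWithVision: boolean;
}

// Walk the four questions in order; first YES wins.
function triage(req: FeatureRequest): string {
  if (req.blocksRevenue) return "fast-track (Sprint 1)";
  if (req.affectedCustomerShare > 0.2) return "high priority (next quarter)";
  if (req.fromStrategicSegment) return "medium-high priority";
  if (req.alignsWithVision) return "backlog with planned quarter";
  return "decline with explanation (close the loop)";
}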
Not every feature request should be built. Decline gracefully:
Template: "Thanks for this suggestion! We've heard this from [X] customers. Right now, our roadmap is focused on [theme] because [reason]. We're not planning to build this in the next [timeframe], but we've logged it. Here's what we ARE building that might help: [alternative]. We'll update this request if our plans change."
When to say no:
COLLECT → ANALYZE → ACT → COMMUNICATE
   ↑                           |
   └───────────────────────────┘
Acknowledgment (within 24 hours): "Thanks for sharing this feedback. We've logged it as [category] and it's being reviewed by our [product/engineering] team. We'll update you when we have a plan."
In Progress: "Quick update on your feedback about [topic] — we're actively working on this. Expected [timeframe]. Here's what's changing: [brief description]."
Shipped: "Remember when you told us about [original feedback]? We fixed it! Here's what changed: [specific improvement]. Try it out and let us know what you think."
Declined (see "Saying No" above)
| Type | Acknowledge | Triage | Resolve/Respond |
|------|------------|--------|-----------------|
| Bug report | 4 hours | 24 hours | Per severity SLA |
| Feature request | 48 hours | 1 week | Quarterly review |
| Complaint | 4 hours | 24 hours | 72 hours |
| Praise | 24 hours | N/A | Share internally |
| Churn feedback | 24 hours | 48 hours | 1 week (win-back) |
| NPS detractor | 24 hours | 48 hours | 1 week |
feedback_item:
  id: "FB-[YYYY]-[####]"
  source: "[channel]"
  customer: "[name, segment, ARR]"
  date_received: "[YYYY-MM-DD]"
  category: "[from taxonomy]"
  sentiment: "[-2 to +2]"
  urgency: "[1-5]"
  status: "[New | Acknowledged | Triaging | In Progress | Shipped | Declined | Won't Fix]"
  assigned_to: "[team/person]"
  date_acknowledged: "[YYYY-MM-DD]"
  date_resolved: "[YYYY-MM-DD]"
  resolution: "[what was done]"
  customer_notified: "[yes/no + date]"
  customer_satisfied: "[yes/no/unknown]"
  linked_items:
    jira: "[ticket ID]"
    roadmap: "[feature/initiative]"
    other_feedback: "[related FB IDs]"
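An illustrative TypeScript shape for the same record (field names mirror the YAML; the union types are one interpretation of the bracketed ranges):

interface FeedbackItem {
  id: string;                      // "FB-YYYY-####"
  source: string;                  // channel
  customer: { name: string; segment: string; arr: number };
  dateReceived: string;            // YYYY-MM-DD
  category: string;                // from taxonomy
  sentiment: -2 | -1 | 0 | 1 | 2;
  urgency: 1 | 2 | 3 | 4 | 5;
  status: "New" | "Acknowledged" | "Triaging" | "In Progress" | "Shipped" | "Declined" | "Won't Fix";
  assignedTo?: string;
  dateAcknowledged?: string;
  dateResolved?: string;
  resolution?: string;
  customerNotified?: { notified: boolean; date?: string };
  customerSatisfied?: boolean;     // unknown when absent
  linkedItems?: { jira?: string; roadmap?: string; otherFeedback?: string[] };
}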
weekly_voc:
  period: "[week of YYYY-MM-DD]"
  volume:
    total_feedback: "[count]"
    by_channel:
      support: "[count]"
      in_app: "[count]"
      surveys: "[count]"
      interviews: "[count]"
    trend: "[↑/↓/→ vs last week]"
  metrics:
    nps: "[score] ([↑/↓] [X] from last period)"
    csat: "[%] ([↑/↓] [X]%)"
    ces: "[score] ([↑/↓])"
  top_themes:
    - theme: "[#1 theme]"
      mentions: "[count]"
      sentiment: "[avg]"
      revenue_at_risk: "[$]"
      action: "[what we're doing]"
    - theme: "[#2]"
      # ...
    - theme: "[#3]"
      # ...
  alerts:
    - "[Enterprise detractor: Company X, NPS dropped from 8 to 3]"
    - "[New theme emerging: Y mentioned by 5 customers this week]"
  closed_loop:
    items_received: "[count]"
    items_acknowledged: "[count] ([%])"
    items_resolved: "[count]"
    avg_resolution_time: "[days]"
  wins:
    - "[Positive feedback / testimonial / case study lead]"
Structure:
Go deeper quarterly:
When making product decisions, rank evidence:
Rule: Never build based on Level 6-7 alone. Always triangulate with Level 1-3.
product_decision:
  what: "[Feature/change being considered]"
  voc_evidence:
    quantitative:
      customers_requesting: "[count]"
      arr_represented: "[$]"
      nps_impact: "[detractors mentioning this]"
      support_tickets: "[count related]"
    qualitative:
      interview_quotes:
        - "[quote]" — [segment]
      themes_connected: "[from taxonomy]"
    behavioral:
      usage_data: "[relevant metrics]"
      churn_correlation: "[if applicable]"
  decision: "[Build / Decline / Investigate further]"
  confidence: "[High / Medium / Low]"
  expected_impact:
    nps_change: "[estimated]"
    churn_reduction: "[estimated]"
    expansion_revenue: "[estimated]"
  success_metrics:
    - "[How we'll know this worked]"
  review_date: "[When to check impact]"
Structure:
Selection criteria:
When customers mention competitors:
competitive_mention:
  competitor: "[name]"
  context: "[switching to / comparing / switched from]"
  feature_gap: "[what competitor has that we don't]"
  our_advantage: "[what they like about us vs competitor]"
  customer_segment: "[type]"
  deal_size: "[$]"
  action: "[product response needed?]"
Track quarterly: "Competitor mention frequency" — rising mentions of a specific competitor = early warning.
Customer feedback reveals pricing issues:
| Signal | Meaning | Action |
|--------|---------|--------|
| "Too expensive" with no specifics | Perception problem, not price problem | Improve value communication |
| "Feature X should be in my plan" | Packaging issue | Review tier boundaries |
| "Competitor is cheaper" | Price positioning | Competitive analysis |
| "I'd pay more for Y" | Expansion opportunity | Test premium tier/add-on |
| "Not sure what I'm paying for" | Value clarity gap | Improve billing page, add ROI dashboard |
If you have multiple products:
Simplified system when you have <50 customers:
Rate your VoC program (0-100):
| Dimension | Weight | 0-25 | 50 | 75 | 100 |
|-----------|--------|------|----|----|-----|
| Collection coverage | 15% | 1 channel | 3 channels | 5+ channels | Full journey coverage |
| Analysis depth | 15% | Read comments | Categorize | Quantified themes | Predictive insights |
| Response time | 10% | Weeks | Days | Hours | Real-time SLA met |
| Closed-loop rate | 15% | <20% | 40-60% | 60-80% | >80% with satisfaction check |
| Decision influence | 15% | Ignored | Occasional input | Regular roadmap input | Systematic decision driver |
| Metric tracking | 10% | None | NPS exists | NPS + CSAT + CES | Full suite + benchmarks |
| Segmentation | 10% | None | By plan tier | Multi-dimensional | Persona-level insights |
| ROI measurement | 10% | None | Anecdotal | Feature-level impact | Revenue-attributed |
Use these to interact:
Built by AfrexAI — Turning customer voices into revenue decisions.
Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/trust"
Operational fit
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "CLAWHUB",
"generatedAt": "2026-04-17T00:15:15.676Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "team",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "resolved",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "experience",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "interaction",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "csat",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "tickets",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:team|supported|profile capability:resolved|supported|profile capability:experience|supported|profile capability:interaction|supported|profile capability:csat|supported|profile capability:tickets|supported|profile"
]
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Openclaw",
"href": "https://github.com/openclaw/skills/tree/main/skills/1kalin/afrexai-voc-engine",
"sourceUrl": "https://github.com/openclaw/skills/tree/main/skills/1kalin/afrexai-voc-engine",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-1kalin-afrexai-voc-engine/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]