Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Xpersona Agent
AI-orchestrated usability testing using Amazon Nova Act (skill: nova-act-usability, version 1.0.2). The agent generates personas, runs tests to collect raw data, interprets responses to determine goal achievement, and generates HTML reports. It tests real user workflows (booking, checkout, posting) with safety guardrails. Use when asked to "test website usability", "run usability test", "generate usability report", "evaluate user experience", "test checkout flow", "test booking process", or "analyze website UX".
clawhub skill install skills:adityak6798:website-usability-test-nova-act
Overall rank
#62
Adoption
No public adoption signal
Trust
Unknown
Freshness
Last checked Feb 25, 2026
Best For
nova-act-usability is best for analyze, adapt, and ask workflows where OpenClaw compatibility matters.
Not Ideal For
Deterministic execution pipelines, since contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, CLAWHUB, runtime-metrics, public facts pack
Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.
Overview
AI-orchestrated usability testing using Amazon Nova Act (skill: nova-act-usability, version 1.0.2). The agent generates personas, runs tests to collect raw data, interprets responses to determine goal achievement, and generates HTML reports. It tests real user workflows (booking, checkout, posting) with safety guardrails. Use when asked to "test website usability", "run usability test", "generate usability report", "evaluate user experience", "test checkout flow", "test booking process", or "analyze website UX". Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 25, 2026
Vendor
Openclaw
Artifacts
0
Benchmarks
0
Last release
Unpublished
Install & run
clawhub skill install skills:adityak6798:website-usability-test-nova-act
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.
Public facts
Vendor
Openclaw
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.
Captured outputs
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
python
import subprocess
import os
import sys
import json
import tempfile
# Step 1: Check dependencies
try:
import nova_act
print("✅ Dependencies ready")
except ImportError:
print("📦 Dependencies not installed. Please run:")
print(" pip3 install nova-act pydantic playwright")
print(" playwright install chromium")
sys.exit(1)
# Step 2: Verify Nova Act API key
config_file = os.path.expanduser("~/.openclaw/config/nova-act.json")
with open(config_file, 'r') as f:
config = json.load(f)
if config.get('apiKey') == 'your-nova-act-api-key-here':
print(f"⚠️ Please add your Nova Act API key to {config_file}")
sys.exit(1)
# Step 3: YOU (the AI agent) generate personas
# Example for https://www.pgatour.com/ (golf tournament site)
website_url = "https://www.pgatour.com/"
personas = [
{
"name": "Marcus Chen",
"archetype": "tournament_follower",
"age": 42,
"tech_proficiency": "high",
"description": "Avid golf fan who follows multiple tours and tracks player stats",
"goals": [
"Check current tournament leaderboard",
"View recent tournament results",
"Track favorite player performance"
]
},
{
"name": "Dorothy Williams",
"archetype": "casual_viewer",
"age": 68,
"tech_proficiency": "low",
"description": "Occasional golf viewer who watches major tournaments",
"goals": [
"Find when the next tournament is",
"See who won recently",
"Understand how to watch online"
]
}
]
# Step 4: Save personas and run test
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
json.dump(personas, f, indent=2)
personas_file = f.name
skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
test_script = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")
# Run with AI-generated personas
subprocess.run([sys.executable, test_script, website_url, personas_file])
# Clean up temp file
os.unlink(personas_file)
Persona Template:
{
"name": "FirstName LastName",
"archetype": "descriptive_identifier",
"age": 30,
"tech_proficiency": "low|medium|high",
"description": "One sentence about who they are",
"goals": [
"First goal relevant to this website",
"Second goal relevant to this website",
"Third goal relevant to this website"
]
}
If user specifies a persona description, pass it as a string:
# User: "Test PGA Tour site as a golf enthusiast"
website_url = "https://www.pgatour.com/"
user_persona = "golf enthusiast who follows tournaments closely"
subprocess.run([sys.executable, test_script, website_url, user_persona])
# Script will parse this and create personas automatically
Let the script guess personas based on basic category keywords:
# Generic, less contextual personas
subprocess.run([sys.executable, test_script, website_url])
{
"step_name": "check_nav_for_pricing",
"prompt": "Is there a pricing link in the navigation?",
"expected_outcome": "Find pricing in navigation",
"raw_response": "No",
"api_success": true,
"needs_agent_analysis": true,
"attempts": [
{
"prompt": "Is there a pricing link in the navigation?",
"response": "No",
"approach": "original"
}
]
}
Analysis example:
Step 1: "Is there a pricing link?"
→ Response: "No" (1 attempt)
→ Goal achieved: NO (explicit negative)
→ Friction: HIGH (not discoverable)
Step 2: "What is the headline?"
→ Response: "Amazon Nova Act" (1 attempt)
→ Goal achieved: YES (actual content)
→ Friction: LOW (immediately visible)
Step 3: "Find documentation"
→ Response: "I found a docs link in the footer" (3 attempts)
→ Goal achieved: YES (found eventually)
→ Friction: MEDIUM (required multiple approaches)
Editorial read
Docs source
CLAWHUB
Editorial quality
ready
AI-orchestrated usability testing with digital twin personas powered by Amazon Nova Act.
This skill requires an Amazon Nova Act API key.
| Requirement | Details |
|-------------|---------|
| API Key | Nova Act API key from AWS Console |
| Config Location | ~/.openclaw/config/nova-act.json |
| Format | {"apiKey": "your-nova-act-api-key-here"} |
| Dependencies | pip3 install nova-act pydantic playwright |
| Browser | playwright install chromium (~300MB download) |
What this skill accesses:
- ~/.openclaw/config/nova-act.json (your API key)
- ./nova_act_logs/ (trace files with screenshots)
- ./test_results_adaptive.json
- ./nova_act_usability_report.html
What trace files contain:
Recommendations:
Agent-Driven Interpretation: The script no longer interprets responses. YOU (the agent) must:
1. Read each step's raw_response
2. Set goal_achieved and overall_success
No hardcoded regex. No extra API calls. The agent doing the work is already running.
When a user asks to test a website, YOU (the AI agent) must complete ALL 4 phases:
| Phase | What Happens | Who Does It |
|-------|--------------|-------------|
| 1. Setup | Generate personas, run test script | Agent + Script |
| 2. Collect | Script captures raw Nova Act responses | Script |
| 3. Interpret | Read JSON, determine goal_achieved for each step | Agent |
| 4. Report | Generate HTML report with interpreted results | Agent |
⚠️ The script does NOT interpret responses or generate the final report. You must do phases 3-4.
You're already an AI (Claude) - use your intelligence to generate contextual personas!
import subprocess
import os
import sys
import json
import tempfile
# Step 1: Check dependencies
try:
import nova_act
print("✅ Dependencies ready")
except ImportError:
print("📦 Dependencies not installed. Please run:")
print(" pip3 install nova-act pydantic playwright")
print(" playwright install chromium")
sys.exit(1)
# Step 2: Verify Nova Act API key
config_file = os.path.expanduser("~/.openclaw/config/nova-act.json")
with open(config_file, 'r') as f:
config = json.load(f)
if config.get('apiKey') == 'your-nova-act-api-key-here':
print(f"⚠️ Please add your Nova Act API key to {config_file}")
sys.exit(1)
# Step 3: YOU (the AI agent) generate personas
# Example for https://www.pgatour.com/ (golf tournament site)
website_url = "https://www.pgatour.com/"
personas = [
{
"name": "Marcus Chen",
"archetype": "tournament_follower",
"age": 42,
"tech_proficiency": "high",
"description": "Avid golf fan who follows multiple tours and tracks player stats",
"goals": [
"Check current tournament leaderboard",
"View recent tournament results",
"Track favorite player performance"
]
},
{
"name": "Dorothy Williams",
"archetype": "casual_viewer",
"age": 68,
"tech_proficiency": "low",
"description": "Occasional golf viewer who watches major tournaments",
"goals": [
"Find when the next tournament is",
"See who won recently",
"Understand how to watch online"
]
}
]
# Step 4: Save personas and run test
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
json.dump(personas, f, indent=2)
personas_file = f.name
skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
test_script = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")
# Run with AI-generated personas
subprocess.run([sys.executable, test_script, website_url, personas_file])
# Clean up temp file
os.unlink(personas_file)
Persona Template:
{
"name": "FirstName LastName",
"archetype": "descriptive_identifier",
"age": 30,
"tech_proficiency": "low|medium|high",
"description": "One sentence about who they are",
"goals": [
"First goal relevant to this website",
"Second goal relevant to this website",
"Third goal relevant to this website"
]
}
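Before saving personas and launching a run, it can help to check each dict against the template above. The following validator is a hypothetical sketch, not part of the skill; `validate_persona` and `REQUIRED_FIELDS` are names introduced here for illustration.

```python
# Hypothetical helper: checks a persona dict against the template above.
# Not shipped with the skill - a sketch the agent could run before saving personas.
REQUIRED_FIELDS = {
    "name": str,
    "archetype": str,
    "age": int,
    "tech_proficiency": str,
    "description": str,
    "goals": list,
}

def validate_persona(persona: dict) -> list:
    """Return a list of problems; an empty list means the persona looks valid."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in persona:
            problems.append(f"missing field: {field}")
        elif not isinstance(persona[field], expected_type):
            problems.append(f"wrong type for {field}")
    if persona.get("tech_proficiency") not in ("low", "medium", "high"):
        problems.append("tech_proficiency must be low|medium|high")
    if not persona.get("goals"):
        problems.append("at least one goal is required")
    return problems
```

Running this over the generated list before writing the temp file catches typos (e.g. a string age) that would otherwise surface mid-test.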
If user specifies a persona description, pass it as a string:
# User: "Test PGA Tour site as a golf enthusiast"
website_url = "https://www.pgatour.com/"
user_persona = "golf enthusiast who follows tournaments closely"
subprocess.run([sys.executable, test_script, website_url, user_persona])
# Script will parse this and create personas automatically
Let the script guess personas based on basic category keywords:
# Generic, less contextual personas
subprocess.run([sys.executable, test_script, website_url])
✅ Advantages:
❌ What to avoid:
Analyze the website:
.gov → citizens, .edu → students/faculty
Create diverse personas:
Generate realistic goals:
Examples by industry:
Users can trigger this skill by saying:
The AI will automatically:
NEW in this version: The skill now tests complete user journeys, not just information-finding!
E-Commerce:
Flight/Hotel Booking:
Social Media:
Account Signup:
Form Submission:
The skill will NEVER:
The skill will ALWAYS:
You (the AI agent) must analyze test results! The script collects raw responses but does NOT interpret them.
The script returns raw Nova Act responses like:
- "No" - Is there a pricing link?
- "I don't see any documentation" - Is there docs?
- "Amazon Nova Act" - What is the headline?
You must determine if each response means the goal was achieved:
| Response | Goal Achieved? |
|----------|----------------|
| "No" | ❌ NOT achieved |
| "I don't see..." | ❌ NOT achieved |
| "Not found" | ❌ NOT achieved |
| "Yes, I found..." | ✅ Achieved |
| "Amazon Nova Act" (content) | ✅ Achieved |
| "The pricing is $29/mo" | ✅ Achieved |
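As a rough illustration of this judgment, the table can be mirrored by a keyword heuristic like the sketch below. Note the skill deliberately leaves interpretation to the agent rather than hardcoding it in the script, so treat this only as a sanity check; `looks_achieved` and the pattern list are assumptions introduced here.

```python
import re

# Illustrative only: the skill expects the agent to judge each response itself.
# These patterns just mirror the table above for quick sanity checks.
NEGATIVE_PATTERNS = (
    r"^no\b",            # bare "No" / "No, ..."
    r"\bnot found\b",
    r"\bi don'?t see\b",
    r"\bunable to\b",
)

def looks_achieved(raw_response: str) -> bool:
    """Rough first-pass reading of a raw Nova Act response."""
    text = raw_response.strip().lower()
    if not text:
        return False
    # Explicit negatives mean the goal was not achieved; anything else
    # (a headline, a price, a "Yes, I found...") counts as content found.
    return not any(re.search(p, text) for p in NEGATIVE_PATTERNS)
```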
After the test script runs, read the JSON results. Each step contains:
{
"step_name": "check_nav_for_pricing",
"prompt": "Is there a pricing link in the navigation?",
"expected_outcome": "Find pricing in navigation",
"raw_response": "No",
"api_success": true,
"needs_agent_analysis": true,
"attempts": [
{
"prompt": "Is there a pricing link in the navigation?",
"response": "No",
"approach": "original"
}
]
}
Key fields you analyze:
- raw_response: The actual Nova Act response - YOU determine what it means
- api_success: Did the API call work? (script handles this)
- needs_agent_analysis: Always true - your cue to interpret
- attempts: All attempts made (script tries up to 3 alternative approaches)
For each step, determine:
- goal_achieved: Did the response indicate success or failure?
- friction_level: How hard was it? (attempts.length > 1 = friction)
- observations: UX insights from the response
Analysis example:
Step 1: "Is there a pricing link?"
→ Response: "No" (1 attempt)
→ Goal achieved: NO (explicit negative)
→ Friction: HIGH (not discoverable)
Step 2: "What is the headline?"
→ Response: "Amazon Nova Act" (1 attempt)
→ Goal achieved: YES (actual content)
→ Friction: LOW (immediately visible)
Step 3: "Find documentation"
→ Response: "I found a docs link in the footer" (3 attempts)
→ Goal achieved: YES (found eventually)
→ Friction: MEDIUM (required multiple approaches)
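The friction scoring in that walkthrough generalizes to a small rule: failure is HIGH friction, success after retries is MEDIUM, first-try success is LOW. The function below is an assumption that summarizes the example, not the skill's shipped code.

```python
# Sketch of the friction scoring used in the analysis example:
# failure = HIGH, success after multiple attempts = MEDIUM, first-try success = LOW.
def friction_level(goal_achieved: bool, attempts: int) -> str:
    if not goal_achieved:
        return "HIGH"    # not discoverable at all
    if attempts > 1:
        return "MEDIUM"  # found eventually, required alternative approaches
    return "LOW"         # immediately visible
```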
The response_interpreter.py provides helpers if you want structured prompts:
from scripts.response_interpreter import (
format_for_agent_analysis,
create_agent_prompt_for_interpretation,
create_agent_prompt_for_alternative
)
# Format all results for analysis
formatted = format_for_agent_analysis(results)
# Get interpretation prompt for one step
prompt = create_agent_prompt_for_interpretation(step_result)
# Get retry prompt when goal not achieved
retry_prompt = create_agent_prompt_for_alternative(
original_prompt="Is there a pricing link?",
failed_response="No",
attempt_number=2
)
The script does NOT generate the final report automatically. You (the agent) must:
1. Read test_results_adaptive.json with raw data
2. Set goal_achieved: true/false based on raw_response
3. Set overall_success: true/false on each test
Step-by-step code for the agent to execute:
import json
import os
import sys
# Add skill scripts to path
sys.path.insert(0, os.path.expanduser("~/.openclaw/skills/nova-act-usability/scripts"))
from enhanced_report_generator import generate_enhanced_report
# 1. Read raw results
with open('test_results_adaptive.json', 'r') as f:
results = json.load(f)
# 2. YOU (the agent) interpret each step
for test in results:
goals_achieved = 0
for step in test.get('steps', []):
raw = step.get('raw_response', '')
# AGENT INTERPRETS: Does this response indicate goal was achieved?
# You decide based on the response content and expected outcome
# Example interpretations:
# "No" → goal_achieved = False
# "Leaderboard, News, Schedule, Players" → goal_achieved = True (content found)
# "Yes" → goal_achieved = True
# "I don't see any..." → goal_achieved = False
step['goal_achieved'] = False # placeholder - YOU set True/False based on your interpretation
if step['goal_achieved']:
goals_achieved += 1
# 3. Set overall success (e.g., >= 50% goals achieved)
total = len(test.get('steps', []))
test['goals_achieved'] = goals_achieved
test['overall_success'] = (goals_achieved / total >= 0.5) if total > 0 else False
# 4. Save interpreted results
with open('test_results_adaptive.json', 'w') as f:
json.dump(results, f, indent=2)
# 5. Generate report with interpreted data
page_analysis = {
'title': '...', # From your earlier analysis
'purpose': '...',
'navigation': [...]
}
all_traces = []
for r in results:
all_traces.extend(r.get('trace_files', []))
report_path = generate_enhanced_report(page_analysis, results, all_traces)
print(f"Report: {report_path}")
Why the agent must interpret:
Nova Act is a browser automation tool, NOT a reasoning engine.
The Claude agent (you) does all reasoning about:
Nova Act just:
# DON'T ask Nova Act to think about personas
nova.act("As a beginner user, can you easily find the documentation?")
nova.act("Would a business professional find the pricing clear?")
nova.act("Is this task accomplishable for someone with low technical skills?")
# Simple browser actions
nova.act("Click the Documentation link in the navigation")
nova.act("Find and click a link containing 'Pricing'")
nova.act_get("What text is displayed in the main heading?")
nova.act_get("List the navigation menu items visible on this page")
You (the AI) are the orchestrator. This skill provides:
- references/nova-act-cookbook.md - Best practices, workflow patterns, and safety guidelines (automatically loaded at test start)
- run_adaptive_test.py - Main execution script with workflow detection
- scripts/dynamic_exploration.py - Generates workflow-appropriate test strategies
- scripts/nova_session.py - Nova Act wrapper
- enhanced_report_generator.py - Auto-generated HTML reports
Execution Flow:
Before running ANY test, check if dependencies are installed:
# Check if nova-act is installed
python3 -c "import nova_act" 2>/dev/null
if [ $? -ne 0 ]; then
echo "Dependencies not installed. Please run:"
echo " pip3 install nova-act pydantic playwright"
echo " playwright install chromium"
exit 1
fi
# Check API key
if ! grep -q '"apiKey":.*[^"]' ~/.openclaw/config/nova-act.json; then
echo "⚠️ Please add your Nova Act API key to ~/.openclaw/config/nova-act.json"
exit 1
fi
Or use Python to check:
import sys
# Check if nova-act is installed
try:
import nova_act
print("✅ Dependencies already installed")
except ImportError:
print("📦 Dependencies not installed. Please run:")
print(" pip3 install nova-act pydantic playwright")
print(" playwright install chromium")
sys.exit(1)
When a user asks for usability testing:
# Find the skill directory
SKILL_DIR=~/.openclaw/skills/nova-act-usability
# Run the adaptive test script
python3 "$SKILL_DIR/scripts/run_adaptive_test.py" "https://example.com"
# This will:
# - Create nova_act_logs/ in current directory
# - Create test_results_adaptive.json in current directory
# - Create nova_act_usability_report.html in current directory
# - Provide 60-second status updates during test
Recommended timeout: 30 minutes (1800 seconds)
Full usability tests with 3 personas × 3 goals = 9 tests can take 10-20+ minutes depending on:
Graceful shutdown: If the test is interrupted (timeout, SIGTERM, SIGINT), it will save partial results to test_results_adaptive.json.
For shorter tests: Use fewer personas or goals:
# Quick test with 1 persona
personas = [{"name": "Test User", "archetype": "casual", ...}]
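To size a run before starting it, the 9-test / 10-20 minute figure above works out to very roughly 1-2 minutes per persona-goal pair. The estimator below encodes that assumption; `estimate_runtime_minutes` is illustrative, not part of the skill.

```python
# Back-of-envelope planner. Assumption: roughly 1-2 minutes per persona/goal
# test, consistent with the "9 tests take 10-20+ minutes" figure above.
def estimate_runtime_minutes(num_personas: int, goals_per_persona: int) -> tuple:
    """Return (optimistic, pessimistic) total minutes for a full run."""
    tests = num_personas * goals_per_persona
    return (tests * 1, tests * 2)
```

If the estimate exceeds the configured timeout (30 minutes), trim personas or goals before launching.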
pip3 install nova-act pydantic playwright && playwright install chromium
When user requests usability testing:
import subprocess
import os
# Get skill directory
skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
if not os.path.exists(skill_dir):
# Try workspace location
skill_dir = os.path.join(os.getcwd(), "nova-act-usability")
script_path = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")
# Run test
result = subprocess.run(
["python3", script_path, "https://example.com"],
env={**os.environ, "NOVA_ACT_SKIP_PLAYWRIGHT_INSTALL": "1"},
capture_output=True,
text=True
)
print(result.stdout)
The adaptive test script (run_adaptive_test.py) handles:
For each persona + task combination:
from scripts.nova_session import nova_session
from nova_act import BOOL_SCHEMA
import time
observations = []
with nova_session(website_url) as nova:
start_time = time.time()
# Initial navigation
observations.append({
"step": "navigate",
"action": f"Loaded {website_url}",
"success": True,
"notes": "Initial page load"
})
# Execute task step-by-step (AI-orchestrated)
# Break into small act() calls based on cookbook guidance
# Example: "Find pricing information" task
# Step 1: Look for pricing link
nova.act("Look for a link or button for pricing, plans, or subscription")
found = nova.act_get(
"Is there a visible pricing or plans link?",
schema=BOOL_SCHEMA
)
observations.append({
"step": "find_pricing_link",
"action": "Search for pricing navigation",
"success": found.parsed_response,
"notes": "Easy to find" if found.parsed_response else "Not immediately visible - UX friction"
})
if found.parsed_response:
# Step 2: Navigate to pricing
nova.act("Click on the pricing or plans link")
# Step 3: Analyze pricing page
is_clear = nova.act_get(
"Is the pricing information clearly displayed with prices and features?",
schema=BOOL_SCHEMA
)
observations.append({
"step": "view_pricing",
"action": "Accessed pricing page",
"success": is_clear.parsed_response,
"notes": "Clear pricing display" if is_clear.parsed_response else "Pricing unclear or confusing"
})
else:
# Alternative path - try search
nova.act("Look for a search function")
# ... continue orchestrating
duration = time.time() - start_time
# Document overall task result
task_success = all(obs["success"] for obs in observations if obs["success"] is not None)
results.append({
"persona": persona_name,
"task": task_description,
"success": task_success,
"duration": duration,
"observations": observations,
"friction_points": [obs for obs in observations if not obs.get("success")]
})
After all tests:
import json
from scripts.enhanced_report_generator import generate_enhanced_report
# Save results
with open("test_results_adaptive.json", "w") as f:
json.dump(results, f, indent=2)
# Generate HTML report
report_path = generate_enhanced_report(
page_analysis=page_analysis,
results=test_results
)
print(f"Report: {report_path}")
The AI should decide how to break down each task based on:
Low-tech persona example:
# More explicit, step-by-step
nova.act("Look for a button labeled 'Contact' or 'Contact Us'")
nova.act("Click on the Contact button")
result = nova.act_get("Is there a phone number or email address visible?")
High-tech persona example:
# Test efficiency features
nova.act("Look for keyboard shortcuts or quick access features")
nova.act("Try to use search (Ctrl+K or Cmd+K)")
After EVERY act() call, analyze:
Document friction immediately in observations.
Adapt act() prompts to persona characteristics:
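For instance, the two persona examples above can be generalized into a prompt shaper. This is a hypothetical sketch (`adapt_prompt` is not shipped with the skill): low-tech personas get explicit, label-driven steps, high-tech personas get power-user probes.

```python
# Hypothetical sketch generalizing the low-tech / high-tech examples above.
def adapt_prompt(goal: str, tech_proficiency: str) -> str:
    if tech_proficiency == "low":
        # Spell out visible labels and one explicit action at a time.
        return f"Look for a clearly labeled button or link to {goal}, then click it"
    if tech_proficiency == "high":
        # Probe efficiency features and power-user paths first.
        return f"Try search or keyboard shortcuts (Ctrl+K or Cmd+K) to {goal}"
    return f"Find a way to {goal}"
```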
references/nova-act-cookbook.md
MUST READ before starting any test. Contains best practices for:
references/persona-examples.md
Template personas with detailed profiles:
scripts/nova_session.py
Thin wrapper providing Nova Act session primitive:
with nova_session(url, headless=True, logs_dir="./logs") as nova:
nova.act("action")
result = nova.act_get("query", schema=Schema)
scripts/enhanced_report_generator.py
Compiles observations into HTML usability report with trace file links.
assets/report-template.html
Professional HTML template for usability reports.
This skill requires dependencies that must be installed before use.
ALWAYS check if dependencies are installed before running tests:
# Quick dependency check
try:
import nova_act
print("✅ Dependencies installed")
except ImportError:
print("📦 Dependencies not installed. Please run:")
print(" pip3 install nova-act pydantic playwright")
print(" playwright install chromium")
print("")
print("This will take 2-3 minutes to download browsers (~300MB)")
Step 1: Install Python packages
pip3 install nova-act pydantic playwright
Step 2: Install Playwright browser
playwright install chromium
Step 3: Configure API key
mkdir -p ~/.openclaw/config
echo '{"apiKey": "your-key-here"}' > ~/.openclaw/config/nova-act.json
Replace your-key-here with your actual Nova Act API key.
User request: "Test example.com for elderly users"
AI orchestration:
- references/nova-act-cookbook.md
- references/persona-examples.md
The AI decides every step. The skill just provides tools and guidance.
nova-act-usability/
├── SKILL.md                         # This file
├── README.md                        # User documentation
├── skill.json                       # Skill manifest
├── scripts/
│   ├── run_adaptive_test.py         # Main orchestrator (accepts URL arg)
│   ├── nova_session.py              # Session wrapper
│   ├── enhanced_report_generator.py # HTML report generator
│   └── trace_finder.py              # Extract trace file paths
├── references/
│   ├── nova-act-cookbook.md         # Best practices
│   └── persona-examples.md          # Template personas
└── assets/
    └── report-template.html         # HTML template
When you run a test, these files are created in your current working directory:
./
├── nova_act_logs/                   # Nova Act trace files
│   ├── act_<id>_output.html         # Session recordings
│   └── ...
├── test_results_adaptive.json       # Raw test results
└── nova_act_usability_report.html   # Final report
All paths are relative - works from any installation location!
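The skill ships scripts/trace_finder.py for extracting trace file paths; as an assumption about what such a collector does (the real logic is not reproduced here), gathering the session recordings from the output layout above could look like:

```python
import glob
import os

# Assumed behavior of a trace collector; the skill's actual implementation
# lives in scripts/trace_finder.py and may differ.
def find_trace_files(logs_dir: str = "./nova_act_logs") -> list:
    """Return act_<id>_output.html session recordings, newest first."""
    pattern = os.path.join(logs_dir, "act_*_output.html")
    return sorted(glob.glob(pattern), key=os.path.getmtime, reverse=True)
```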
Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.
Machine interfaces
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/trust"
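When calling these endpoints programmatically, the invocation guide's published retry policy (3 attempts, 500/1500/3500 ms backoff, retrying on HTTP 429, HTTP 503, and network timeouts) can be sketched as below; `next_delay_ms` is an illustrative helper, not an official client.

```python
from typing import Optional

# Values taken from the invocation guide's retryPolicy payload.
BACKOFF_MS = [500, 1500, 3500]
RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}

def next_delay_ms(attempt: int, condition: str) -> Optional[int]:
    """Delay before retrying `attempt` (1-based), or None to give up."""
    if condition not in RETRYABLE or attempt > len(BACKOFF_MS):
        return None
    return BACKOFF_MS[attempt - 1]
```

A caller would sleep for the returned delay between attempts and stop as soon as None comes back or a non-retryable condition occurs.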
Operational fit
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "CLAWHUB",
"generatedAt": "2026-04-17T00:06:19.591Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "analyze",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "adapt",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "ask",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "trigger",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "reason",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "you",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "take",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:analyze|supported|profile capability:adapt|supported|profile capability:ask|supported|profile capability:trigger|supported|profile capability:reason|supported|profile capability:you|supported|profile capability:take|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Openclaw",
"href": "https://github.com/openclaw/skills/tree/main/skills/adityak6798/website-usability-test-nova-act",
"sourceUrl": "https://github.com/openclaw/skills/tree/main/skills/adityak6798/website-usability-test-nova-act",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T00:45:39.800Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-adityak6798-website-usability-test-nova-act/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub ยท GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]