Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
AI-orchestrated usability testing using Amazon Nova Act. The agent generates personas, runs tests to collect raw data, interprets responses to determine goal achievement, and generates HTML reports. Tests real user workflows (booking, checkout, posting) with safety guardrails. Use when asked to "test website usability", "run usability test", "generate usability report", "evaluate user experience", "test checkout flow", "test booking process", or "analyze website UX". (Skill manifest: name nova-act-usability, version 2.0.0.) Published capability contract available. No trust telemetry is available yet. 3 GitHub stars reported by the source. Last updated 2/24/2026.
Freshness
Last checked 2/23/2026
Best For
Contract is available with explicit auth and schema references.
Not Ideal For
nova-act-usability is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.
Evidence Sources Checked
editorial-content, capability-contract, runtime-metrics, public facts pack
Public facts
7
Change events
1
Artifacts
0
Freshness
Feb 23, 2026
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 23, 2026
Vendor
Adityak6798
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Published capability contract available. No trust telemetry is available yet. 3 GitHub stars reported by the source. Last updated 2/24/2026.
Setup snapshot
git clone https://github.com/adityak6798/OpenClaw_NovaActWebsiteUsabilityTest_Skill.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Adityak6798
Protocol compatibility
OpenClaw
Auth modes
api_key
Machine-readable schemas
OpenAPI or schema references published
Adoption signal
3 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
python
import subprocess
import os
import sys
import json
import tempfile

# Step 1: Check dependencies
try:
    import nova_act
    print("✅ Dependencies ready")
except ImportError:
    print("📦 Installing dependencies (one-time setup, ~3 minutes)...")
    skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
    result = subprocess.run(["./setup.sh"], cwd=skill_dir, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"❌ Setup failed. Run manually: cd {skill_dir} && ./setup.sh")
        sys.exit(1)

# Step 2: Verify Nova Act API key
config_file = os.path.expanduser("~/.openclaw/config/nova-act.json")
with open(config_file, 'r') as f:
    config = json.load(f)
if config.get('apiKey') == 'your-nova-act-api-key-here':
    print(f"⚠️ Please add your Nova Act API key to {config_file}")
    sys.exit(1)

# Step 3: YOU (the AI agent) generate personas
# Example for https://www.pgatour.com/ (golf tournament site)
website_url = "https://www.pgatour.com/"
personas = [
    {
        "name": "Marcus Chen",
        "archetype": "tournament_follower",
        "age": 42,
        "tech_proficiency": "high",
        "description": "Avid golf fan who follows multiple tours and tracks player stats",
        "goals": [
            "Check current tournament leaderboard",
            "View recent tournament results",
            "Track favorite player performance"
        ]
    },
    {
        "name": "Dorothy Williams",
        "archetype": "casual_viewer",
        "age": 68,
        "tech_proficiency": "low",
        "description": "Occasional golf viewer who watches major tournaments",
        "goals": [
            "Find when the next tournament is",
            "See who won recently",
            "Understand how to watch online"
        ]
    }
]

# Step 4: Save personas and run test
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    json.dump(personas, f, indent=2)
    personas_file = f.name
Persona Template:
{
"name": "FirstName LastName",
"archetype": "descriptive_identifier",
"age": 30,
"tech_proficiency": "low|medium|high",
"description": "One sentence about who they are",
"goals": [
"First goal relevant to this website",
"Second goal relevant to this website",
"Third goal relevant to this website"
]
}
python
# User: "Test PGA Tour site as a golf enthusiast"
website_url = "https://www.pgatour.com/"
user_persona = "golf enthusiast who follows tournaments closely"
subprocess.run([sys.executable, test_script, website_url, user_persona])
# Script will parse this and create personas automatically
python
# Generic, less contextual personas
subprocess.run([sys.executable, test_script, website_url])
json
{
"step_name": "check_nav_for_pricing",
"prompt": "Is there a pricing link in the navigation?",
"expected_outcome": "Find pricing in navigation",
"raw_response": "No",
"api_success": true,
"needs_agent_analysis": true,
"attempts": [
{
"prompt": "Is there a pricing link in the navigation?",
"response": "No",
"approach": "original"
}
]
}
text
Step 1: "Is there a pricing link?"
→ Response: "No" (1 attempt)
→ Goal achieved: NO (explicit negative)
→ Friction: HIGH (not discoverable)

Step 2: "What is the headline?"
→ Response: "Amazon Nova Act" (1 attempt)
→ Goal achieved: YES (actual content)
→ Friction: LOW (immediately visible)

Step 3: "Find documentation"
→ Response: "I found a docs link in the footer" (3 attempts)
→ Goal achieved: YES (found eventually)
→ Friction: MEDIUM (required multiple approaches)
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
name: nova-act-usability
version: 2.0.0
description: AI-orchestrated usability testing using Amazon Nova Act. The agent generates personas, runs tests to collect raw data, interprets responses to determine goal achievement, and generates HTML reports. Tests real user workflows (booking, checkout, posting) with safety guardrails. Use when asked to "test website usability", "run usability test", "generate usability report", "evaluate user experience", "test checkout flow", "test booking process", or "analyze website UX".
AI-orchestrated usability testing with digital twin personas powered by Amazon Nova Act.
Agent-Driven Interpretation: The script no longer interprets responses. YOU (the agent) must:
- Read each step's raw_response
- Determine goal_achieved per step and overall_success per test
No hardcoded regex. No extra API calls. The agent doing the work is already running.
When a user asks to test a website, YOU (the AI agent) must complete ALL 4 phases:
| Phase | What Happens | Who Does It |
|-------|--------------|-------------|
| 1. Setup | Generate personas, run test script | Agent + Script |
| 2. Collect | Script captures raw Nova Act responses | Script |
| 3. Interpret | Read JSON, determine goal_achieved for each step | Agent |
| 4. Report | Generate HTML report with interpreted results | Agent |
⚠️ The script does NOT interpret responses or generate the final report. You must do phases 3-4.
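Phases 3-4 can be sketched as a small post-processing pass over the script's JSON output. Here `interpret` is a hypothetical stand-in for the agent's own reasoning (a real agent judges each response in context rather than string-matching), and the 50% success threshold mirrors the example used elsewhere in this document:

```python
import json

def interpret(raw_response: str) -> bool:
    """Phase 3 placeholder: decide whether a raw Nova Act response
    indicates the goal was achieved. A real agent reasons about the
    full content; this stub only flags explicit negatives."""
    negatives = ("no", "not found", "i don't see", "i do not see")
    return not raw_response.strip().lower().startswith(negatives)

def run_phases_3_and_4(results: list) -> list:
    """Phases 3-4: interpret every step, then mark overall success
    using a >= 50% goals-achieved threshold."""
    for test in results:
        steps = test.get("steps", [])
        for step in steps:
            step["goal_achieved"] = interpret(step.get("raw_response", ""))
        achieved = sum(1 for s in steps if s["goal_achieved"])
        test["goals_achieved"] = achieved
        test["overall_success"] = bool(steps) and achieved / len(steps) >= 0.5
    return results

raw = [{"steps": [{"raw_response": "No"},
                  {"raw_response": "Amazon Nova Act"}]}]
print(json.dumps(run_phases_3_and_4(raw), indent=2))
```

The stub deliberately errs toward "achieved" for anything that is not an explicit negative; the agent is expected to override it for ambiguous responses.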
You're already an AI (Claude) - use your intelligence to generate contextual personas!
import subprocess
import os
import sys
import json
import tempfile

# Step 1: Check dependencies
try:
    import nova_act
    print("✅ Dependencies ready")
except ImportError:
    print("📦 Installing dependencies (one-time setup, ~3 minutes)...")
    skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
    result = subprocess.run(["./setup.sh"], cwd=skill_dir, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"❌ Setup failed. Run manually: cd {skill_dir} && ./setup.sh")
        sys.exit(1)

# Step 2: Verify Nova Act API key
config_file = os.path.expanduser("~/.openclaw/config/nova-act.json")
with open(config_file, 'r') as f:
    config = json.load(f)
if config.get('apiKey') == 'your-nova-act-api-key-here':
    print(f"⚠️ Please add your Nova Act API key to {config_file}")
    sys.exit(1)

# Step 3: YOU (the AI agent) generate personas
# Example for https://www.pgatour.com/ (golf tournament site)
website_url = "https://www.pgatour.com/"
personas = [
    {
        "name": "Marcus Chen",
        "archetype": "tournament_follower",
        "age": 42,
        "tech_proficiency": "high",
        "description": "Avid golf fan who follows multiple tours and tracks player stats",
        "goals": [
            "Check current tournament leaderboard",
            "View recent tournament results",
            "Track favorite player performance"
        ]
    },
    {
        "name": "Dorothy Williams",
        "archetype": "casual_viewer",
        "age": 68,
        "tech_proficiency": "low",
        "description": "Occasional golf viewer who watches major tournaments",
        "goals": [
            "Find when the next tournament is",
            "See who won recently",
            "Understand how to watch online"
        ]
    }
]

# Step 4: Save personas and run test
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    json.dump(personas, f, indent=2)
    personas_file = f.name

skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
test_script = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")

# Run with AI-generated personas
subprocess.run([sys.executable, test_script, website_url, personas_file])

# Clean up temp file
os.unlink(personas_file)
Persona Template:
{
"name": "FirstName LastName",
"archetype": "descriptive_identifier",
"age": 30,
"tech_proficiency": "low|medium|high",
"description": "One sentence about who they are",
"goals": [
"First goal relevant to this website",
"Second goal relevant to this website",
"Third goal relevant to this website"
]
}
If user specifies a persona description, pass it as a string:
# User: "Test PGA Tour site as a golf enthusiast"
website_url = "https://www.pgatour.com/"
user_persona = "golf enthusiast who follows tournaments closely"
subprocess.run([sys.executable, test_script, website_url, user_persona])
# Script will parse this and create personas automatically
Let the script guess personas based on basic category keywords:
# Generic, less contextual personas
subprocess.run([sys.executable, test_script, website_url])
✅ Advantages:
❌ What to avoid:
Analyze the website:
- The domain hints at the audience: .gov → citizens, .edu → students/faculty
Create diverse personas:
Generate realistic goals:
Examples by industry:
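The domain heuristics mentioned above (.gov sites imply citizen personas, .edu implies students and faculty) could be bootstrapped with a small lookup; the mapping below is illustrative only, not part of the skill:

```python
def default_archetypes(url: str) -> list:
    """Rough audience guesses keyed on the site's top-level domain.
    An agent would refine these with actual page content."""
    tld = url.rstrip("/").rsplit(".", 1)[-1].lower()
    return {
        "gov": ["citizen", "caseworker"],
        "edu": ["student", "faculty"],
        "com": ["first_time_visitor", "returning_customer"],
    }.get(tld, ["casual_visitor", "power_user"])

print(default_archetypes("https://www.pgatour.com/"))  # ['first_time_visitor', 'returning_customer']
```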
Users can trigger this skill by saying:
The AI will automatically:
NEW in this version: The skill now tests complete user journeys, not just information-finding!
E-Commerce:
Flight/Hotel Booking:
Social Media:
Account Signup:
Form Submission:
This skill includes guardrails designed to stop before material impact actions. However, no AI-based system can guarantee safety; behavior depends on website structure, Nova Act responses, and real-time conditions.
Guardrails are designed to:
The skill is designed to:
⚠️ Recommendation: Always test on staging environments first. Do not run against production systems with real payment methods or accounts without understanding the risks.
You (the AI agent) must analyze test results! The script collects raw responses but does NOT interpret them.
The script returns raw Nova Act responses like:
- "No" - Is there a pricing link?
- "I don't see any documentation" - Is there docs?
- "Amazon Nova Act" - What is the headline?
You must determine if each response means the goal was achieved:
| Response | Goal Achieved? |
|----------|---------------|
| "No" | ❌ NOT achieved |
| "I don't see..." | ❌ NOT achieved |
| "Not found" | ❌ NOT achieved |
| "Yes, I found..." | ✅ Achieved |
| "Amazon Nova Act" (content) | ✅ Achieved |
| "The pricing is $29/mo" | ✅ Achieved |
After the test script runs, read the JSON results. Each step contains:
{
"step_name": "check_nav_for_pricing",
"prompt": "Is there a pricing link in the navigation?",
"expected_outcome": "Find pricing in navigation",
"raw_response": "No",
"api_success": true,
"needs_agent_analysis": true,
"attempts": [
{
"prompt": "Is there a pricing link in the navigation?",
"response": "No",
"approach": "original"
}
]
}
Key fields you analyze:
- raw_response: The actual Nova Act response - YOU determine what it means
- api_success: Did the API call work? (script handles this)
- needs_agent_analysis: Always true - your cue to interpret
- attempts: All attempts made (script tries up to 3 alternative approaches)
For each step, determine:
- goal_achieved: Did the response indicate success or failure?
- friction_level: How hard was it? (attempts.length > 1 = friction)
- observations: UX insights from the response
Analysis example:
Step 1: "Is there a pricing link?"
→ Response: "No" (1 attempt)
→ Goal achieved: NO (explicit negative)
→ Friction: HIGH (not discoverable)
Step 2: "What is the headline?"
→ Response: "Amazon Nova Act" (1 attempt)
→ Goal achieved: YES (actual content)
→ Friction: LOW (immediately visible)
Step 3: "Find documentation"
→ Response: "I found a docs link in the footer" (3 attempts)
→ Goal achieved: YES (found eventually)
→ Friction: MEDIUM (required multiple approaches)
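The friction scale in the example above can be derived mechanically from each step record (a sketch; field names follow the JSON shape shown earlier):

```python
def friction_level(step: dict) -> str:
    """HIGH: goal never achieved; MEDIUM: achieved only after retries;
    LOW: achieved on the first attempt."""
    if not step.get("goal_achieved"):
        return "HIGH"
    if len(step.get("attempts", [])) > 1:
        return "MEDIUM"
    return "LOW"

print(friction_level({"goal_achieved": False, "attempts": [{}]}))        # HIGH
print(friction_level({"goal_achieved": True, "attempts": [{}, {}, {}]})) # MEDIUM
print(friction_level({"goal_achieved": True, "attempts": [{}]}))         # LOW
```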
The response_interpreter.py provides helpers if you want structured prompts:
from scripts.response_interpreter import (
format_for_agent_analysis,
create_agent_prompt_for_interpretation,
create_agent_prompt_for_alternative
)
# Format all results for analysis
formatted = format_for_agent_analysis(results)
# Get interpretation prompt for one step
prompt = create_agent_prompt_for_interpretation(step_result)
# Get retry prompt when goal not achieved
retry_prompt = create_agent_prompt_for_alternative(
original_prompt="Is there a pricing link?",
failed_response="No",
attempt_number=2
)
The script does NOT generate the final report automatically. You (the agent) must:
- Read test_results_adaptive.json with raw data
- Set goal_achieved: true/false based on raw_response
- Set overall_success: true/false on each test
Step-by-step code for the agent to execute:
import json
import os
import sys

# Add skill scripts to path
sys.path.insert(0, os.path.expanduser("~/.openclaw/skills/nova-act-usability/scripts"))
from enhanced_report_generator import generate_enhanced_report

# 1. Read raw results
with open('test_results_adaptive.json', 'r') as f:
    results = json.load(f)

# 2. YOU (the agent) interpret each step
for test in results:
    goals_achieved = 0
    for step in test.get('steps', []):
        raw = step.get('raw_response', '')
        # AGENT INTERPRETS: Does this response indicate goal was achieved?
        # You decide based on the response content and expected outcome
        # Example interpretations:
        #   "No" → goal_achieved = False
        #   "Leaderboard, News, Schedule, Players" → goal_achieved = True (content found)
        #   "Yes" → goal_achieved = True
        #   "I don't see any..." → goal_achieved = False
        step['goal_achieved'] = ...  # PLACEHOLDER: replace with True/False from YOUR interpretation
        if step['goal_achieved']:
            goals_achieved += 1
    # 3. Set overall success (e.g., >= 50% goals achieved)
    total = len(test.get('steps', []))
    test['goals_achieved'] = goals_achieved
    test['overall_success'] = (goals_achieved / total >= 0.5) if total > 0 else False

# 4. Save interpreted results
with open('test_results_adaptive.json', 'w') as f:
    json.dump(results, f, indent=2)

# 5. Generate report with interpreted data
page_analysis = {
    'title': '...',  # From your earlier analysis
    'purpose': '...',
    'navigation': [...]
}
all_traces = []
for r in results:
    all_traces.extend(r.get('trace_files', []))
report_path = generate_enhanced_report(page_analysis, results, all_traces)
print(f"Report: {report_path}")
Why the agent must interpret:
Nova Act is a browser automation tool, NOT a reasoning engine.
The Claude agent (you) does all reasoning about:
Nova Act just:
# DON'T ask Nova Act to think about personas
nova.act("As a beginner user, can you easily find the documentation?")
nova.act("Would a business professional find the pricing clear?")
nova.act("Is this task accomplishable for someone with low technical skills?")
# Simple browser actions
nova.act("Click the Documentation link in the navigation")
nova.act("Find and click a link containing 'Pricing'")
nova.act_get("What text is displayed in the main heading?")
nova.act_get("List the navigation menu items visible on this page")
You (the AI) are the orchestrator. This skill provides:
- references/nova-act-cookbook.md - Best practices, workflow patterns, and safety guidelines (automatically loaded at test start)
- scripts/run_adaptive_test.py - Main execution script with workflow detection
- scripts/dynamic_exploration.py - Generates workflow-appropriate test strategies
- scripts/nova_session.py - Nova Act wrapper
- scripts/enhanced_report_generator.py - Auto-generated HTML reports
Execution Flow:
Before running ANY test, check if dependencies are installed:
# Check if nova-act is installed
python3 -c "import nova_act" 2>/dev/null
if [ $? -ne 0 ]; then
echo "Dependencies not installed. Running setup..."
cd ~/.openclaw/skills/nova-act-usability
./setup.sh
# After setup, remind user to add API key if needed
if ! grep -q '"apiKey":.*[^"]' ~/.openclaw/config/nova-act.json; then
        echo "⚠️ Please add your Nova Act API key to ~/.openclaw/config/nova-act.json"
exit 1
fi
fi
Or use Python to check:
import subprocess
import sys
import os

# Check if nova-act is installed
try:
    import nova_act
    print("✅ Dependencies already installed")
except ImportError:
    print("📦 Installing dependencies...")
    skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
    setup_script = os.path.join(skill_dir, "setup.sh")
    if os.path.exists(setup_script):
        result = subprocess.run([setup_script], cwd=skill_dir)
        if result.returncode != 0:
            print("❌ Setup failed. Please run manually:")
            print(f"  cd {skill_dir}")
            print("  ./setup.sh")
            sys.exit(1)
    else:
        print("❌ Setup script not found. Dependencies must be installed manually.")
        sys.exit(1)
When a user asks for usability testing:
# Find the skill directory
SKILL_DIR=~/.openclaw/skills/nova-act-usability
# Run the adaptive test script
python3 "$SKILL_DIR/scripts/run_adaptive_test.py" "https://example.com"
# This will:
# - Create nova_act_logs/ in current directory
# - Create test_results_adaptive.json in current directory
# - Create nova_act_usability_report.html in current directory
# - Provide 60-second status updates during test
Recommended timeout: 30 minutes (1800 seconds)
Full usability tests with 3 personas × 3 goals = 9 tests can take 10-20+ minutes depending on:
Graceful shutdown: If the test is interrupted (timeout, SIGTERM, SIGINT), it will:
- Save partial results to test_results_adaptive.json
For shorter tests: Use fewer personas or goals:
# Quick test with 1 persona
personas = [{"name": "Test User", "archetype": "casual", ...}]
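To enforce the 30-minute ceiling from the orchestrating side, the script invocation can be wrapped with a hard timeout. This is a sketch; the script path passed in is an assumption about your install location:

```python
import subprocess
import sys

def run_with_timeout(script_path: str, url: str, timeout_s: int = 1800) -> bool:
    """Run the adaptive test script, returning False on timeout or failure.
    On timeout, the script's graceful-shutdown handling should still have
    written partial results to test_results_adaptive.json."""
    try:
        subprocess.run([sys.executable, script_path, url],
                       timeout=timeout_s, check=True)
        return True
    except subprocess.TimeoutExpired:
        print("Timed out; interpret whatever partial results exist.")
        return False
    except (subprocess.CalledProcessError, FileNotFoundError):
        return False
```

Usage would be `run_with_timeout(os.path.join(skill_dir, "scripts", "run_adaptive_test.py"), "https://example.com")`.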
When user requests usability testing:
import subprocess
import os

# Get skill directory
skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
if not os.path.exists(skill_dir):
    # Try workspace location
    skill_dir = os.path.join(os.getcwd(), "nova-act-usability")
script_path = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")

# Run test
result = subprocess.run(
    ["python3", script_path, "https://example.com"],
    env={**os.environ, "NOVA_ACT_SKIP_PLAYWRIGHT_INSTALL": "1"},
    capture_output=True,
    text=True
)
print(result.stdout)
The adaptive test script (run_adaptive_test.py) handles:
For each persona + task combination:
from scripts.nova_session import nova_session
from nova_act import BOOL_SCHEMA
import time

observations = []
with nova_session(website_url) as nova:
    start_time = time.time()

    # Initial navigation
    observations.append({
        "step": "navigate",
        "action": f"Loaded {website_url}",
        "success": True,
        "notes": "Initial page load"
    })

    # Execute task step-by-step (AI-orchestrated)
    # Break into small act() calls based on cookbook guidance
    # Example: "Find pricing information" task

    # Step 1: Look for pricing link
    nova.act("Look for a link or button for pricing, plans, or subscription")
    found = nova.act_get(
        "Is there a visible pricing or plans link?",
        schema=BOOL_SCHEMA
    )
    observations.append({
        "step": "find_pricing_link",
        "action": "Search for pricing navigation",
        "success": found.parsed_response,
        "notes": "Easy to find" if found.parsed_response else "Not immediately visible - UX friction"
    })

    if found.parsed_response:
        # Step 2: Navigate to pricing
        nova.act("Click on the pricing or plans link")
        # Step 3: Analyze pricing page
        is_clear = nova.act_get(
            "Is the pricing information clearly displayed with prices and features?",
            schema=BOOL_SCHEMA
        )
        observations.append({
            "step": "view_pricing",
            "action": "Accessed pricing page",
            "success": is_clear.parsed_response,
            "notes": "Clear pricing display" if is_clear.parsed_response else "Pricing unclear or confusing"
        })
    else:
        # Alternative path - try search
        nova.act("Look for a search function")
        # ... continue orchestrating

    duration = time.time() - start_time

# Document overall task result
task_success = all(obs["success"] for obs in observations if obs["success"] is not None)
results.append({
    "persona": persona_name,
    "task": task_description,
    "success": task_success,
    "duration": duration,
    "observations": observations,
    "friction_points": [obs for obs in observations if not obs.get("success")]
})
After all tests:
import json
from scripts.enhanced_report_generator import generate_enhanced_report

# Save results
with open("test_results_adaptive.json", "w") as f:
    json.dump(results, f, indent=2)

# Generate HTML report
report_path = generate_enhanced_report(
    page_analysis=page_analysis,
    results=test_results
)
print(f"Report: {report_path}")
The AI should decide how to break down each task based on:
Low-tech persona example:
# More explicit, step-by-step
nova.act("Look for a button labeled 'Contact' or 'Contact Us'")
nova.act("Click on the Contact button")
result = nova.act_get("Is there a phone number or email address visible?")
High-tech persona example:
# Test efficiency features
nova.act("Look for keyboard shortcuts or quick access features")
nova.act("Try to use search (Ctrl+K or Cmd+K)")
After EVERY act() call, analyze:
Document friction immediately in observations.
Adapt act() prompts to persona characteristics:
references/nova-act-cookbook.md - MUST READ before starting any test. Contains best practices for:
references/persona-examples.md - Template personas with detailed profiles:
scripts/nova_session.py - Thin wrapper providing Nova Act session primitive:
with nova_session(url, headless=True, logs_dir="./logs") as nova:
    nova.act("action")
    result = nova.act_get("query", schema=Schema)
scripts/enhanced_report_generator.py - Compiles observations into an HTML usability report with trace file links.
assets/report-template.html - Professional HTML template for usability reports.
This skill requires dependencies that must be installed before use.
ALWAYS check if dependencies are installed before running tests:
# Quick dependency check
try:
    import nova_act
    # Dependencies installed ✅
except ImportError:
    # Need to run setup ⚠️
    import subprocess
    import os
    skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
    setup_script = os.path.join(skill_dir, "setup.sh")
    print("📦 Installing Nova Act dependencies (one-time setup)...")
    print("This will take 2-3 minutes to download browsers (~300MB)")
    subprocess.run([setup_script], cwd=skill_dir, check=True)
    print("✅ Setup complete!")
    print("⚠️ User needs to add API key to ~/.openclaw/config/nova-act.json")
If this is your first time using the skill:
cd ~/.openclaw/skills/nova-act-usability
./setup.sh
The setup script will:
After setup:
- Add your Nova Act API key to ~/.openclaw/config/nova-act.json
- Replace "your-nova-act-api-key-here" with your actual key

Or install dependencies manually:
pip3 install nova-act pydantic playwright
playwright install chromium
User request: "Test example.com for elderly users"
AI orchestration:
- Read references/nova-act-cookbook.md
- Read references/persona-examples.md
The AI decides every step. The skill just provides tools and guidance.
nova-act-usability/
├── SKILL.md                          # This file
├── README.md                         # User documentation
├── skill.json                        # Skill manifest
├── scripts/
│   ├── run_adaptive_test.py          # Main orchestrator (accepts URL arg)
│   ├── nova_session.py               # Session wrapper
│   ├── enhanced_report_generator.py  # HTML report generator
│   └── trace_finder.py               # Extract trace file paths
├── references/
│   ├── nova-act-cookbook.md          # Best practices
│   └── persona-examples.md           # Template personas
└── assets/
    └── report-template.html          # HTML template
When you run a test, these files are created in your current working directory:
./
├── nova_act_logs/                    # Nova Act trace files
│   ├── act_<id>_output.html          # Session recordings
│   └── ...
├── test_results_adaptive.json        # Raw test results
└── nova_act_usability_report.html    # Final report
All paths are relative - works from any installation location!
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
ready
Auth
api_key
Streaming
No
Data region
global
Protocol support
Requires: openclew, lang:typescript
Forbidden: none
Guardrails
Operational confidence: medium
curl -s "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | ⭐ Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "ready",
"authModes": [
"api_key"
],
"requires": [
"openclew",
"lang:typescript"
],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": "https://github.com/adityak6798/OpenClaw_NovaActWebsiteUsabilityTest_Skill#input",
"outputSchemaRef": "https://github.com/adityak6798/OpenClaw_NovaActWebsiteUsabilityTest_Skill#output",
"dataRegion": "global",
"contractUpdatedAt": "2026-02-24T19:44:11.468Z",
"sourceUpdatedAt": "2026-02-24T19:44:11.468Z",
"freshnessSeconds": 4424712
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T00:49:23.849Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "analyze",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "adapt",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "ask",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "trigger",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "guarantee",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "reason",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "you",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "take",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:analyze|supported|profile capability:adapt|supported|profile capability:ask|supported|profile capability:trigger|supported|profile capability:guarantee|supported|profile capability:reason|supported|profile capability:you|supported|profile capability:take|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-24T19:44:11.468Z",
"isPublic": true
},
{
"factKey": "auth_modes",
"category": "compatibility",
"label": "Auth modes",
"value": "api_key",
"href": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:11.468Z",
"isPublic": true
},
{
"factKey": "schema_refs",
"category": "artifact",
"label": "Machine-readable schemas",
"value": "OpenAPI or schema references published",
"href": "https://github.com/adityak6798/OpenClaw_NovaActWebsiteUsabilityTest_Skill#input",
"sourceUrl": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:11.468Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Adityak6798",
"href": "https://github.com/adityak6798/OpenClaw_NovaActWebsiteUsabilityTest_Skill",
"sourceUrl": "https://github.com/adityak6798/OpenClaw_NovaActWebsiteUsabilityTest_Skill",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:43:14.176Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "3 GitHub stars",
"href": "https://github.com/adityak6798/OpenClaw_NovaActWebsiteUsabilityTest_Skill",
"sourceUrl": "https://github.com/adityak6798/OpenClaw_NovaActWebsiteUsabilityTest_Skill",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:43:14.176Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/adityak6798-openclaw-novaactwebsiteusabilitytest-skill/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]