Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Analyze annotated UI screenshots and markdown documentation to generate agent-consumable UI flow specifications. Use when processing web app UI flows described via markdown + screenshots into structured, automation-ready knowledge. License: MIT. Author: Bowen. Version: 1.0. Tags: ui-analysis, multimodal, automation, workflow. Capability contract not published. No trust telemetry is available yet. 2 GitHub stars reported by the source. Last updated 4/14/2026.
Freshness
Last checked 4/14/2026
Best For
multimodal-ui-flow-analyzer is best for general automation workflows where OpenClaw compatibility matters.
Not Ideal For
Deterministic execution pipelines, because contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Apr 14, 2026
Capability contract not published. No trust telemetry is available yet. 2 GitHub stars reported by the source. Last updated 4/14/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 14, 2026
Vendor
Boweneos
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 2 GitHub stars reported by the source. Last updated 4/14/2026.
Setup snapshot
git clone https://github.com/boweneos/ui-flow-agent-skills.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Boweneos
Protocol compatibility
OpenClaw
Adoption signal
2 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
5
Snippets
0
Languages
typescript
Parameters
md
## Step N: <Short Title>
**Intent:** What the user is trying to accomplish.
**User Action (Text):** Plain-language description of the interaction.
**Visual Reference:** 
**Visual Annotations:**
- Box / arrow / highlight descriptions
json
{
"annotation_mapping": {
"red_box": "Primary action button",
"arrow": "Cursor movement direction",
"highlight": "Target input field"
}
}
json
{
"step_id": "step-N",
"intent": "Description of user goal",
"action": "click|type|select|scroll|hover",
"ui_element": {
"type": "button|input|link|menu|dropdown",
"label": "Visible text or aria-label",
"visual_location": "Position description",
"identification_strategy": [
"visible text equals 'X'",
"role=button",
"data-testid='element-id'"
]
},
"precondition": "Required state before action",
"resulting_state": "Expected state after action"
}
md
# UI_FLOW: <flow_name>

## Metadata
- App: <Application Name>
- Flow Type: Static UI Interaction
- Source: Annotated screenshots + human-authored text

---

## Step 1
**Intent:** <goal>
**Action:**
- type: <action_type>
- target:
  - role: <element_role>
  - text: "<visible_text>"
  - location: <position_description>
**Preconditions:**
- <required_state>
**Postconditions:**
- <resulting_state>
**Automation Notes:**
- <selector_recommendations>
json
{
"step_id": "step-2",
"intent": "Create a new project",
"action": "click",
"ui_element": {
"type": "button",
"label": "Create Project",
"visual_location": "top-right of main content area",
"identification_strategy": [
"visible text equals 'Create Project'",
"role=button"
]
},
"precondition": "User is on Projects dashboard",
"resulting_state": "Project creation modal opens"
}
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
---
name: multimodal-ui-flow-analyzer
description: Analyze annotated UI screenshots and markdown documentation to generate agent-consumable UI flow specifications. Use when processing web app UI flows described via markdown + screenshots into structured, automation-ready knowledge.
license: MIT
metadata:
  author: Bowen
  version: "1.0"
tags:
  - ui-analysis
  - multimodal
  - automation
  - workflow
---
Multimodal UI Flow Analy
This skill enables you to analyze static web app UI flows described via markdown + annotated screenshots and output agent-consumable knowledge for downstream AI code agents.
Activate this skill when:
Before analysis, ensure each UI step follows this structure:
## Step N: <Short Title>
**Intent:**
What the user is trying to accomplish.
**User Action (Text):**
Plain-language description of the interaction.
**Visual Reference:**

**Visual Annotations:**
- Box / arrow / highlight descriptions
If the input doesn't follow this format, restructure it first.
When analyzing screenshots:
Analyze one step at a time, never the entire document at once.
For each step, extract:
Annotation Priority Rules:
Map annotations explicitly:
{
"annotation_mapping": {
"red_box": "Primary action button",
"arrow": "Cursor movement direction",
"highlight": "Target input field"
}
}
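Applying a mapping like this is mechanical; a minimal TypeScript sketch (the function and variable names are illustrative, not part of the published skill):

```typescript
// Map raw annotation markers (e.g. "red_box") to their UI semantics.
type AnnotationMapping = Record<string, string>;

const mapping: AnnotationMapping = {
  red_box: "Primary action button",
  arrow: "Cursor movement direction",
  highlight: "Target input field",
};

// Resolve the markers found in one screenshot to semantic labels,
// flagging any marker the mapping does not cover so the analyst can
// extend the mapping instead of silently dropping an annotation.
function resolveAnnotations(
  markers: string[],
  map: AnnotationMapping
): { resolved: Record<string, string>; unknown: string[] } {
  const resolved: Record<string, string> = {};
  const unknown: string[] = [];
  for (const m of markers) {
    if (m in map) resolved[m] = map[m];
    else unknown.push(m);
  }
  return { resolved, unknown };
}
```

Collecting unknown markers separately keeps the one-step-at-a-time analysis honest: an unmapped annotation is surfaced rather than guessed at.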
Produce output in the canonical format (see templates in assets/templates/).
Per-Step JSON Format:
{
"step_id": "step-N",
"intent": "Description of user goal",
"action": "click|type|select|scroll|hover",
"ui_element": {
"type": "button|input|link|menu|dropdown",
"label": "Visible text or aria-label",
"visual_location": "Position description",
"identification_strategy": [
"visible text equals 'X'",
"role=button",
"data-testid='element-id'"
]
},
"precondition": "Required state before action",
"resulting_state": "Expected state after action"
}
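The per-step JSON format above translates directly into TypeScript types; a hedged sketch (field names are taken from the documented schema, the validator itself is an assumption about how a consumer might gate input):

```typescript
// Types mirroring the documented per-step JSON format.
type UIAction = "click" | "type" | "select" | "scroll" | "hover";
type UIElementType = "button" | "input" | "link" | "menu" | "dropdown";

interface UIElement {
  type: UIElementType;
  label: string;
  visual_location: string;
  identification_strategy: string[];
}

interface UIFlowStep {
  step_id: string;
  intent: string;
  action: UIAction;
  ui_element: UIElement;
  precondition: string;
  resulting_state: string;
}

// Minimal structural check before handing a step to an automation agent.
function isValidStep(s: any): s is UIFlowStep {
  return (
    typeof s?.step_id === "string" &&
    typeof s?.intent === "string" &&
    ["click", "type", "select", "scroll", "hover"].includes(s?.action) &&
    typeof s?.ui_element?.label === "string" &&
    Array.isArray(s?.ui_element?.identification_strategy) &&
    s.ui_element.identification_strategy.length > 0
  );
}
```

Requiring at least one identification strategy matches the schema's intent: a step a downstream agent cannot locate is not automation-ready.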
Flow Markdown Format:
# UI_FLOW: <flow_name>
## Metadata
- App: <Application Name>
- Flow Type: Static UI Interaction
- Source: Annotated screenshots + human-authored text
---
## Step 1
**Intent:** <goal>
**Action:**
- type: <action_type>
- target:
- role: <element_role>
- text: "<visible_text>"
- location: <position_description>
**Preconditions:**
- <required_state>
**Postconditions:**
- <resulting_state>
**Automation Notes:**
- <selector_recommendations>
For each step, include:
Stable DOM selectors (prefer semantic)
- role=button + visible text
- data-testid attributes
- aria-label values
Brittle selectors to avoid
Wait conditions
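The selector preference order can be encoded as a simple ranking; a TypeScript sketch (the scoring values are illustrative, only the relative order follows the guidance above):

```typescript
// Rank identification strategies by stability: semantic role and visible
// text first, then data-testid, then aria-label. Anything unrecognized
// (nth-child paths, absolute XPath, etc.) is treated as brittle fallback.
function rankStrategies(strategies: string[]): string[] {
  const score = (s: string): number => {
    if (s.startsWith("role=")) return 0;
    if (s.startsWith("visible text")) return 1;
    if (s.includes("data-testid")) return 2;
    if (s.includes("aria-label")) return 3;
    return 99; // brittle: avoid unless nothing semantic is available
  };
  return [...strategies].sort((a, b) => score(a) - score(b));
}
```

A downstream automation agent can then try strategies in ranked order and fall back only when the preferred locator fails.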
Before finalizing, verify:
Use templates from assets/templates/:
| Template | Purpose |
|----------|---------|
| step-output.json | Single step structured output |
| flow-output.md | Complete flow specification |
| automation-hints.md | Test automation guidance |
Input: User provides markdown with annotated screenshot showing a "Create Project" button highlighted with a red box.
Analysis Process:
Output:
{
"step_id": "step-2",
"intent": "Create a new project",
"action": "click",
"ui_element": {
"type": "button",
"label": "Create Project",
"visual_location": "top-right of main content area",
"identification_strategy": [
"visible text equals 'Create Project'",
"role=button"
]
},
"precondition": "User is on Projects dashboard",
"resulting_state": "Project creation modal opens"
}
If multiple elements are highlighted, process them in visual reading order (top-to-bottom, left-to-right).
If a step lacks a visual reference, flag it and proceed with text-only analysis. Note reduced confidence in output.
For drag-and-drop or multi-select, describe both source and target elements with separate identification strategies.
If the UI shows dynamic content (lists, tables), describe the interaction pattern rather than specific instances.
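The "visual reading order" rule for multiple highlighted elements can be made concrete with a sort over bounding-box coordinates; a sketch under the assumption that each annotation carries a top-left position in pixels:

```typescript
// A highlighted element with its top-left corner in screenshot pixels.
interface Box {
  label: string;
  x: number; // left edge, px
  y: number; // top edge, px
}

// Sort into visual reading order: top-to-bottom first, then
// left-to-right within a row. Boxes whose top edges differ by no more
// than rowTolerance pixels are treated as sitting on the same row.
function readingOrder(boxes: Box[], rowTolerance = 10): Box[] {
  return [...boxes].sort((a, b) => {
    if (Math.abs(a.y - b.y) > rowTolerance) return a.y - b.y;
    return a.x - b.x;
  });
}
```

The row tolerance matters in practice: two buttons on the same toolbar rarely share an exact y coordinate, so a strict y-then-x sort would shuffle them.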
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/snapshot"
curl -s "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/contract"
curl -s "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/trust"
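The invocation guide further down documents a retry policy (3 attempts, 500/1500/3500 ms backoff, retry on HTTP 429, HTTP 503, and network timeout). A TypeScript sketch of a client honoring that policy, assuming a global fetch (Node 18+); the wrapper itself is illustrative, not a published SDK:

```typescript
const MAX_ATTEMPTS = 3;
const BACKOFF_MS = [500, 1500, 3500];
const RETRYABLE = new Set([429, 503]);

// Retry only on the documented retryable statuses, and never beyond
// the documented attempt budget.
function shouldRetry(status: number, attempt: number): boolean {
  return RETRYABLE.has(status) && attempt < MAX_ATTEMPTS - 1;
}

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Fetch an agent endpoint (snapshot, contract, or trust) with the
// documented backoff schedule between attempts.
async function fetchWithRetry(url: string): Promise<unknown> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url);
    if (res.ok) return res.json();
    if (!shouldRetry(res.status, attempt)) {
      throw new Error(`request failed: HTTP ${res.status}`);
    }
    await sleep(BACKOFF_MS[attempt]);
  }
}
```

With this shape, a 429 on the trust endpoint is retried twice (after 500 ms and 1500 ms) before surfacing an error, while a 404 fails immediately.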
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T03:37:06.857Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Boweneos",
"href": "https://github.com/boweneos/ui-flow-agent-skills",
"sourceUrl": "https://github.com/boweneos/ui-flow-agent-skills",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-14T22:27:24.939Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-14T22:27:24.939Z",
"isPublic": true
},
{
"factKey": "traction",
"category": "adoption",
"label": "Adoption signal",
"value": "2 GitHub stars",
"href": "https://github.com/boweneos/ui-flow-agent-skills",
"sourceUrl": "https://github.com/boweneos/ui-flow-agent-skills",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-14T22:27:24.939Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/boweneos-ui-flow-agent-skills/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]