Crawler Summary

text-browser answer-first brief

Use text-based browser tools for web browsing, scraping, and automation tasks without a GUI. Use when you need to navigate websites, extract content, search the web, or interact with web pages programmatically. Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Freshness

Last checked 4/14/2026

Best For

text-browser is best for general automation workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 94/100

text-browser

Use text-based browser tools for web browsing, scraping, and automation tasks without a GUI. Use when you need to navigate websites, extract content, search the web, or interact with web pages programmatically.

OpenClaw (self-declared)

Public facts

4

Change events

1

Artifacts

0

Freshness

Apr 14, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 14, 2026

Vendor

Atakhadiviom

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 4/14/2026.

Setup snapshot

git clone https://github.com/atakhadiviom/text-browser-skill.git

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Atakhadiviom

profile · medium confidence
Observed Apr 14, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Apr 14, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

web_search(query="python web scraping", count=5)

bash

web_fetch(url="https://example.com", extractMode="markdown", maxChars=10000)

bash

browser(action="snapshot", profile="openclaw")
browser(action="navigate", targetUrl="https://example.com")
browser(action="act", request={"kind": "click", "ref": "submit-button"})

bash

web_search(query="laptop comparison", count=5)
# Fetch top 3 results
web_fetch(url="result1", extractMode="markdown")
web_fetch(url="result2", extractMode="markdown")
web_fetch(url="result3", extractMode="markdown")

bash

web_search(query="tech news", count=5, freshness="pd")
web_fetch(url="news1", extractMode="markdown")
web_fetch(url="news2", extractMode="markdown")

bash

web_search(query="Python requests docs", count=3)
web_fetch(url="docs_url", extractMode="markdown", maxChars=50000)

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready


Full README

name: text-browser
description: Use text-based browser tools for web browsing, scraping, and automation tasks without GUI. Use when you need to navigate websites, extract content, search the web, or interact with web pages programmatically.

Text-Based Browser

This skill provides guidance for using text-based browser tools to interact with the web without a graphical interface. Perfect for automated scraping, content extraction, web research, and programmatic web interactions.

Available Tools

Web Search

Tool: web_search
Use when: You need to search the web for information
Parameters:

  • query: Search terms
  • count: Number of results (1-10)
  • country: 2-letter country code for regional results
  • search_lang: Language code for results
  • freshness: Filter by discovery time

Example:

web_search(query="python web scraping", count=5)

Web Fetch

Tool: web_fetch
Use when: You need to extract readable content from a specific URL
Parameters:

  • url: HTTP or HTTPS URL to fetch
  • extractMode: "markdown" or "text" (default: markdown)
  • maxChars: Maximum characters to return

Example:

web_fetch(url="https://example.com", extractMode="markdown", maxChars=10000)

Browser Automation

Tool: browser
Use when: You need to navigate, click, type, or interact with a website
Actions:

  • snapshot - Get page structure and elements
  • navigate - Go to a URL
  • act - Perform actions (click, type, press, select, fill, wait)
  • screenshot - Capture page visual
  • focus - Switch to a specific tab

Profiles:

  • openclaw - Isolated browser (default)
  • chrome - Extension relay (requires user to attach tab)

Example:

browser(action="snapshot", profile="openclaw")
browser(action="navigate", targetUrl="https://example.com")
browser(action="act", request={"kind": "click", "ref": "submit-button"})

Workflow Patterns

Pattern 1: Quick Research

Use web_search + web_fetch for fast information gathering:

  1. Search for topic
  2. Fetch relevant URLs
  3. Extract and summarize content
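The three steps above can be sketched in Python. The `search` and `fetch` callables stand in for the `web_search` and `web_fetch` tools; the keyword arguments mirror the examples in this README, but the result shapes (a list of dicts with a `url` key, page content as a string) are assumptions, not the documented API.

```python
def quick_research(search, fetch, query, top_n=3, max_chars=10000):
    """Search for a topic, then fetch the top results as markdown."""
    results = search(query=query, count=top_n)
    pages = []
    for result in results[:top_n]:
        # extractMode="markdown" keeps headings and links readable
        pages.append(fetch(url=result["url"], extractMode="markdown",
                           maxChars=max_chars))
    return pages

# Demo with stub tools (no real network):
stub_search = lambda query, count: [{"url": f"https://example.com/{i}"}
                                    for i in range(count)]
stub_fetch = lambda url, extractMode, maxChars: f"content of {url}"
pages = quick_research(stub_search, stub_fetch, "python web scraping")
```

The summarization step is left to the caller, since it depends on the agent's own model rather than on these tools.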

Pattern 2: Page Scraping

Use web_fetch for straightforward content extraction:

  1. Get URL
  2. Fetch with markdown mode
  3. Parse and extract needed information

Pattern 3: Interactive Tasks

Use browser automation for complex interactions:

  1. Navigate to page
  2. Snapshot to understand structure
  3. Perform actions (click, type, wait)
  4. Capture results

Pattern 4: Multi-Page Research

Combine tools for comprehensive research:

  1. Search for multiple terms
  2. Fetch top results
  3. Use browser automation for sites requiring JavaScript
  4. Aggregate findings

Best Practices

Choose the Right Tool

  • web_search - When you don't know specific URLs
  • web_fetch - When you have a URL and just need content
  • browser - When you need to interact (forms, dynamic content, complex navigation)

Optimize for Speed

  • Use web_fetch whenever possible (fastest)
  • Prefer snapshot over full-page interactions when possible
  • Use refs="aria" for stable element references
  • Set appropriate limit and maxChars to avoid bloat

Handle Errors Gracefully

  • Network errors: Retry with timeout
  • 404/403: Check URL or try alternative
  • Parse errors: Try different extract mode or use browser automation
  • Rate limiting: Add delays between requests

Respect Rate Limits

  • Add delays between repeated requests to same domain
  • Use count parameter wisely (3-5 is usually sufficient)
  • Cache results when appropriate

Advanced Techniques

Pagination Handling

For paginated content, use browser automation:

  1. Snapshot page
  2. Extract next button reference
  3. Click and wait
  4. Repeat until end
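A sketch of that loop, assuming the `browser` tool returns snapshots as dicts and that a helper has already extracted the next-button reference into a `next_ref` field; both are assumptions, since real snapshots expose page structure you would parse yourself.

```python
def crawl_pagination(browser, max_pages=10):
    """Snapshot, click next, wait; stop when no next button remains."""
    pages = []
    for _ in range(max_pages):
        snap = browser(action="snapshot")
        pages.append(snap["content"])
        next_ref = snap.get("next_ref")    # assumed pre-extracted reference
        if not next_ref:
            break                          # last page reached
        browser(action="act", request={"kind": "click", "ref": next_ref})
        browser(action="act", request={"kind": "wait", "timeMs": 1000})
    return pages

# Demo with a stub browser that exposes three pages:
_state = {"page": 0}
def _stub_browser(action, request=None):
    if action == "snapshot":
        _state["page"] += 1
        more = _state["page"] < 3
        return {"content": f"page {_state['page']}",
                "next_ref": "next" if more else None}
    return {}
pages = crawl_pagination(_stub_browser)
```

The `max_pages` cap guards against sites whose next button never disappears.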

Form Submission

Use browser automation for forms:

  1. Snapshot to find form fields
  2. Use act with kind: "fill" for multiple fields
  3. Submit with button click
  4. Wait and verify results

Content Extraction Strategy

  1. Try web_fetch with markdown mode first (fastest)
  2. If insufficient, use browser snapshot for structure
  3. Use specific selectors or aria refs for precise extraction
  4. Combine with regex parsing if needed

Working with JavaScript Sites

When web_fetch returns incomplete content:

  1. Use browser automation
  2. Navigate to URL
  3. Wait for page load (use act with kind: "wait" and timeMs)
  4. Snapshot to get rendered content
  5. Extract needed information

Common Use Cases

Product Research

Search for products, compare prices, extract specs:

web_search(query="laptop comparison", count=5)
# Fetch top 3 results
web_fetch(url="result1", extractMode="markdown")
web_fetch(url="result2", extractMode="markdown")
web_fetch(url="result3", extractMode="markdown")

News Aggregation

Gather news from multiple sources:

web_search(query="tech news", count=5, freshness="pd")
web_fetch(url="news1", extractMode="markdown")
web_fetch(url="news2", extractMode="markdown")

Documentation Lookup

Find and extract API docs or guides:

web_search(query="Python requests docs", count=3)
web_fetch(url="docs_url", extractMode="markdown", maxChars=50000)

Data Extraction from Forms

Submit forms and scrape results:

browser(action="navigate", targetUrl="https://example.com/search")
browser(action="act", request={"kind": "fill", "fields": [{"ref": "query", "text": "search term"}]})
browser(action="act", request={"kind": "click", "ref": "search-button"})
browser(action="act", request={"kind": "wait", "timeMs": 2000})
browser(action="snapshot")

Troubleshooting

"Content not loading"

  • Switch from web_fetch to browser automation
  • Add wait times for dynamic content
  • Check for JavaScript rendering requirements

"Can't find element"

  • Use refs="aria" for stable references
  • Take new snapshot after page changes
  • Verify element still exists after actions

"Rate limit errors"

  • Add delays between requests (1-3 seconds)
  • Reduce count parameter
  • Respect robots.txt and terms of service

"Slow performance"

  • Use web_fetch instead of browser when possible
  • Reduce maxChars to only what's needed
  • Cache results when repeating same queries

Security Considerations

  • Always respect robots.txt
  • Don't overwhelm servers (add delays)
  • Don't bypass authentication unless authorized
  • Handle sensitive data carefully
  • Log actions for transparency

When to Use This Skill

Use text-browser when:

  • You need to search the web
  • You need to extract content from URLs
  • You need to interact with web pages
  • You need to scrape or research programmatically
  • You need to work with web APIs that don't have direct integration

See Also

  • Agent Browser skill - For GUI-based browser automation
  • web_search tool - For quick searches
  • web_fetch tool - For content extraction

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples

curl -s "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

  • Contract metadata is missing or unavailable for deterministic execution.
  • No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW

Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T04:20:49.731Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Atakhadiviom",
    "href": "https://github.com/atakhadiviom/text-browser-skill",
    "sourceUrl": "https://github.com/atakhadiviom/text-browser-skill",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-14T22:24:06.674Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-14T22:24:06.674Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/atakhadiviom-text-browser-skill/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
