Crawler Summary

AISquare-Studio-QA answer-first brief

AISquare Studio AutoQA: setting up QA testing agents using Playwright and CrewAI.

Tags: ai, automation, testing, playwright, github-action, crewai, openai, qa, test-generation, multi-agent

**AI-powered GitHub Action that converts natural language test descriptions in pull request bodies into fully automated Playwright tests.** Write what you want to test in plain English — AutoQA generates, executes, and commits production-ready test code using CrewAI multi-agent orchestration.

Capability contract not published. No trust telemetry is available yet. 160 GitHub stars reported by the source. Last updated 4/15/2026.

Freshness

Last checked 4/15/2026

Best For

AISquare-Studio-QA is best for CrewAI-based, multi-agent QA workflows where OpenClaw compatibility matters.

Not Ideal For

Pipelines that require deterministic execution, because contract metadata is missing or unavailable.

Evidence Sources Checked

editorial-content, GITHUB REPOS, runtime-metrics, public facts pack

Agent Dossier · GITHUB REPOS · Safety: 75/100

AISquare-Studio-QA

AISquare Studio AutoQA: setting up QA testing agents using Playwright and CrewAI.

Tags: ai, automation, testing, playwright, github-action, crewai, openai, qa, test-generation, multi-agent

**AI-powered GitHub Action that converts natural language test descriptions in pull request bodies into fully automated Playwright tests.** Write what you want to test in plain English — AutoQA generates, executes, and commits production-ready test code using CrewAI multi-agent orchestration.

OpenClaw (self-declared)

Public facts

5

Change events

1

Artifacts

0

Freshness

Apr 15, 2026

Verified · editorial-content · No verified compatibility signals · 160 GitHub stars

Capability contract not published. No trust telemetry is available yet. 160 GitHub stars reported by the source. Last updated 4/15/2026.

160 GitHub stars · Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

AISquare Studio

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. 160 GitHub stars reported by the source. Last updated 4/15/2026.

Setup snapshot

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

AISquare Studio

profile · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Adoption (1)

Adoption signal

160 GitHub stars

profile · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB REPOS

Extracted files

0

Examples

6

Snippets

0

Languages

python

Executable Examples

PR Description          AutoQA Action              Your Repository
┌──────────────┐     ┌──────────────────┐     ┌──────────────────┐
│  ```autoqa   │     │ 1. Parse PR body │     │ tests/autoqa/    │
│  flow: login │────▶│ 2. Generate code │────▶│   A/auth/        │
│  tier: A     │     │ 3. Validate AST  │     │     test_login.py│
│  area: auth  │     │ 4. Execute tests │     └──────────────────┘
│  ```         │     │ 5. Commit on pass│
│              │     │ 6. Comment on PR │
│  1. Go to /  │     └──────────────────┘
│  2. Login    │
│  3. Verify   │
└──────────────┘

1. A developer writes numbered test steps in the PR description inside a fenced `autoqa` block
2. The GitHub Action triggers on PR open/edit/sync events
3. AutoQA parses the PR body for metadata (`flow_name`, `tier`, `area`) and test steps
4. CrewAI agents generate Playwright Python test code from the steps
5. Generated code is validated via AST analysis and executed against your staging environment
6. On success, the test file is committed to `tests/autoqa/{tier}/{area}/test_{flow_name}.py`
7. Results and screenshots are posted as a PR comment

---

## Quick Start

### 1. Add the workflow

Create `.github/workflows/autoqa.yml` in your repository:

```yaml
name: AutoQA Test Generation

on:
  pull_request:
    types: [opened, synchronize, edited]

jobs:
  autoqa:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Generate and Execute Tests
        uses: AISquare-Studio/AISquare-Studio-QA@main
        with:
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          staging-url: ${{ secrets.STAGING_URL }}
          staging-email: ${{ secrets.STAGING_EMAIL }}
          staging-password: ${{ secrets.STAGING_PASSWORD }}
```

### 2. Configure secrets

Add the following secrets in your repository's **Settings → Secrets and variables → Actions**:

| Secret             | Description                       |
| ------------------ | --------------------------------- |
| `OPENAI_API_KEY`   | OpenAI API key (GPT-4 access)     |
| `STAGING_URL`      | Staging environment login URL     |
| `STAGING_EMAIL`    | Test account email                |
| `STAGING_PASSWORD` | Test account password             |

### 3. Write test steps in a PR

Include a fenced `autoqa` block in your pull request description:

```autoqa
flow_name: user_login_success
tier: A
area: auth
```

1. Navigate to the login page
2. Enter valid email address
3. Enter valid password
4. Click the login button
5. Verify the dashboard appears

Open the PR and AutoQA takes care of the rest.

---

## PR Format Reference

The `autoqa` code block defines metadata. Numbered steps below it describe the test scenario.

```autoqa
flow_name: <snake_case_test_name>
tier: <A|B|C>
area: <feature_area>
```

1. First test step in plain English
2. Second test step
3. ...

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB REPOS

Docs source

GITHUB REPOS

Editorial quality

ready

AISquare Studio AutoQA: setting up QA testing agents using Playwright and CrewAI.

Tags: ai, automation, testing, playwright, github-action, crewai, openai, qa, test-generation, multi-agent

**AI-powered GitHub Action that converts natural language test descriptions in pull request bodies into fully automated Playwright tests.** Write what you want to test in plain English — AutoQA generates, executes, and commits production-ready test code using CrewAI multi-agent orchestration.

Full README

AISquare Studio AutoQA

GitHub Action · Python 3.11+ · Playwright · CrewAI

ai automation testing playwright github-action crewai openai qa test-generation multi-agent

AI-powered GitHub Action that converts natural language test descriptions in pull request bodies into fully automated Playwright tests. Write what you want to test in plain English — AutoQA generates, executes, and commits production-ready test code using CrewAI multi-agent orchestration and OpenAI GPT-4.


Features

  • AI-Powered Test Generation — Natural language steps become executable Playwright Python tests
  • Active Execution Mode — Iterative step-by-step generation with real-time browser context
  • Smart Selector Discovery — Auto-discovers optimal selectors from live pages via DOMInspectorTool
  • Intelligent Retry — Automatic error recovery with alternative selectors and failure analysis
  • AST-Based Security Validation — Prevents unsafe code patterns before execution
  • Cross-Repository Architecture — Deploys as a GitHub Action, runs in any repository
  • Comprehensive Reporting — PR comments with screenshots, HTML reports, and JSON artifacts
  • ETag-Based Idempotency — Prevents duplicate test generation for unchanged PR descriptions
  • Multi-Tier Test Organization — Categorize tests into A/B/C tiers by criticality
  • Caching Strategy — Pip and Playwright browser caching for fast CI runs

How It Works

PR Description          AutoQA Action              Your Repository
┌──────────────┐     ┌──────────────────┐     ┌──────────────────┐
│  ```autoqa   │     │ 1. Parse PR body │     │ tests/autoqa/    │
│  flow: login │────▶│ 2. Generate code │────▶│   A/auth/        │
│  tier: A     │     │ 3. Validate AST  │     │     test_login.py│
│  area: auth  │     │ 4. Execute tests │     └──────────────────┘
│  ```         │     │ 5. Commit on pass│
│              │     │ 6. Comment on PR │
│  1. Go to /  │     └──────────────────┘
│  2. Login    │
│  3. Verify   │
└──────────────┘
  1. A developer writes numbered test steps in the PR description inside a fenced autoqa block
  2. The GitHub Action triggers on PR open/edit/sync events
  3. AutoQA parses the PR body for metadata (flow_name, tier, area) and test steps
  4. CrewAI agents generate Playwright Python test code from the steps
  5. Generated code is validated via AST analysis and executed against your staging environment
  6. On success, the test file is committed to tests/autoqa/{tier}/{area}/test_{flow_name}.py
  7. Results and screenshots are posted as a PR comment
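
The parse step (step 3) and the committed-path layout (step 6) can be sketched with a minimal parser. This is an illustrative assumption, not the project's actual `src/autoqa/parser.py`.

```python
import re

FENCE = "`" * 3  # triple backtick, built here to keep this example self-contained

# Matches the fenced autoqa block and captures its metadata lines.
AUTOQA_BLOCK = re.compile(FENCE + r"autoqa\s*\n(.*?)" + FENCE, re.DOTALL)
# Matches numbered plain-English steps like "1. Navigate to the login page".
STEP_LINE = re.compile(r"^\s*\d+\.\s+(.+)$", re.MULTILINE)


def parse_pr_body(body: str) -> dict:
    """Extract flow metadata and numbered steps from a PR description."""
    match = AUTOQA_BLOCK.search(body)
    if match is None:
        raise ValueError("no fenced autoqa block found in PR body")
    meta: dict = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    # Numbered steps follow the metadata block.
    meta["steps"] = STEP_LINE.findall(body[match.end():])
    # Target path follows the documented tests/autoqa/{tier}/{area}/ layout.
    meta["test_path"] = (
        f"tests/autoqa/{meta['tier']}/{meta['area']}/test_{meta['flow_name']}.py"
    )
    return meta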

Quick Start

1. Add the workflow

Create .github/workflows/autoqa.yml in your repository:

name: AutoQA Test Generation

on:
  pull_request:
    types: [opened, synchronize, edited]

jobs:
  autoqa:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Generate and Execute Tests
        uses: AISquare-Studio/AISquare-Studio-QA@main
        with:
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          staging-url: ${{ secrets.STAGING_URL }}
          staging-email: ${{ secrets.STAGING_EMAIL }}
          staging-password: ${{ secrets.STAGING_PASSWORD }}

2. Configure secrets

Add the following secrets in your repository's Settings → Secrets and variables → Actions:

| Secret             | Description                   |
| ------------------ | ----------------------------- |
| OPENAI_API_KEY     | OpenAI API key (GPT-4 access) |
| STAGING_URL        | Staging environment login URL |
| STAGING_EMAIL      | Test account email            |
| STAGING_PASSWORD   | Test account password         |

3. Write test steps in a PR

Include a fenced autoqa block in your pull request description:

```autoqa
flow_name: user_login_success
tier: A
area: auth
```

1. Navigate to the login page
2. Enter valid email address
3. Enter valid password
4. Click the login button
5. Verify the dashboard appears

Open the PR and AutoQA takes care of the rest.


PR Format Reference

The autoqa code block defines metadata. Numbered steps below it describe the test scenario.

```autoqa
flow_name: <snake_case_test_name>
tier: <A|B|C>
area: <feature_area>
```

1. First test step in plain English
2. Second test step
3. ...

| Field     | Required | Description                                             |
| --------- | -------- | ------------------------------------------------------- |
| flow_name | Yes      | Snake-case identifier used for the generated file name  |
| tier      | Yes      | A (critical), B (important), or C (nice-to-have)        |
| area      | Yes      | Feature area used as subdirectory (e.g., auth, billing) |


Configuration Reference

Action Inputs

| Input            | Required | Default          | Description                                                            |
| ---------------- | -------- | ---------------- | ---------------------------------------------------------------------- |
| openai-api-key   | Yes      | —                | OpenAI API key                                                         |
| openai-model     | No       | openai/gpt-4.1   | OpenAI model for test generation (e.g., openai/gpt-4.1, openai/gpt-4o) |
| staging-url      | Yes      | —                | Staging environment URL                                                |
| qa-github-token  | No       | github.token     | GitHub token (for private repo access)                                 |
| staging-email    | No       | test@example.com | Test account email                                                     |
| staging-password | No       | —                | Test account password                                                  |
| target-repo-path | No       | .                | Path to the target repository                                          |
| git-user-name    | No       | AutoQA Bot       | Git user name for test commits                                         |
| git-user-email   | No       | —                | Git user email for test commits                                        |
| pr-body          | No       | (auto-detected)  | PR description text                                                    |
| test-directory   | No       | tests/autoqa     | Base directory for generated tests                                     |
| create-pr        | No       | false            | Create a PR for tests instead of pushing directly                      |
| execution-mode   | No       | generate         | Execution mode: generate, suite, or all                                |

Action Outputs

| Output              | Description                               |
| ------------------- | ----------------------------------------- |
| test_generated      | Whether a test was generated (true/false) |
| test_file_path      | Path to the generated test file           |
| test_results        | JSON object with execution results        |
| generation_metadata | JSON object with generation metadata      |
| screenshot_path     | Path to captured screenshots              |
| etag                | Idempotency hash of the PR description    |
| flow_name           | Parsed flow name                          |
| tier                | Parsed tier                               |
| area                | Parsed area                               |
| error               | Error message (if failed)                 |

Execution Modes

| Mode     | Behavior                                                         |
| -------- | ---------------------------------------------------------------- |
| generate | Parse PR, generate a new test, execute it, and commit on success |
| suite    | Run the existing test suite only (regression testing)            |
| all      | Generate a new test and run the full existing suite              |


Project Structure

AISquare-Studio-QA/
├── action.yml                          # GitHub Action definition
├── qa_runner.py                        # Local test runner entry point
├── requirements.txt                    # Python dependencies
├── pyproject.toml                      # Python project configuration
├── pytest.ini                          # Pytest configuration
├── env.template                        # Environment variables template
├── .github/
│   ├── copilot-instructions.md         # Copilot custom instructions (AI agent reference)
│   └── workflows/                      # CI/CD workflows (lint, test, release)
├── config/
│   ├── autoqa_config.yaml              # AutoQA policy and settings
│   └── test_data.yaml                  # Test scenarios and selectors
├── src/
│   ├── agents/
│   │   ├── planner_agent.py            # Generates Playwright code via CrewAI
│   │   ├── executor_agent.py           # Validates and executes code (AST safety)
│   │   └── step_executor_agent.py      # Active execution step agent
│   ├── autoqa/
│   │   ├── action_runner.py            # Main GitHub Action orchestrator
│   │   ├── parser.py                   # PR body metadata parser
│   │   ├── action_reporter.py          # PR comment generator
│   │   └── cross_repo_manager.py       # Test file commits across repos
│   ├── crews/
│   │   └── qa_crew.py                  # CrewAI agent orchestration
│   ├── execution/
│   │   ├── iterative_orchestrator.py   # Step-by-step execution coordinator
│   │   ├── execution_context.py        # State tracking between steps
│   │   └── retry_handler.py            # Failure analysis and retry logic
│   ├── tools/
│   │   ├── playwright_executor.py      # Test code execution engine
│   │   └── dom_inspector.py            # Live page selector discovery
│   ├── templates/
│   │   └── test_execution_template.py  # Execution template
│   └── utils/
│       ├── logger.py                   # GitHub Actions-aware logging
│       ├── github_comment_client.py    # GitHub API client
│       ├── comment_builder.py          # Markdown comment builder
│       ├── screenshot_handler.py       # Screenshot capture
│       └── screenshot_embed_manager.py # Screenshot embedding
├── tests/                              # Pytest test suites
├── docs/                               # Documentation
├── examples/                           # Example workflows and configs
├── reports/                            # Generated test artifacts
└── scripts/                            # Utility scripts

Local Development

Prerequisites

  • Python 3.11+
  • An OpenAI API key with GPT-4 access

Setup

# Clone the repository
git clone https://github.com/AISquare-Studio/AISquare-Studio-QA.git
cd AISquare-Studio-QA

# Install dependencies
pip install -r requirements.txt
playwright install --with-deps chromium

# Configure environment
cp env.template .env
# Edit .env with your staging URL, credentials, and OpenAI API key

Running locally

# Run the test runner
python qa_runner.py

# Run with visible browser for debugging
HEADLESS_MODE=false python qa_runner.py

# Show detailed help
python qa_runner.py --help-detailed

Running the test suite

pytest tests/ -v

Architecture

AutoQA uses a multi-agent architecture powered by CrewAI:

  • Planner Agent — Converts natural language steps into Playwright Python code
  • Executor Agent — Validates generated code via AST analysis and runs it in a sandboxed browser
  • Step Executor Agent — Handles Active Execution Mode, processing one step at a time with live browser context

The Iterative Orchestrator coordinates step-by-step execution, maintaining state via ExecutionContext and handling failures through RetryHandler.
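
That coordination loop can be sketched as below. Names and signatures here are illustrative assumptions, not the actual API of `iterative_orchestrator.py` or `retry_handler.py`.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ExecutionContext:
    """Hypothetical state carried between steps."""
    completed: list[str] = field(default_factory=list)
    failures: list[tuple[str, str]] = field(default_factory=list)


def run_steps(
    steps: list[str],
    execute: Callable[[str], None],
    max_retries: int = 2,
) -> ExecutionContext:
    """Execute steps one at a time, retrying each failed step up to max_retries."""
    ctx = ExecutionContext()
    for step in steps:
        for attempt in range(1 + max_retries):
            try:
                execute(step)
                ctx.completed.append(step)
                break
            except Exception as exc:
                # A real retry handler would analyze the failure and try
                # alternative selectors here before giving up.
                if attempt == max_retries:
                    ctx.failures.append((step, str(exc)))
    return ctx
```

Injecting `execute` as a callable keeps the loop testable without a live browser; in the real system that callable would drive Playwright against the staging environment.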

For a detailed architecture walkthrough, see docs/ARCHITECTURE.md.


Security

All AI-generated code is validated before execution:

  • AST-based validation — Blocks dangerous constructs (eval, exec, open, subprocess, file I/O)
  • Restricted imports — Only playwright.sync_api, time, datetime, and re are permitted
  • Sandboxed execution — Tests run in isolated Playwright browser contexts
  • Secret redaction — Sensitive values are masked in logs and reports
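
A minimal version of the AST check described above looks like this. It is a hedged sketch: the project's actual validator in `executor_agent.py` is likely stricter and covers more node types.

```python
import ast

# Constructs the README says are blocked, plus the allowed import set.
FORBIDDEN_CALLS = {"eval", "exec", "open", "__import__"}
ALLOWED_IMPORTS = {"playwright.sync_api", "time", "datetime", "re"}


def validate(source: str) -> list[str]:
    """Return a list of violations found in generated test code."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                violations.append(f"forbidden call: {node.func.id}")
        elif isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name not in ALLOWED_IMPORTS:
                    violations.append(f"forbidden import: {alias.name}")
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "") not in ALLOWED_IMPORTS:
                violations.append(f"forbidden import: {node.module}")
    return violations
```

Generated code that produces any violation would be rejected before it ever reaches the Playwright executor.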

See the Security Model section in the architecture documentation for details.


Performance and Caching

AutoQA caches dependencies to minimize CI run times:

| Layer               | Cache Key                | Typical Size |
| ------------------- | ------------------------ | ------------ |
| Python pip packages | Hash of requirements.txt | ~200 MB      |
| Playwright browsers | Playwright version       | ~100 MB      |
| Action repository   | Commit SHA               | ~5 MB        |

| Scenario | Approximate Time |
| -------- | ---------------- |
| Cold run | 3–4 minutes      |
| Warm run | 45–60 seconds    |

Caches automatically invalidate when requirements.txt changes.
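
The "hash of requirements.txt" cache key corresponds to the usual `actions/cache` pattern built on `hashFiles('requirements.txt')`. A sketch of the equivalent computation (the key format is an assumption for illustration):

```python
import hashlib


def pip_cache_key(requirements_text: str, runner_os: str = "Linux") -> str:
    """Derive a cache key that changes whenever requirements.txt changes."""
    digest = hashlib.sha256(requirements_text.encode("utf-8")).hexdigest()
    return f"{runner_os}-pip-{digest}"
```

Any edit to the pinned dependency list yields a new digest, so the stale pip cache is bypassed automatically on the next CI run.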


Code Quality and Linting

The project enforces consistent style via automated tooling:

| Tool   | Purpose          | Configuration            |
| ------ | ---------------- | ------------------------ |
| black  | Code formatting  | Line length: 100         |
| isort  | Import sorting   | Black-compatible profile |
| flake8 | PEP 8 compliance | Standard rules           |

The lint.yml workflow runs on every push and pull request, auto-fixing formatting issues.

# Run locally
black . --line-length=100
isort . --profile=black --line-length=100
flake8 .

Roadmap

See docs/AUTOQA_ENHANCEMENT_ROADMAP.md for the full enhancement roadmap, including 16 feature proposals inspired by Lucent AI and Meticulous AI covering AI-generated test criteria from code diffs, visual regression detection, self-healing tests, automatic bug reports, and more.

For open-source readiness status, see docs/OPEN_SOURCE_ROADMAP.md.


Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes with tests
  4. Ensure linting passes (black, isort, flake8)
  5. Submit a pull request

Please review the CODE_OF_CONDUCT.md before contributing.

AI agent sessions: This repository includes a .github/copilot-instructions.md file that GitHub Copilot reads automatically. It contains architecture reference, version tables, and a mandatory session checklist (update CHANGELOG, README, examples, etc.).


License

This project is licensed under the Apache License 2.0.

Copyright 2025 AISquare Studio


Contributors

<!-- ALL-CONTRIBUTORS-BOARD -->

| Avatar | Name       | Role        |
| ------ | ---------- | ----------- |
| 🤖     | AutoQA Bot | Automation  |
| 👩‍💻     | Zahwah     | Contributor |
| 👩‍💼     | Rabia      | Maintainer  |

<!-- END ALL-CONTRIBUTORS-BOARD -->

Built by AISquare Studio

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB REPOS

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples
curl -s "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/trust"
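
A hedged Python client sketch for calling these endpoints, following the retry policy published in this dossier's Invocation Guide (3 attempts, 500/1500/3500 ms backoff, retry on HTTP 429/503). The transport is injected as a callable so the retry logic can be exercised without network access; the function names here are illustrative.

```python
import time
from typing import Callable

BACKOFF_MS = [500, 1500, 3500]   # from the dossier's published retryPolicy
RETRYABLE_STATUSES = {429, 503}  # HTTP_429 / HTTP_503


def fetch_with_retry(
    fetch: Callable[[str], tuple[int, str]],
    url: str,
    max_attempts: int = 3,
    sleep: Callable[[float], None] = time.sleep,
) -> str:
    """Call fetch(url) -> (status, body), retrying retryable statuses with backoff."""
    for attempt in range(max_attempts):
        status, body = fetch(url)
        if status not in RETRYABLE_STATUSES:
            return body  # success, or a non-retryable error body
        if attempt < max_attempts - 1:
            sleep(BACKOFF_MS[attempt] / 1000.0)
    raise RuntimeError(f"gave up on {url} after {max_attempts} attempts")
```

In production, `fetch` would wrap an HTTP library call against the snapshot, contract, or trust URL; network timeouts would be mapped to a retryable condition as well.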

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLAW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_REPOS",
      "generatedAt": "2026-04-17T02:45:10.847Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLAW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "crewai",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "multi-agent",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLAW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}

Facts JSON

[
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Aisquare Studio",
    "href": "https://github.com/AISquare-Studio/AISquare-Studio-QA",
    "sourceUrl": "https://github.com/AISquare-Studio/AISquare-Studio-QA",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:08.329Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:08.329Z",
    "isPublic": true
  },
  {
    "factKey": "traction",
    "category": "adoption",
    "label": "Adoption signal",
    "value": "160 GitHub stars",
    "href": "https://github.com/AISquare-Studio/AISquare-Studio-QA",
    "sourceUrl": "https://github.com/AISquare-Studio/AISquare-Studio-QA",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T06:04:08.329Z",
    "isPublic": true
  },
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/crewai-aisquare-studio-aisquare-studio-qa/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
