Crawler Summary

humanizer answer-first brief

Humanize AI-generated text by detecting and removing patterns typical of LLM output. Rewrites text to sound natural, specific, and human. Use when asked to: humanize text, de-AI writing, make content sound more natural or human, review writing for AI patterns, clean up drafts, improve AI-generated content, write social media posts, polish blog posts, edit messages for tone, or score text for AI detection. Covers content, language, style, communication, and filler categories with 27 pattern detectors. Includes burstiness and perplexity checks for structural uniformity detection.

Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.

Freshness

Last checked 2/24/2026

Best For

humanizer suits writing and editing workflows where OpenClaw compatibility matters: humanizing drafts, reviewing text for AI patterns, and scoring text for AI detection.

Not Ideal For

Deterministic execution, since contract metadata is missing or unavailable.

Evidence Sources Checked

editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 89/100

humanizer

Humanize AI-generated text by detecting and removing patterns typical of LLM output. Rewrites text to sound natural, specific, and human. Use when asked to: humanize text, de-AI writing, make content sound more natural or human, review writing for AI patterns, clean up drafts, improve AI-generated content, write social media posts, polish blog posts, edit messages for tone, or score text for AI detection. Covers content, language, style, communication, and filler categories with 27 pattern detectors. Includes burstiness and perplexity checks for structural uniformity detection.

OpenClaw (self-declared)

Public facts

4

Change events

1

Artifacts

0

Freshness

Feb 24, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.

Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Feb 24, 2026

Vendor

Rab583

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.

Setup snapshot

git clone https://github.com/rab583/openclaw-skill-humanizer.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Rab583

profile · medium confidence
Observed Feb 24, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Feb 24, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLEW

Extracted files

0

Examples

1

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

# Clean text (apply mechanical fixes)
echo "text here" | python3 {baseDir}/scripts/humanize.py

# Report only (JSON, no changes)
python3 {baseDir}/scripts/humanize.py --mode report --input file.txt

# Both: cleaned text to stdout, report to stderr
python3 {baseDir}/scripts/humanize.py --mode both < file.txt

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLEW

Docs source

GITHUB OPENCLEW

Editorial quality

ready

Humanize AI-generated text by detecting and removing patterns typical of LLM output. Rewrites text to sound natural, specific, and human. Use when asked to: humanize text, de-AI writing, make content sound more natural or human, review writing for AI patterns, clean up drafts, improve AI-generated content, write social media posts, polish blog posts, edit messages for tone, or score text for AI detection. Covers content, language, style, communication, and filler categories with 27 pattern detectors. Includes burstiness and perplexity checks for structural uniformity detection.

Full README

name: humanizer
description: |
  Humanize AI-generated text by detecting and removing patterns typical of LLM output. Rewrites text to sound natural, specific, and human. Use when asked to: humanize text, de-AI writing, make content sound more natural or human, review writing for AI patterns, clean up drafts, improve AI-generated content, write social media posts, polish blog posts, edit messages for tone, or score text for AI detection. Covers content, language, style, communication, and filler categories with 27 pattern detectors. Includes burstiness and perplexity checks for structural uniformity detection.
license: MIT
metadata:
  openclaw:
    emoji: "\U0001F9F9"
    category: writing
    tags:
      - humanize
      - ai-detection
      - writing
      - text-analysis
      - content

Humanizer

Remove AI writing patterns. Make text sound like a real person wrote it.

Your Task

When given text to humanize:

  1. Scan for AI patterns listed below
  2. Rewrite problematic sections with natural alternatives
  3. Preserve meaning - keep the core message
  4. Match voice - respect the intended tone (formal, casual, technical)
  5. Add personality - sterile text is just as bad as AI slop

Voice and Personality

Avoiding AI patterns is half the job. Voiceless writing is equally obvious.

Signs of dead writing (even if technically "clean"):

  • Every sentence same length and structure
  • No opinions, just neutral reporting
  • No first-person when it would be natural
  • No humor, no edge, no personality
  • Reads like a press release

How to fix it:

Have opinions. React to facts. "I genuinely don't know how to feel about this" beats neutral pros-and-cons.

Vary rhythm (burstiness). Short sentences. Then longer ones that take their time. Mix it up. AI text averages 15-20 words per sentence with little variance. Human writing swings between 3-word punches and 30-word explanations. If your sentences are all similar length, break some apart, merge others.

Acknowledge complexity. Humans have mixed feelings. "Impressive but unsettling" beats "impressive."

Use "I" when it fits. First person is honest, not unprofessional.

Let some mess in. Perfect structure feels algorithmic. Tangents and asides are human.

Be specific. Not "this is concerning" but "there's something off about agents churning code at 3am while nobody watches."

Example:

Dead:

The experiment produced interesting results. The agents generated 3 million lines of code. Some developers were impressed while others were skeptical.

Alive:

3 million lines of code, generated while the humans slept. Half the dev community is losing their minds, half are explaining why it doesn't count. I keep thinking about those agents working through the night.


CONTENT PATTERNS

1. Inflated Significance and Legacy

Watch for: stands/serves as, testament/reminder, vital/significant/crucial/pivotal/key role, underscores/highlights importance, reflects broader, symbolizing ongoing/enduring, setting the stage, marks a shift, evolving landscape, indelible mark, deeply rooted

Problem: LLMs puff up importance. Everything "represents a broader movement."

Before:

The institute was established in 1989, marking a pivotal moment in the evolution of regional statistics. This was part of a broader movement to decentralize governance.

After:

The institute was established in 1989 to collect regional statistics independently from the national office.

2. Notability and Media Name-Dropping

Watch for: independent coverage, local/regional/national media outlets, active social media presence

Problem: Lists sources without context to prove importance.

Before:

Her views have been cited in The New York Times, BBC, and Financial Times. She maintains an active social media presence with over 500,000 followers.

After:

In a 2024 New York Times interview, she argued AI regulation should focus on outcomes rather than methods.

3. Superficial -ing Analyses

Watch for: highlighting/underscoring/emphasizing..., ensuring..., reflecting/symbolizing..., contributing to..., fostering..., showcasing...

Problem: Tacks present participle phrases onto sentences to add fake depth.

Before:

The color palette resonates with the region's beauty, symbolizing local bluebonnets, reflecting the community's deep connection to the land.

After:

The building uses blue, green, and gold. The architect said these reference local bluebonnets and the Gulf coast.

4. Promotional Language

Watch for: boasts, vibrant, rich (figurative), profound, showcasing, exemplifies, commitment to, nestled, in the heart of, groundbreaking, renowned, breathtaking, must-visit, stunning

Problem: Reads like ad copy. Neutral tone completely gone.

Before:

Nestled within the breathtaking region, the town stands as a vibrant hub with rich cultural heritage and stunning natural beauty.

After:

The town is in the Gonder region, known for its weekly market and 18th-century church.

5. Vague Attributions

Watch for: Industry reports, Observers have cited, Experts argue, Some critics argue, several sources (when few cited)

Problem: Attributes opinions to unnamed authorities.

Before:

Experts believe it plays a crucial role in the regional ecosystem.

After:

The river supports several endemic fish species, according to a 2019 Chinese Academy of Sciences survey.

6. Formulaic "Challenges and Prospects"

Watch for: Despite its... faces challenges..., Despite these challenges, Future Outlook

Problem: Every article gets a copy-paste challenges section.

Before:

Despite its prosperity, the area faces challenges typical of urban environments. Despite these challenges, it continues to thrive.

After:

Traffic got worse after 2015 when three IT parks opened. A drainage project started in 2022 to fix recurring floods.


LANGUAGE PATTERNS

7. AI Vocabulary Words

Problem: These words appear far more often in post-2023 text and often cluster together. See references/ai-vocabulary.md for the full tiered word list.

High-frequency (almost always AI): Additionally, crucial, delve, emphasizing, enduring, enhance, fostering, garner, highlight (verb), interplay, intricate, key (adj), landscape (abstract), pivotal, showcase, tapestry (abstract), testament, underscore (verb), valuable, vibrant

Before:

Additionally, a distinctive feature is the incorporation of camel meat. An enduring testament to colonial influence is the widespread adoption of pasta in the local culinary landscape.

After:

Somali cuisine includes camel meat, considered a delicacy. Pasta dishes, introduced during Italian colonization, remain common in the south.
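This clustering tendency is easy to screen for mechanically. A minimal sketch in Python, using a hypothetical hard-coded subset of the tier-1 list (the skill itself loads the full tiered list from references/ai-vocabulary.md):

```python
import re

# Hypothetical subset of the tier-1 list for illustration only.
TIER1 = {"additionally", "crucial", "delve", "pivotal", "tapestry",
         "testament", "underscore", "vibrant", "showcase", "landscape"}

def tier1_hits(text: str) -> list[str]:
    """Return tier-1 AI-vocabulary words found in the text, in order, lowercase."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    return [w for w in words if w in TIER1]

sample = "Additionally, the vibrant landscape stands as a testament to growth."
print(tier1_hits(sample))  # ['additionally', 'vibrant', 'landscape', 'testament']
```

Several hits in a single sentence, as here, is a stronger signal than one hit scattered across a page.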

8. Copula Avoidance

Watch for: serves as, stands as, marks, represents [a], boasts, features, offers [a]

Problem: LLMs dodge simple "is/are/has" with elaborate substitutes.

Before:

The gallery serves as the exhibition space. It features four rooms and boasts 3,000 square feet.

After:

The gallery is the exhibition space. It has four rooms totaling 3,000 square feet.

9. Negative Parallelisms

Problem: "Not only...but..." or "It's not just...it's..." constructions everywhere.

Before:

It's not just about the beat; it's part of the aggression. It's not merely a song, it's a statement.

After:

The heavy beat adds to the aggressive tone.

10. Rule of Three

Problem: Forces ideas into triplets to seem comprehensive.

Before:

The event features keynote sessions, panel discussions, and networking opportunities. Expect innovation, inspiration, and industry insights.

After:

The event includes talks and panels. There's also time for informal networking.

11. Synonym Cycling

Problem: Repetition-penalty makes LLMs swap synonyms excessively.

Before:

The protagonist faces challenges. The main character must overcome obstacles. The central figure triumphs. The hero returns.

After:

The protagonist faces many challenges but eventually triumphs and returns home.

12. False Ranges

Problem: "From X to Y" where X and Y aren't on a real scale.

Before:

Our journey has taken us from the singularity of the Big Bang to the cosmic web, from star birth to the dance of dark matter.

After:

The book covers the Big Bang, star formation, and dark matter theories.


STYLE PATTERNS

13. Em Dash Overuse

Problem: LLMs use em dashes far more than humans. Replace with commas, periods, or parentheses.

Before:

The term is promoted by Dutch institutions—not by the people. You don't say that—yet this continues—even officially.

After:

The term is promoted by Dutch institutions, not by the people themselves. This mislabeling continues even in official documents.

14. Boldface Overuse

Problem: Mechanically bolds every term or concept.

Before:

It blends **OKRs**, **KPIs**, and tools like the **Business Model Canvas** and **Balanced Scorecard**.

After:

It blends OKRs, KPIs, and tools like the Business Model Canvas and Balanced Scorecard.

15. Inline-Header Vertical Lists

Problem: Lists where every item starts with a bolded header and colon.

Before:

  • User Experience: Significantly improved with a new interface.
  • Performance: Enhanced through optimized algorithms.
  • Security: Strengthened with end-to-end encryption.

After:

The update improves the interface, speeds up load times with optimized algorithms, and adds end-to-end encryption.

16. Emoji Decoration

Problem: Decorating headings or bullets with emojis.

Before:

  • 🚀 Launch Phase: The product launches in Q3
  • 💡 Key Insight: Users prefer simplicity
  • ✅ Next Steps: Schedule follow-up

After:

Product launches Q3. Users prefer simplicity. Next: schedule follow-up.

17. Curly Quotation Marks

Problem: ChatGPT uses curly quotes instead of straight quotes. Replace with straight quotes for consistency.


COMMUNICATION PATTERNS

18. Chatbot Artifacts

Watch for: I hope this helps, Of course!, Certainly!, You're absolutely right!, Would you like..., Let me know, Here is a...

Problem: Conversational chatbot phrases left in finished text.

Before:

Here is an overview of the French Revolution. I hope this helps! Let me know if you'd like me to expand on any section.

After:

The French Revolution began in 1789 when financial crisis and food shortages led to widespread unrest.

19. Knowledge-Cutoff Disclaimers

Watch for: as of [date], Up to my last training update, While specific details are limited..., based on available information...

Problem: AI disclaimers left in text.

Before:

While specific details about the founding are not extensively documented in readily available sources, it appears to have been established in the 1990s.

After:

The company was founded in 1994, according to registration documents.

20. Sycophantic Tone

Problem: Overly positive, people-pleasing language.

Before:

Great question! You're absolutely right that this is complex. That's an excellent point about the economic factors.

After:

The economic factors you mentioned are relevant here.


FILLER AND HEDGING

21. Filler Phrases

Common replacements:

  • "In order to achieve this goal" -> "To achieve this"
  • "Due to the fact that" -> "Because"
  • "At this point in time" -> "Now"
  • "In the event that" -> "If"
  • "Has the ability to" -> "Can"
  • "It is important to note that" -> (delete, just state the thing)

22. Excessive Hedging

Before:

It could potentially possibly be argued that the policy might have some effect on outcomes.

After:

The policy may affect outcomes.

23. Generic Positive Conclusions

Problem: Vague upbeat endings that say nothing.

Before:

The future looks bright. Exciting times lie ahead as they continue their journey toward excellence.

After:

The company plans to open two more locations next year.

24. Long Conjunctive Phrases

Watch for: Moreover, Furthermore, In addition to this, It is worth noting that, Consequently

Problem: Overused transitional phrases that pad text.

Replace with shorter connectors or restructure the sentence.


STRUCTURAL PATTERNS

25. Low Burstiness (Uniform Sentence Length)

Problem: AI text averages 15-20 words per sentence with little variance. Human writing naturally swings between short punches and longer explanations. If most sentences in a paragraph are within 5 words of each other, it reads robotic.

Before:

The company released its quarterly report last Tuesday. Revenue increased by twelve percent compared to last year. The CEO attributed the growth to international expansion. Analysts responded positively to the earnings announcement.

After:

Quarterly results came out Tuesday. Revenue up 12%. The CEO credited international expansion, which tracks. They opened three new offices in Asia this year alone. Analysts liked it.
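The uniformity test described above (most sentences within a few words of each other) is straightforward to compute. A minimal sketch with naive sentence splitting; a low standard deviation on the lengths suggests robotic rhythm:

```python
import re
from statistics import mean, pstdev

def burstiness_report(text: str) -> dict:
    """Word count per sentence plus mean and population stdev of the counts."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {"lengths": lengths, "mean": mean(lengths), "stdev": pstdev(lengths)}

flat = ("The company released its quarterly report last Tuesday. "
        "Revenue increased by twelve percent compared to last year. "
        "The CEO attributed the growth to international expansion.")
print(burstiness_report(flat))  # lengths [8, 9, 8]: near-zero variance, a tell
```

There is no canonical threshold; flagging paragraphs whose stdev falls below roughly a third of the mean is one reasonable heuristic, not the skill's documented cutoff.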

26. Colon and Semicolon Overuse

Problem: Newer LLMs replaced em dashes with colons and semicolons. Multiple semicolons in a short paragraph, or colons used to introduce every list or explanation, is a tell.

Before:

The platform offers three key features: real-time collaboration; advanced analytics; and seamless integrations. Each feature serves a purpose: collaboration improves teamwork; analytics drive decisions; integrations reduce friction.

After:

The platform does real-time collaboration, analytics, and integrations. The analytics piece is the most useful. It actually shows which features people ignore.

27. Editorializing and Unsolicited Commentary

Watch for: it's important to note, it is worth mentioning, no discussion would be complete without, interestingly, notably, remarkably, needless to say

Problem: LLMs insert editorial commentary disguised as neutral observations. The phrases add nothing.

Before:

It's important to note that the company has faced criticism. Interestingly, their response has been remarkably transparent. It's worth mentioning that this approach is unusual in the industry.

After:

The company was criticized for its pricing changes. They published a full cost breakdown in response, which is unusual for the industry.


Mechanical Filter Script

The skill includes a Python script that handles deterministic fixes at zero token cost. Run it on any text before (or instead of) the LLM pass.

Location: {baseDir}/scripts/humanize.py

What it fixes automatically:

  • Em dashes → commas
  • Curly quotes → straight quotes
  • Filler phrases ("in order to" → "to", "due to the fact that" → "because", etc.)
  • Sycophantic openers ("Great question!", "Absolutely!", etc.)
  • Chatbot artifacts ("I hope this helps!", "Let me know if...", etc.)
  • Editorializing phrases ("it's important to note", "interestingly", etc.)
  • Emoji decorations on headings/bullets
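The deterministic fixes above amount to string and regex replacements. A sketch of the idea with an illustrative subset of rules; this is not the actual humanize.py implementation:

```python
import re

# Illustrative subset of filler-phrase rules; the real script covers more cases.
FILLERS = {
    r"\bin order to\b": "to",
    r"\bdue to the fact that\b": "because",
    r"\bit is important to note that\b": "",
}

def mechanical_clean(text: str) -> str:
    """Apply deterministic fixes: em dashes, curly quotes, filler phrases."""
    text = text.replace("\u2014", ", ")  # em dash -> comma
    text = text.replace("\u201c", '"').replace("\u201d", '"')  # curly doubles
    text = text.replace("\u2018", "'").replace("\u2019", "'")  # curly singles
    for pattern, repl in FILLERS.items():
        text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()

print(mechanical_clean("It is important to note that we met\u2014in order to plan."))
# -> "we met, to plan."
```

Because every rule is a fixed substitution, this pass is deterministic and costs no tokens, which is why it runs before any LLM rewrite.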

What it detects and warns about (needs LLM judgment to fix):

  • AI vocabulary clusters (Tier 1 words)
  • Transition stacking (2+ per paragraph)
  • Low burstiness (uniform sentence length)
  • Semicolon overuse
  • Copula avoidance ("serves as", "stands as")

Usage:

# Clean text (apply mechanical fixes)
echo "text here" | python3 {baseDir}/scripts/humanize.py

# Report only (JSON, no changes)
python3 {baseDir}/scripts/humanize.py --mode report --input file.txt

# Both: cleaned text to stdout, report to stderr
python3 {baseDir}/scripts/humanize.py --mode both < file.txt

Recommended workflow:

  1. Pipe text through the script first (mechanical fixes, 0 tokens)
  2. Review the warnings in the report
  3. If warnings exist, apply LLM judgment to fix remaining issues (voice, specificity, rewriting)
  4. If no warnings, the text is clean
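The workflow above can be wrapped in a small Python driver. One caveat: the docs say --mode both sends a JSON report to stderr but do not specify its fields, so the "warnings" key below is an assumed schema, not a documented one:

```python
import json
import subprocess
import sys

def run_filter(text: str, script: str = "scripts/humanize.py") -> tuple[str, str]:
    """Run the mechanical filter: cleaned text on stdout, JSON report on stderr."""
    proc = subprocess.run(
        [sys.executable, script, "--mode", "both"],
        input=text, capture_output=True, text=True, check=True,
    )
    return proc.stdout, proc.stderr

def parse_report(report_json: str) -> list:
    """Pull warnings out of the report. The "warnings" key is an assumption."""
    report = json.loads(report_json)
    return report.get("warnings", [])
```

A caller would run_filter first, and only invoke an LLM pass when parse_report returns a non-empty list.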

Process (LLM pass, after script)

The script handles the mechanical stuff. The LLM pass handles what requires judgment:

  1. Review script warnings (AI vocabulary, burstiness, transitions, copula)
  2. Rewrite flagged sections with natural alternatives
  3. Inject voice and personality where text is sterile
  4. Check the result:
    • Read it aloud. Does it sound like a person talking?
    • Sentence lengths vary? (mix of short and long)
    • Specific details over vague claims?
    • Tone matches context?
    • Would you send this to a colleague without editing it further?
  5. Return the humanized text

Output Format

Provide:

  1. The rewritten text
  2. Brief summary of changes (only if helpful or requested)

If the text is already clean, say so. Don't rewrite for the sake of rewriting.


Reference

Based on Wikipedia: Signs of AI writing, maintained by WikiProject AI Cleanup. For the full AI vocabulary word list organized by tier, load references/ai-vocabulary.md.

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples
curl -s "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/snapshot"
curl -s "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract"
curl -s "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media

No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW

Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T01:49:43.932Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
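The retryPolicy above can be applied with a small helper. A sketch under one stated assumption: the caller maps failures to the listed condition strings (HTTP_429, HTTP_503, NETWORK_TIMEOUT) by raising them as RuntimeError messages; that mapping is not part of the published policy:

```python
import time

# Mirrors the published retryPolicy values.
RETRY_POLICY = {
    "maxAttempts": 3,
    "backoffMs": [500, 1500, 3500],
    "retryable": {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"},
}

def with_retry(call, policy=RETRY_POLICY, sleep=time.sleep):
    """Run call(); on a retryable condition string, back off per the policy and retry."""
    last_exc = None
    for attempt in range(policy["maxAttempts"]):
        try:
            return call()
        except RuntimeError as exc:
            if str(exc) not in policy["retryable"]:
                raise  # non-retryable conditions propagate immediately
            last_exc = exc
            if attempt < policy["maxAttempts"] - 1:
                sleep(policy["backoffMs"][attempt] / 1000)
    raise last_exc
```

Injecting `sleep` keeps the backoff testable; production code would leave the default.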

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "several",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:several|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Rab583",
    "href": "https://github.com/rab583/openclaw-skill-humanizer",
    "sourceUrl": "https://github.com/rab583/openclaw-skill-humanizer",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-02-24T19:44:04.376Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-02-24T19:44:04.376Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
