Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Humanize AI-generated text by detecting and removing patterns typical of LLM output. Rewrites text to sound natural, specific, and human. Use when asked to: humanize text, de-AI writing, make content sound more natural or human, review writing for AI patterns, clean up drafts, improve AI-generated content, write social media posts, polish blog posts, edit messages for tone, or score text for AI detection. Covers content, language, style, communication, and filler categories with 27 pattern detectors. Includes burstiness and perplexity checks for structural uniformity detection. Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.
Freshness
Last checked 2/24/2026
Best For
humanizer is best suited to workflows where OpenClaw compatibility matters.
Not Ideal For
No capability contract has been published, so deterministic execution cannot be verified.
Evidence Sources Checked
editorial-content, GITHUB OPENCLAW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Feb 24, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 24, 2026
Vendor
Rab583
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 2/24/2026.
Setup snapshot
git clone https://github.com/rab583/openclaw-skill-humanizer.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Rab583
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
1
Snippets
0
Languages
typescript
Parameters
bash
# Clean text (apply mechanical fixes)
echo "text here" | python3 {baseDir}/scripts/humanize.py
# Report only (JSON, no changes)
python3 {baseDir}/scripts/humanize.py --mode report --input file.txt
# Both: cleaned text to stdout, report to stderr
python3 {baseDir}/scripts/humanize.py --mode both < file.txt
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLAW
Editorial quality
ready
Humanize AI-generated text by detecting and removing patterns typical of LLM output. Rewrites text to sound natural, specific, and human. Use when asked to: humanize text, de-AI writing, make content sound more natural or human, review writing for AI patterns, clean up drafts, improve AI-generated content, write social media posts, polish blog posts, edit messages for tone, or score text for AI detection. Covers content, language, style, communication, and filler categories with 27 pattern detectors. Includes burstiness and perplexity checks for structural uniformity detection.
Remove AI writing patterns. Make text sound like a real person wrote it.
When given text to humanize:
Avoiding AI patterns is half the job. Voiceless writing is equally obvious.
Have opinions. React to facts. "I genuinely don't know how to feel about this" beats neutral pros-and-cons.
Vary rhythm (burstiness). Short sentences. Then longer ones that take their time. Mix it up. AI text averages 15-20 words per sentence with little variance. Human writing swings between 3-word punches and 30-word explanations. If your sentences are all similar length, break some apart, merge others.
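The variance check described above can be sketched mechanically. This is a hypothetical helper, not the skill's actual script; it splits on sentence-ending punctuation and reports the standard deviation of sentence length in words, where a low value signals the uniform rhythm typical of LLM output:

```python
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    # Split on sentence-ending punctuation; crude, but adequate for a tell check.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    # Standard deviation of sentence length in words.
    # Near-zero values (uniform lengths) are a common LLM tell.
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)
```

A paragraph of same-length sentences scores near zero; mixing 3-word punches with 30-word explanations pushes the score up.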
Acknowledge complexity. Humans have mixed feelings. "Impressive but unsettling" beats "impressive."
Use "I" when it fits. First person is honest, not unprofessional.
Let some mess in. Perfect structure feels algorithmic. Tangents and asides are human.
Be specific. Not "this is concerning" but "there's something off about agents churning code at 3am while nobody watches."
Dead:
The experiment produced interesting results. The agents generated 3 million lines of code. Some developers were impressed while others were skeptical.
Alive:
3 million lines of code, generated while the humans slept. Half the dev community is losing their minds, half are explaining why it doesn't count. I keep thinking about those agents working through the night.
Watch for: stands/serves as, testament/reminder, vital/significant/crucial/pivotal/key role, underscores/highlights importance, reflects broader, symbolizing ongoing/enduring, setting the stage, marks a shift, evolving landscape, indelible mark, deeply rooted
Problem: LLMs puff up importance. Everything "represents a broader movement."
Before:
The institute was established in 1989, marking a pivotal moment in the evolution of regional statistics. This was part of a broader movement to decentralize governance.
After:
The institute was established in 1989 to collect regional statistics independently from the national office.
Watch for: independent coverage, local/regional/national media outlets, active social media presence
Problem: Lists sources without context to prove importance.
Before:
Her views have been cited in The New York Times, BBC, and Financial Times. She maintains an active social media presence with over 500,000 followers.
After:
In a 2024 New York Times interview, she argued AI regulation should focus on outcomes rather than methods.
Watch for: highlighting/underscoring/emphasizing..., ensuring..., reflecting/symbolizing..., contributing to..., fostering..., showcasing...
Problem: Tacks present participle phrases onto sentences to add fake depth.
Before:
The color palette resonates with the region's beauty, symbolizing local bluebonnets, reflecting the community's deep connection to the land.
After:
The building uses blue, green, and gold. The architect said these reference local bluebonnets and the Gulf coast.
Watch for: boasts, vibrant, rich (figurative), profound, showcasing, exemplifies, commitment to, nestled, in the heart of, groundbreaking, renowned, breathtaking, must-visit, stunning
Problem: Reads like ad copy. Neutral tone completely gone.
Before:
Nestled within the breathtaking region, the town stands as a vibrant hub with rich cultural heritage and stunning natural beauty.
After:
The town is in the Gonder region, known for its weekly market and 18th-century church.
Watch for: Industry reports, Observers have cited, Experts argue, Some critics argue, several sources (when few cited)
Problem: Attributes opinions to unnamed authorities.
Before:
Experts believe it plays a crucial role in the regional ecosystem.
After:
The river supports several endemic fish species, according to a 2019 Chinese Academy of Sciences survey.
Watch for: Despite its... faces challenges..., Despite these challenges, Future Outlook
Problem: Every article gets a copy-paste challenges section.
Before:
Despite its prosperity, the area faces challenges typical of urban environments. Despite these challenges, it continues to thrive.
After:
Traffic got worse after 2015 when three IT parks opened. A drainage project started in 2022 to fix recurring floods.
Problem: These words appear far more often in post-2023 text and often cluster together. See references/ai-vocabulary.md for the full tiered word list.
High-frequency (almost always AI): Additionally, crucial, delve, emphasizing, enduring, enhance, fostering, garner, highlight (verb), interplay, intricate, key (adj), landscape (abstract), pivotal, showcase, tapestry (abstract), testament, underscore (verb), valuable, vibrant
Before:
Additionally, a distinctive feature is the incorporation of camel meat. An enduring testament to colonial influence is the widespread adoption of pasta in the local culinary landscape.
After:
Somali cuisine includes camel meat, considered a delicacy. Pasta dishes, introduced during Italian colonization, remain common in the south.
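A whole-word counter over the high-frequency tier above gives a cheap first-pass signal. This is a minimal sketch (a subset of the word list, not the full references/ai-vocabulary.md tiers); clusters of matches matter more than any single hit:

```python
import re

# High-frequency AI-vocabulary tells, drawn from the tier list above.
AI_TELLS = [
    "additionally", "crucial", "delve", "fostering", "intricate",
    "pivotal", "showcase", "tapestry", "testament", "underscore", "vibrant",
]

def count_tells(text: str) -> dict[str, int]:
    # Count whole-word, case-insensitive matches. Several distinct
    # tells clustering in one passage is the stronger signal.
    lower = text.lower()
    counts = {}
    for word in AI_TELLS:
        n = len(re.findall(rf"\b{word}\b", lower))
        if n:
            counts[word] = n
    return counts
```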
Watch for: serves as, stands as, marks, represents [a], boasts, features, offers [a]
Problem: LLMs dodge simple "is/are/has" with elaborate substitutes.
Before:
The gallery serves as the exhibition space. It features four rooms and boasts 3,000 square feet.
After:
The gallery is the exhibition space. It has four rooms totaling 3,000 square feet.
Problem: "Not only...but..." or "It's not just...it's..." constructions everywhere.
Before:
It's not just about the beat; it's part of the aggression. It's not merely a song, it's a statement.
After:
The heavy beat adds to the aggressive tone.
Problem: Forces ideas into triplets to seem comprehensive.
Before:
The event features keynote sessions, panel discussions, and networking opportunities. Expect innovation, inspiration, and industry insights.
After:
The event includes talks and panels. There's also time for informal networking.
Problem: Repetition-penalty makes LLMs swap synonyms excessively.
Before:
The protagonist faces challenges. The main character must overcome obstacles. The central figure triumphs. The hero returns.
After:
The protagonist faces many challenges but eventually triumphs and returns home.
Problem: "From X to Y" where X and Y aren't on a real scale.
Before:
Our journey has taken us from the singularity of the Big Bang to the cosmic web, from star birth to the dance of dark matter.
After:
The book covers the Big Bang, star formation, and dark matter theories.
Problem: LLMs use em dashes far more than humans. Replace with commas, periods, or parentheses.
Before:
The term is promoted by Dutch institutions--not by the people. You don't say that--yet this continues--even officially.
After:
The term is promoted by Dutch institutions, not by the people themselves. This mislabeling continues even in official documents.
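The mechanical half of this fix can be sketched as a regex pass. Choosing between a comma, period, or parentheses needs human judgment; a comma is the safest default for an automated substitute (this helper is illustrative, not the skill's script):

```python
import re

def replace_em_dashes(text: str) -> str:
    # Replace em dashes (and "--" stand-ins) plus surrounding spaces
    # with a comma and space. A comma is the safest mechanical default;
    # a period or parentheses sometimes reads better and needs judgment.
    return re.sub(r"\s*(?:\u2014|--)\s*", ", ", text)
```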
Problem: Mechanically bolds every term or concept.
Before:
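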
It blends **OKRs**, **KPIs**, and tools like the **Business Model Canvas** and **Balanced Scorecard**.
After:
It blends OKRs, KPIs, and tools like the Business Model Canvas and Balanced Scorecard.
Problem: Lists where every item starts with a bolded header and colon.
Before:
- User Experience: Significantly improved with a new interface.
- Performance: Enhanced through optimized algorithms.
- Security: Strengthened with end-to-end encryption.
After:
The update improves the interface, speeds up load times with optimized algorithms, and adds end-to-end encryption.
Problem: Decorating headings or bullets with emojis.
Before:
🚀 Launch Phase: The product launches in Q3
💡 Key Insight: Users prefer simplicity
✅ Next Steps: Schedule follow-up
After:
Product launches Q3. Users prefer simplicity. Next: schedule follow-up.
Problem: ChatGPT uses curly quotes instead of straight quotes. Replace with straight quotes for consistency.
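Quote normalization is a pure character mapping, so it can be done deterministically. A minimal sketch using a translation table (hypothetical helper, mirroring what a mechanical cleanup pass would do):

```python
# Map curly quotation marks to their straight ASCII equivalents.
QUOTE_MAP = str.maketrans({
    "\u2018": "'",  # left single curly quote
    "\u2019": "'",  # right single curly quote / apostrophe
    "\u201c": '"',  # left double curly quote
    "\u201d": '"',  # right double curly quote
})

def straighten_quotes(text: str) -> str:
    return text.translate(QUOTE_MAP)
```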
Watch for: I hope this helps, Of course!, Certainly!, You're absolutely right!, Would you like..., Let me know, Here is a...
Problem: Conversational chatbot phrases left in finished text.
Before:
Here is an overview of the French Revolution. I hope this helps! Let me know if you'd like me to expand on any section.
After:
The French Revolution began in 1789 when financial crisis and food shortages led to widespread unrest.
Watch for: as of [date], Up to my last training update, While specific details are limited..., based on available information...
Problem: AI disclaimers left in text.
Before:
While specific details about the founding are not extensively documented in readily available sources, it appears to have been established in the 1990s.
After:
The company was founded in 1994, according to registration documents.
Problem: Overly positive, people-pleasing language.
Before:
Great question! You're absolutely right that this is complex. That's an excellent point about the economic factors.
After:
The economic factors you mentioned are relevant here.
Common replacements:
Before:
It could potentially possibly be argued that the policy might have some effect on outcomes.
After:
The policy may affect outcomes.
Problem: Vague upbeat endings that say nothing.
Before:
The future looks bright. Exciting times lie ahead as they continue their journey toward excellence.
After:
The company plans to open two more locations next year.
Watch for: Moreover, Furthermore, In addition to this, It is worth noting that, Consequently
Problem: Overused transitional phrases that pad text.
Replace with shorter connectors or restructure the sentence.
Problem: AI text averages 15-20 words per sentence with little variance. Human writing naturally swings between short punches and longer explanations. If most sentences in a paragraph are within 5 words of each other, it reads robotic.
Before:
The company released its quarterly report last Tuesday. Revenue increased by twelve percent compared to last year. The CEO attributed the growth to international expansion. Analysts responded positively to the earnings announcement.
After:
Quarterly results came out Tuesday. Revenue up 12%. The CEO credited international expansion, which tracks. They opened three new offices in Asia this year alone. Analysts liked it.
Problem: Newer LLMs replaced em dashes with colons and semicolons. Multiple semicolons in a short paragraph, or colons used to introduce every list or explanation, is a tell.
Before:
The platform offers three key features: real-time collaboration; advanced analytics; and seamless integrations. Each feature serves a purpose: collaboration improves teamwork; analytics drive decisions; integrations reduce friction.
After:
The platform does real-time collaboration, analytics, and integrations. The analytics piece is the most useful. It actually shows which features people ignore.
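The density tell above can be measured directly. A sketch (hypothetical helper): count semicolons and colons per sentence, where values near or above one per sentence in a short paragraph match the pattern described:

```python
import re

def punctuation_density(text: str) -> dict[str, float]:
    # Semicolons or colons per sentence; several in a short
    # paragraph is the tell described above.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = max(len(sentences), 1)
    return {
        "semicolons_per_sentence": text.count(";") / n,
        "colons_per_sentence": text.count(":") / n,
    }
```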
Watch for: it's important to note, it is worth mentioning, no discussion would be complete without, interestingly, notably, remarkably, needless to say
Problem: LLMs insert editorial commentary disguised as neutral observations. The phrases add nothing.
Before:
It's important to note that the company has faced criticism. Interestingly, their response has been remarkably transparent. It's worth mentioning that this approach is unusual in the industry.
After:
The company was criticized for its pricing changes. They published a full cost breakdown in response, which is unusual for the industry.
The skill includes a Python script that handles deterministic fixes at zero token cost. Run it on any text before (or instead of) the LLM pass.
Location: {baseDir}/scripts/humanize.py
What it fixes automatically:
What it detects and warns about (needs LLM judgment to fix):
Usage:
# Clean text (apply mechanical fixes)
echo "text here" | python3 {baseDir}/scripts/humanize.py
# Report only (JSON, no changes)
python3 {baseDir}/scripts/humanize.py --mode report --input file.txt
# Both: cleaned text to stdout, report to stderr
python3 {baseDir}/scripts/humanize.py --mode both < file.txt
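The shell invocations above can also be driven programmatically. A sketch of a subprocess wrapper (the `script_path` default is an assumption; substitute the resolved {baseDir}/scripts/humanize.py location):

```python
import subprocess

def humanize(text: str, script_path: str = "scripts/humanize.py") -> str:
    # Pipe text through the cleanup script, mirroring the shell usage
    # above: text on stdin, cleaned text on stdout. script_path is a
    # placeholder for the resolved {baseDir}/scripts/humanize.py.
    result = subprocess.run(
        ["python3", script_path],
        input=text,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```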
Recommended workflow:
The script handles the mechanical stuff. The LLM pass handles what requires judgment:
Provide:
If the text is already clean, say so. Don't rewrite for the sake of rewriting.
Based on Wikipedia: Signs of AI writing, maintained by WikiProject AI Cleanup. For the full AI vocabulary word list organized by tier, load references/ai-vocabulary.md.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/snapshot"
curl -s "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract"
curl -s "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust"
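The invocation guide's retry policy (3 attempts, 500/1500/3500 ms backoff on 429/503/timeouts) can be sketched as a generic wrapper around any of the calls above. This simplified version retries on any exception, whereas the published policy limits retries to transient failures:

```python
import time

def with_retry(call, backoff_ms=(500, 1500, 3500)):
    # Attempt once, then retry up to len(backoff_ms) more times,
    # sleeping the published 500/1500/3500 ms schedule in between.
    # Simplification: retries on any exception; the real policy
    # restricts this to HTTP 429/503 and network timeouts.
    for i, delay in enumerate((0, *backoff_ms)):
        if delay:
            time.sleep(delay / 1000)
        try:
            return call()
        except Exception:
            if i == len(backoff_ms):
                raise
```

Usage might look like `with_retry(lambda: json.load(urllib.request.urlopen(snapshot_url)))` against the snapshot endpoint shown above.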
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLAW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLAW",
"generatedAt": "2026-04-17T01:49:43.932Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLAW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "several",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLAW|unknown|profile capability:several|supported|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Rab583",
"href": "https://github.com/rab583/openclaw-skill-humanizer",
"sourceUrl": "https://github.com/rab583/openclaw-skill-humanizer",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:44:04.376Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-24T19:44:04.376Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/rab583-openclaw-skill-humanizer/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]
Sponsored
Ads related to humanizer and adjacent AI workflows.