Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Call 100+ LLM providers through LiteLLM's unified API. Use when you need to call a different model than your primary (e.g., use GPT-4 for code review while running on Claude), compare outputs from multiple models, route to cheaper models for simple tasks, or access models your runtime doesn't natively support. Published capability contract available. No trust telemetry is available yet. Last updated 3/1/2026.
Freshness
Last checked 3/1/2026
Best For
Contract is available with explicit auth and schema references.
Not Ideal For
LiteLLM is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before a production rollout.
Evidence Sources Checked
editorial-content, capability-contract, runtime-metrics, public facts pack
Public facts
6
Change events
1
Artifacts
0
Freshness
Mar 1, 2026
Published capability contract available. No trust telemetry is available yet. Last updated 3/1/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Mar 1, 2026
Vendor
Shin Bot Litellm
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Published capability contract available. No trust telemetry is available yet. Last updated 3/1/2026.
Setup snapshot
git clone https://github.com/shin-bot-litellm/openclaw-litellm-skill.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
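The sandbox check above can be sketched with the standard library: wrap `socket.create_connection` so every outbound connection the agent attempts during a mock-payload run is recorded before it proceeds. The names `make_traced` and `observed_hosts` are illustrative, not part of this package.

```python
import socket

# Record every destination host the agent tries to reach during the
# sandboxed mock-payload run, before allowing real customer data.
observed_hosts = []

def make_traced(underlying):
    """Return a create_connection wrapper that logs each destination host."""
    def traced(address, *args, **kwargs):
        observed_hosts.append(address[0])
        return underlying(address, *args, **kwargs)
    return traced

_original = socket.create_connection
socket.create_connection = make_traced(_original)
# ... drive the agent with a mock request payload here ...
socket.create_connection = _original  # always restore after the run
print(observed_hosts)  # every host the agent attempted to contact
```

Reviewing `observed_hosts` afterwards shows whether the skill's egress matches its declared providers.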
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Shin Bot Litellm
Protocol compatibility
OpenClaw
Auth modes
api_key
Machine-readable schemas
OpenAPI or schema references published
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
5
Snippets
0
Languages
typescript
Parameters
python
import litellm

# Call any model with unified API
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain this code"}]
)
print(response.choices[0].message.content)

python
import litellm

prompt = [{"role": "user", "content": "What's the best approach to X?"}]
models = ["gpt-4o", "claude-sonnet-4-20250514", "gemini/gemini-1.5-pro"]
for model in models:
    resp = litellm.completion(model=model, messages=prompt)
    print(f"{model}: {resp.choices[0].message.content[:200]}...")

python
import litellm

def smart_call(task_type: str, prompt: str) -> str:
    model_map = {
        "code": "gpt-4o",                       # Strong at code
        "writing": "claude-sonnet-4-20250514",  # Strong at prose
        "simple": "gpt-4o-mini",                # Cheap for simple tasks
        "reasoning": "o1-preview",              # Deep reasoning
    }
    model = model_map.get(task_type, "gpt-4o")
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

python
import litellm

litellm.api_base = "https://your-litellm-proxy.com"
litellm.api_key = "sk-your-key"
response = litellm.completion(
    model="gpt-4o",  # Proxy routes to configured provider
    messages=[{"role": "user", "content": "Hello"}]
)

bash
pip install litellm
# Set provider keys (or configure in proxy)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-..."
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
---
name: litellm
description: Call 100+ LLM providers through LiteLLM's unified API. Use when you need to call a different model than your primary (e.g., use GPT-4 for code review while running on Claude), compare outputs from multiple models, route to cheaper models for simple tasks, or access models your runtime doesn't natively support.
---
LiteLLM - Multi-Model LLM Calls
Use LiteLLM when you need to call LLMs beyond your primary model.
import litellm
# Call any model with unified API
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain this code"}]
)
print(response.choices[0].message.content)
import litellm
prompt = [{"role": "user", "content": "What's the best approach to X?"}]
models = ["gpt-4o", "claude-sonnet-4-20250514", "gemini/gemini-1.5-pro"]
for model in models:
    resp = litellm.completion(model=model, messages=prompt)
    print(f"{model}: {resp.choices[0].message.content[:200]}...")
import litellm
def smart_call(task_type: str, prompt: str) -> str:
    model_map = {
        "code": "gpt-4o",                       # Strong at code
        "writing": "claude-sonnet-4-20250514",  # Strong at prose
        "simple": "gpt-4o-mini",                # Cheap for simple tasks
        "reasoning": "o1-preview",              # Deep reasoning
    }
    model = model_map.get(task_type, "gpt-4o")
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
If a LiteLLM proxy is available, point to it for caching, rate limiting, and observability:
import litellm
litellm.api_base = "https://your-litellm-proxy.com"
litellm.api_key = "sk-your-key"
response = litellm.completion(
    model="gpt-4o",  # Proxy routes to configured provider
    messages=[{"role": "user", "content": "Hello"}]
)
Ensure litellm is installed and API keys are set:
pip install litellm
# Set provider keys (or configure in proxy)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-..."
Common model identifiers:
OpenAI: gpt-4o, gpt-4o-mini, o1-preview, o1-mini
Anthropic: claude-sonnet-4-20250514, claude-opus-4-20250514
Google: gemini/gemini-1.5-pro, gemini/gemini-1.5-flash
Mistral: mistral/mistral-large-latest
Full list: https://docs.litellm.ai/docs/providers
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
ready
Auth
api_key
Streaming
No
Data region
global
Protocol support
Requires: openclew, lang:typescript
Forbidden: none
Guardrails
Operational confidence: medium
curl -s "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/snapshot"
curl -s "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/contract"
curl -s "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/trust"
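The invocation guide below publishes a retry policy (up to 3 attempts, backoff of 500/1500/3500 ms, retrying on HTTP 429, HTTP 503, and network timeouts). A minimal client-side sketch of that policy, using only the standard library; `fetch_with_retry` and its injectable `opener`/`sleep` parameters are illustrative names, not part of the published API:

```python
import time
import urllib.error
import urllib.request

# Values taken from the published retryPolicy: 3 attempts,
# backoff 500/1500/3500 ms, retry on 429, 503, and timeouts.
BACKOFF_MS = [500, 1500, 3500]
RETRYABLE_CODES = {429, 503}

def fetch_with_retry(url, opener=urllib.request.urlopen,
                     sleep=time.sleep, max_attempts=3):
    """Fetch url, retrying per the policy; re-raise on final failure."""
    for attempt in range(max_attempts):
        try:
            with opener(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in RETRYABLE_CODES or attempt == max_attempts - 1:
                raise
        except (urllib.error.URLError, TimeoutError):
            if attempt == max_attempts - 1:
                raise
        sleep(BACKOFF_MS[attempt] / 1000.0)
```

Non-retryable HTTP errors (e.g. 404) propagate immediately; only the listed conditions consume backoff slots.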
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "ready",
"authModes": [
"api_key"
],
"requires": [
"openclew",
"lang:typescript"
],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": "https://github.com/shin-bot-litellm/openclaw-litellm-skill#input",
"outputSchemaRef": "https://github.com/shin-bot-litellm/openclaw-litellm-skill#output",
"dataRegion": "global",
"contractUpdatedAt": "2026-02-24T19:44:20.977Z",
"sourceUpdatedAt": "2026-02-24T19:44:20.977Z",
"freshnessSeconds": 4439815
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T05:01:16.960Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile"
}
Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Shin Bot Litellm",
"href": "https://github.com/shin-bot-litellm/openclaw-litellm-skill",
"sourceUrl": "https://github.com/shin-bot-litellm/openclaw-litellm-skill",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-03-01T06:04:17.103Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-24T19:44:20.977Z",
"isPublic": true
},
{
"factKey": "auth_modes",
"category": "compatibility",
"label": "Auth modes",
"value": "api_key",
"href": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:20.977Z",
"isPublic": true
},
{
"factKey": "schema_refs",
"category": "artifact",
"label": "Machine-readable schemas",
"value": "OpenAPI or schema references published",
"href": "https://github.com/shin-bot-litellm/openclaw-litellm-skill#input",
"sourceUrl": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:20.977Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/shin-bot-litellm-openclaw-litellm-skill/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]
Sponsored
Ads related to litellm and adjacent AI workflows.