Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Supreme model router for Venice.ai — the privacy-first, uncensored AI platform. Automatically classifies query complexity and routes to the cheapest adequate model. Supports web search, uncensored mode, private-only mode (zero data retention), conversation-aware routing, cost budgets, function calling, thinking/reasoning mode, and 35+ Venice.ai text models. Use when the user wants to chat via Venice.ai, send prompts through Venice, or needs smart model selection to minimize API costs while keeping data private from Big Tech.

name: venice-router, version: 1.5.0

Published capability contract available. No trust telemetry is available yet. Last updated 2/24/2026.
Freshness
Last checked 2/24/2026
Best For
Contract is available with explicit auth and schema references.
Not Ideal For
venice-router is not ideal for teams that need stronger public trust telemetry, lower setup complexity, or more explicit contract coverage before production rollout.
Evidence Sources Checked
editorial-content, capability-contract, runtime-metrics, public facts pack
Public facts
5
Change events
0
Artifacts
0
Freshness
Feb 24, 2026
Published capability contract available. No trust telemetry is available yet. Last updated 2/24/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Feb 24, 2026
Vendor
Venice
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Published capability contract available. No trust telemetry is available yet. Last updated 2/24/2026.
Setup snapshot
git clone https://github.com/PlusOne/venice.ai-router-openclaw.git

Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Venice
Protocol compatibility
OpenClaw
Auth modes
api_key
Machine-readable schemas
OpenAPI or schema references published
Handshake status
UNKNOWN
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
bash
export VENICE_API_KEY="your-key-here"
json
{
"skills": {
"entries": {
"venice-router": {
"enabled": true,
"apiKey": "YOUR_VENICE_API_KEY"
}
}
}
}

bash
python3 {baseDir}/scripts/venice-router.py --prompt "What is 2+2?"

bash
python3 {baseDir}/scripts/venice-router.py --tier cheap --prompt "Tell me a joke"
python3 {baseDir}/scripts/venice-router.py --tier budget-medium --prompt "Write a Python function"
python3 {baseDir}/scripts/venice-router.py --tier mid --prompt "Explain quantum computing"
python3 {baseDir}/scripts/venice-router.py --tier premium --prompt "Write a distributed systems architecture"

bash
python3 {baseDir}/scripts/venice-router.py --stream --prompt "Write a poem about lobsters"

bash
python3 {baseDir}/scripts/venice-router.py --web-search --prompt "Latest news on AI regulation"

Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
Smart, cost-optimized model routing for Venice.ai — the AI platform for people who don't want Big Tech watching over their shoulder.
Unlike OpenAI, Anthropic, and Google — where every prompt is logged, analyzed, and potentially used to train future models — Venice offers true privacy with zero data retention on private models. Your conversations stay yours. Venice is also uncensored: no content filters, no refusals, no "I can't help with that."
export VENICE_API_KEY="your-key-here"
Or configure in ~/.openclaw/openclaw.json:
{
"skills": {
"entries": {
"venice-router": {
"enabled": true,
"apiKey": "YOUR_VENICE_API_KEY"
}
}
}
}
python3 {baseDir}/scripts/venice-router.py --prompt "What is 2+2?"
python3 {baseDir}/scripts/venice-router.py --tier cheap --prompt "Tell me a joke"
python3 {baseDir}/scripts/venice-router.py --tier budget-medium --prompt "Write a Python function"
python3 {baseDir}/scripts/venice-router.py --tier mid --prompt "Explain quantum computing"
python3 {baseDir}/scripts/venice-router.py --tier premium --prompt "Write a distributed systems architecture"
python3 {baseDir}/scripts/venice-router.py --stream --prompt "Write a poem about lobsters"
python3 {baseDir}/scripts/venice-router.py --web-search --prompt "Latest news on AI regulation"
python3 {baseDir}/scripts/venice-router.py --uncensored --prompt "Write edgy creative fiction"
python3 {baseDir}/scripts/venice-router.py --private-only --prompt "Analyze this confidential contract"
# Save conversation history as JSON, then route follow-ups with context
python3 {baseDir}/scripts/venice-router.py --conversation history.json --prompt "Can you add tests too?"
The router analyzes conversation history to keep context: trivial follow-ups ("thanks") go cheap, while follow-ups in complex code discussions stay at the right tier.
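The crawl does not document the history file's schema; a minimal sketch, assuming the OpenAI-style role/content message format that `--conversation history.json` implies:

```python
import json

# Hypothetical history.json contents (schema assumed, not confirmed
# by the captured docs): OpenAI-style chat messages.
history = [
    {"role": "user", "content": "Write a Python function that parses ISO dates."},
    {"role": "assistant", "content": "def parse_iso(s):\n    ..."},
    {"role": "user", "content": "Can you add tests too?"},
]

with open("history.json", "w") as f:
    json.dump(history, f, indent=2)
```

The follow-up prompt then routes with that context, so "Can you add tests too?" inherits the code-discussion tier rather than being classified as trivial on its own.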
# Define tools in a JSON file (OpenAI tools format)
python3 {baseDir}/scripts/venice-router.py --tools tools.json --prompt "What's the weather in NYC?"
python3 {baseDir}/scripts/venice-router.py --tools tools.json --tool-choice auto --prompt "Search for latest AI news"
Tool definitions use the standard OpenAI format. The router auto-bumps to mid tier minimum for function calling since it requires capable models.
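A sketch of what `tools.json` might contain in the standard OpenAI tools format the router says it accepts; the `get_weather` function and its parameters are hypothetical examples, not something shipped with the skill:

```python
import json

# Hypothetical tools.json in the OpenAI function-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name, e.g. NYC"}
                },
                "required": ["city"],
            },
        },
    }
]

with open("tools.json", "w") as f:
    json.dump(tools, f, indent=2)
```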
# Show current spending
python3 {baseDir}/scripts/venice-router.py --budget-status
# Track per-session costs
python3 {baseDir}/scripts/venice-router.py --session-id my-project --prompt "help me code"
Set VENICE_DAILY_BUDGET and/or VENICE_SESSION_BUDGET to enforce spending limits. The router auto-downgrades tiers as you approach budget limits.
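The auto-downgrade behavior can be sketched as a function of remaining budget. The actual venice-router.py logic was not captured in the crawl, so the thresholds below are illustrative assumptions; only the tier names come from the docs:

```python
TIERS = ["cheap", "budget", "budget-medium", "mid", "high", "premium"]

def downgrade_for_budget(tier: str, spent: float, daily_budget: float) -> str:
    """Step the tier down as spend approaches the daily budget (sketch)."""
    if daily_budget <= 0:          # 0 means unlimited, per the env-var table
        return tier
    remaining = max(daily_budget - spent, 0.0) / daily_budget
    idx = TIERS.index(tier)
    if remaining < 0.1:            # nearly exhausted: force the cheapest tier
        return TIERS[0]
    if remaining < 0.25:           # under a quarter left: drop two tiers
        return TIERS[max(idx - 2, 0)]
    if remaining < 0.5:            # under half left: drop one tier
        return TIERS[max(idx - 1, 0)]
    return tier
```

Under these assumed thresholds, a premium request with $0.80 of a $1.00 daily budget already spent would route two tiers down, to mid.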
python3 {baseDir}/scripts/venice-router.py --classify "Explain the Riemann hypothesis"
python3 {baseDir}/scripts/venice-router.py --list-models
python3 {baseDir}/scripts/venice-router.py --model deepseek-v3.2 --prompt "Hello"
| Tier | Models | Cost (input/output per 1M tokens) | Best For |
|------|--------|-----------------------------------|----------|
| cheap | Venice Small (qwen3-4b), GLM 4.7 Flash, GPT OSS 120B, Llama 3.2 3B | $0.05–$0.15 / $0.15–$0.60 | Simple Q&A, greetings, math, lookups |
| budget | Qwen 3 235B, Venice Uncensored, GLM 4.7 Flash Heretic | $0.14–$0.20 / $0.75–$0.90 | Moderate questions, summaries, translations |
| budget-medium | Grok Code Fast, DeepSeek V3.2, MiniMax M2.1 | $0.25–$0.40 / $1.00–$1.87 | Moderate-to-complex tasks, code snippets, structured output |
| mid | DeepSeek V3.2, MiniMax M2.1/M2.5, Qwen3 Thinking 235B, Venice Medium, Llama 3.3 70B | $0.25–$0.70 / $1.00–$3.50 | Code generation, analysis, longer writing, reasoning |
| high | GLM 5, Kimi K2 Thinking, Kimi K2.5, Grok 4.1 Fast, Hermes 3 405B, Gemini 3 Flash | $0.50–$1.10 / $1.25–$3.75 | Complex reasoning, multi-step tasks, code review |
| premium | GPT-5.2, GPT-5.2 Codex, Gemini 3 Pro, Gemini 3.1 Pro (1M ctx), Claude Opus/Sonnet 4.5/4.6 | $2.19–$6.00 / $15.00–$30.00 | Expert-level analysis, architecture, research papers |
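The per-1M-token prices make per-request cost easy to estimate. A short sketch using the low end of the cheap tier from the table ($0.05 in / $0.15 out); token counts are an assumed example:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_per_m: float, out_per_m: float) -> float:
    """Cost in USD given per-1M-token input/output prices."""
    return (input_tokens / 1_000_000) * in_per_m + \
           (output_tokens / 1_000_000) * out_per_m

# A 500-token prompt with a 300-token reply on the cheapest cheap-tier model:
cost = request_cost(500, 300, in_per_m=0.05, out_per_m=0.15)
print(f"${cost:.6f}")  # fractions of a cent
```

The same request on the top of the premium tier ($6.00 / $30.00) costs over a hundred times more, which is the gap the router exploits.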
The router classifies each prompt using keyword + heuristic analysis:
- When --conversation is provided, the router analyzes the full chat context: code in history boosts the tier, trivial follow-ups ("thanks") downgrade it, and tool calls in history signal complexity.
- --tools auto-bumps to at least mid tier (capable models required).
- --thinking prefers chain-of-thought reasoning models (Qwen3 Thinking, Kimi K2) and bumps to at least mid tier.

The classifier errs on the side of cheaper models — it only escalates when there's strong signal for complexity.
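The keyword + heuristic classification described above might look like the following. This is a minimal sketch: the keyword lists and length threshold are illustrative assumptions, not the router's actual rules.

```python
# Illustrative hint lists; the real router's keywords were not captured.
COMPLEX_HINTS = ("architecture", "distributed", "prove", "design a")
CODE_HINTS = ("function", "implement", "refactor", "debug")

def classify(prompt: str) -> str:
    """Pick the cheapest adequate tier from simple keyword heuristics."""
    p = prompt.lower()
    if any(h in p for h in COMPLEX_HINTS):
        return "premium"
    if any(h in p for h in CODE_HINTS):
        return "mid"
    if len(p.split()) > 50:          # long prompts get more capable models
        return "budget-medium"
    return "cheap"                   # err on the side of cheaper models
```

Note the ordering: complexity hints are checked before code hints, so a prompt that triggers both escalates to the higher tier.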
| Variable | Description | Default |
|----------|-------------|---------|
| VENICE_API_KEY | Venice.ai API key (required) | — |
| VENICE_DEFAULT_TIER | Minimum floor tier — auto-classification never goes below this. Valid: cheap, budget, budget-medium, mid, high, premium | budget |
| VENICE_MAX_TIER | Maximum tier to ever use (cost cap) | premium |
| VENICE_TEMPERATURE | Default temperature | 0.7 |
| VENICE_MAX_TOKENS | Default max tokens | 4096 |
| VENICE_STREAM | Enable streaming by default | false |
| VENICE_UNCENSORED | Always prefer uncensored models | false |
| VENICE_PRIVATE_ONLY | Only use private models (zero data retention) | false |
| VENICE_WEB_SEARCH | Enable web search by default ($10/1K calls) | false |
| VENICE_THINKING | Always prefer thinking/reasoning models | false |
| VENICE_DAILY_BUDGET | Max daily spend in USD (0 = unlimited) | 0 |
| VENICE_SESSION_BUDGET | Max per-session spend in USD (0 = unlimited) | 0 |
- Use --classify to preview which tier a prompt would hit before spending tokens.
- Set VENICE_MAX_TIER=mid to cap costs and never hit premium models.
- Use --uncensored for creative, security research, or other content mainstream AI won't touch.
- Use --private-only when processing sensitive/confidential data — zero retention guaranteed.
- Use --web-search when you need up-to-date information with cited sources.
- Use --conversation with a JSON message history for smarter multi-turn routing.
- Use --tools to enable function calling — the router auto-bumps to capable models.
- Set VENICE_DAILY_BUDGET=1.00 to cap daily spend at $1 — the router auto-downgrades tiers as you approach the limit.
- Use --budget-status to see a detailed breakdown of your spending by tier.
- Use --thinking for math proofs, logic puzzles, and multi-step reasoning — routes to Qwen3 Thinking or Kimi K2 models.
- When --uncensored is active, the router auto-bumps to the nearest tier with uncensored models.

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
ready
Auth
api_key
Streaming
Yes
Data region
global
Protocol support
Requires: openclew, lang:typescript, streaming
Forbidden: none
Guardrails
Operational confidence: medium
curl -s "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/snapshot"
curl -s "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/contract"
curl -s "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "ready",
"authModes": [
"api_key"
],
"requires": [
"openclew",
"lang:typescript",
"streaming"
],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": true,
"inputSchemaRef": "https://github.com/PlusOne/venice.ai-router-openclaw#input",
"outputSchemaRef": "https://github.com/PlusOne/venice.ai-router-openclaw#output",
"dataRegion": "global",
"contractUpdatedAt": "2026-02-24T19:44:13.179Z",
"sourceUpdatedAt": "2026-02-24T19:44:13.179Z",
"freshnessSeconds": 4420889
}

Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-16T23:45:42.611Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}

Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "you",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "search",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:you|supported|profile capability:search|supported|profile"
}

Facts JSON
[
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-02-24T19:44:13.179Z",
"isPublic": true
},
{
"factKey": "auth_modes",
"category": "compatibility",
"label": "Auth modes",
"value": "api_key",
"href": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:13.179Z",
"isPublic": true
},
{
"factKey": "schema_refs",
"category": "artifact",
"label": "Machine-readable schemas",
"value": "OpenAPI or schema references published",
"href": "https://github.com/PlusOne/venice.ai-router-openclaw#input",
"sourceUrl": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/contract",
"sourceType": "contract",
"confidence": "high",
"observedAt": "2026-02-24T19:44:13.179Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Venice",
"href": "https://venice.ai",
"sourceUrl": "https://venice.ai",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-02-24T19:43:14.176Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/plusone-venice-ai-router-openclaw/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]

Change Events JSON
[]