Agent Dossier · CLAWHUB · Safety 84/100

Xpersona Agent

llmfit-advisor

Detect local hardware (RAM, CPU, GPU/VRAM) and recommend the best-fit local LLM models with optimal quantization, speed estimates, and fit scoring.

OpenClaw · self-declared
Trust evidence available
clawhub skill install skills:alexsjones:llmfit

Overall rank

#62

Adoption

No public adoption signal

Trust

Unknown

Freshness

Last checked Feb 25, 2026

Best For

llmfit-advisor is best for workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, CLAWHUB, runtime-metrics, public facts pack

Overview

Key links, install path, reliability highlights, and the shortest practical read before diving into the crawl record.

Verified · editorial-content

Executive Summary

Detect local hardware (RAM, CPU, GPU/VRAM) and recommend the best-fit local LLM models with optimal quantization, speed estimates, and fit scoring. Capability contract not published. No trust telemetry is available yet. Last updated Apr 15, 2026.

No verified compatibility signals

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Feb 25, 2026

Vendor

Openclaw

Artifacts

0

Benchmarks

0

Last release

Unpublished

Install & run

Setup Snapshot

clawhub skill install skills:alexsjones:llmfit
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence & Timeline

Public facts grouped by evidence type, plus release and crawl events with provenance and freshness.

Verified · editorial-content

Public facts

Evidence Ledger

Vendor (1)

Vendor

Openclaw

profile · medium
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium
Observed Apr 15, 2026 · Source link · Provenance

Artifacts & Docs

Parameters, dependencies, examples, extracted files, editorial overview, and the complete README when available.

Self-declared · CLAWHUB

Captured outputs

Artifacts Archive

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

llmfit --json system

bash

llmfit recommend --json --limit 5

bash

llmfit recommend --json --use-case coding --limit 3
llmfit recommend --json --use-case reasoning --limit 3
llmfit recommend --json --use-case chat --limit 3

bash

llmfit recommend --json --min-fit good --limit 10

json

{
  "system": {
    "cpu_name": "Apple M2 Max",
    "cpu_cores": 12,
    "total_ram_gb": 32.0,
    "available_ram_gb": 24.5,
    "has_gpu": true,
    "gpu_name": "Apple M2 Max",
    "gpu_vram_gb": 32.0,
    "gpu_count": 1,
    "backend": "Metal",
    "unified_memory": true
  }
}

json

{
  "models": {
    "providers": {
      "ollama": {
        "models": ["ollama/<ollama-tag>"]
      }
    }
  }
}

Editorial read

Docs & README

Docs source

CLAWHUB

Editorial quality

ready

Detect local hardware (RAM, CPU, GPU/VRAM) and recommend the best-fit local LLM models with optimal quantization, speed estimates, and fit scoring.

Full README

---
name: llmfit-advisor
description: Detect local hardware (RAM, CPU, GPU/VRAM) and recommend the best-fit local LLM models with optimal quantization, speed estimates, and fit scoring.
metadata: {
  "openclaw": {
    "emoji": "🧠",
    "requires": { "bins": ["llmfit"] },
    "install": [
      {
        "id": "brew",
        "kind": "brew",
        "formula": "AlexsJones/llmfit",
        "bins": ["llmfit"],
        "label": "Install llmfit (brew tap AlexsJones/llmfit && brew install llmfit)"
      },
      {
        "id": "cargo",
        "kind": "node",
        "bins": ["llmfit"],
        "label": "Install llmfit (cargo install llmfit)"
      }
    ]
  }
}
---

llmfit-advisor

Hardware-aware local LLM advisor. Detects your system specs (RAM, CPU, GPU/VRAM) and recommends models that actually fit, with optimal quantization and speed estimates.

When to use (trigger phrases)

Use this skill immediately when the user asks any of:

  • "what local models can I run?"
  • "which LLMs fit my hardware?"
  • "recommend a local model"
  • "what's the best model for my GPU?"
  • "can I run Llama 70B locally?"
  • "configure local models"
  • "set up Ollama models"
  • "what models fit my VRAM?"
  • "help me pick a local model for coding"

Also use this skill when:

  • The user wants to configure models.providers.ollama or models.providers.lmstudio
  • The user mentions running models locally and you need to know what fits
  • A model recommendation is needed and the user has local inference capability (Ollama, vLLM, LM Studio)

Quick start

Detect hardware

llmfit --json system

Returns JSON with CPU, RAM, GPU name, VRAM, multi-GPU info, and whether memory is unified (Apple Silicon).

Get top recommendations

llmfit recommend --json --limit 5

Returns the top 5 models ranked by a composite score (quality, speed, fit, context) with optimal quantization for the detected hardware.

Filter by use case

llmfit recommend --json --use-case coding --limit 3
llmfit recommend --json --use-case reasoning --limit 3
llmfit recommend --json --use-case chat --limit 3

Valid use cases: general, coding, reasoning, chat, multimodal, embedding.

Filter by minimum fit level

llmfit recommend --json --min-fit good --limit 10

Valid fit levels (best to worst): perfect, good, marginal.

Understanding the output

System JSON

{
  "system": {
    "cpu_name": "Apple M2 Max",
    "cpu_cores": 12,
    "total_ram_gb": 32.0,
    "available_ram_gb": 24.5,
    "has_gpu": true,
    "gpu_name": "Apple M2 Max",
    "gpu_vram_gb": 32.0,
    "gpu_count": 1,
    "backend": "Metal",
    "unified_memory": true
  }
}
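As a sketch of how an agent might consume this payload, here is a small helper that derives a memory budget for model weights from the fields shown above. The heuristic itself is an assumption, not llmfit's own logic: on unified-memory machines (Apple Silicon) the GPU shares system RAM, so available RAM is the real ceiling; on discrete GPUs the VRAM pool is what matters.

```python
def memory_budget_gb(system: dict) -> float:
    """Rough memory budget for model weights from llmfit's system JSON.

    Heuristic (our assumption, not llmfit's scoring): unified memory means
    the GPU draws from the same pool as the OS, so available RAM is the
    limit; otherwise a discrete GPU's VRAM is the limit.
    """
    if system.get("unified_memory"):
        return system["available_ram_gb"]
    if system.get("has_gpu"):
        return system["gpu_vram_gb"]
    return system["available_ram_gb"]

sample = {
    "cpu_name": "Apple M2 Max", "cpu_cores": 12,
    "total_ram_gb": 32.0, "available_ram_gb": 24.5,
    "has_gpu": True, "gpu_name": "Apple M2 Max",
    "gpu_vram_gb": 32.0, "gpu_count": 1,
    "backend": "Metal", "unified_memory": True,
}
print(memory_budget_gb(sample))  # 24.5
```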

Recommendation JSON

Each model in the models array includes:

| Field | Meaning |
|---|---|
| name | HuggingFace model ID (e.g. meta-llama/Llama-3.1-8B-Instruct) |
| provider | Model provider (Meta, Alibaba, Google, etc.) |
| params_b | Parameter count in billions |
| score | Composite score 0–100 (higher is better) |
| score_components | Breakdown: quality, speed, fit, context (each 0–100) |
| fit_level | Perfect, Good, Marginal, or TooTight |
| run_mode | GPU, CPU+GPU Offload, or CPU Only |
| best_quant | Optimal quantization for the hardware (e.g. Q5_K_M, Q4_K_M) |
| estimated_tps | Estimated tokens per second |
| memory_required_gb | VRAM/RAM needed at this quantization |
| memory_available_gb | Available VRAM/RAM detected |
| utilization_pct | How much of available memory the model uses |
| use_case | What the model is designed for |
| context_length | Maximum context window |

Fit levels explained

  • Perfect: Model fits comfortably with room to spare. Ideal choice.
  • Good: Model fits but uses most available memory. Will work well.
  • Marginal: Model barely fits. May work but expect slower performance or reduced context.
  • TooTight: Model does not fit. Do not recommend.
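The fit levels above translate naturally into a filter-and-rank step. A minimal sketch, assuming the recommendation fields documented earlier (the ordering constant is ours, derived from the best-to-worst list):

```python
# Order recommendations by fit level, dropping models that do not fit.
# FIT_ORDER encodes the documented best-to-worst ordering; ties within a
# fit level are broken by the composite score, higher first.
FIT_ORDER = {"Perfect": 0, "Good": 1, "Marginal": 2}

def usable_models(models):
    fitting = [m for m in models if m["fit_level"] != "TooTight"]
    return sorted(fitting, key=lambda m: (FIT_ORDER[m["fit_level"]], -m["score"]))

recs = [
    {"name": "a/70b", "fit_level": "TooTight", "score": 90},
    {"name": "b/8b", "fit_level": "Perfect", "score": 80},
    {"name": "c/14b", "fit_level": "Good", "score": 85},
]
print([m["name"] for m in usable_models(recs)])  # ['b/8b', 'c/14b']
```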

Run modes explained

  • GPU: Full GPU inference. Fastest. Model weights loaded entirely into VRAM.
  • CPU+GPU Offload: Some layers on GPU, rest in system RAM. Slower than pure GPU.
  • CPU Only: All inference on CPU using system RAM. Slowest but works without GPU.

Configuring OpenClaw with results

After getting recommendations, configure the user's local model provider.

For Ollama

Map the HuggingFace model name to its Ollama tag. Common mappings:

| llmfit name | Ollama tag |
|---|---|
| meta-llama/Llama-3.1-8B-Instruct | llama3.1:8b |
| meta-llama/Llama-3.3-70B-Instruct | llama3.3:70b |
| Qwen/Qwen2.5-Coder-7B-Instruct | qwen2.5-coder:7b |
| Qwen/Qwen2.5-72B-Instruct | qwen2.5:72b |
| deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | deepseek-coder-v2:16b |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | deepseek-r1:32b |
| google/gemma-2-9b-it | gemma2:9b |
| mistralai/Mistral-7B-Instruct-v0.3 | mistral:7b |
| microsoft/Phi-3-mini-4k-instruct | phi3:mini |
| microsoft/Phi-4-mini-instruct | phi4-mini |
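The mapping table above can be kept as a simple lookup. This is a hypothetical helper, not part of llmfit; models outside the table return None and need a manual mapping:

```python
# The common HuggingFace-name -> Ollama-tag mappings from the table above.
OLLAMA_TAGS = {
    "meta-llama/Llama-3.1-8B-Instruct": "llama3.1:8b",
    "meta-llama/Llama-3.3-70B-Instruct": "llama3.3:70b",
    "Qwen/Qwen2.5-Coder-7B-Instruct": "qwen2.5-coder:7b",
    "Qwen/Qwen2.5-72B-Instruct": "qwen2.5:72b",
    "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct": "deepseek-coder-v2:16b",
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B": "deepseek-r1:32b",
    "google/gemma-2-9b-it": "gemma2:9b",
    "mistralai/Mistral-7B-Instruct-v0.3": "mistral:7b",
    "microsoft/Phi-3-mini-4k-instruct": "phi3:mini",
    "microsoft/Phi-4-mini-instruct": "phi4-mini",
}

def ollama_tag(hf_name):
    """Return the Ollama tag for a llmfit model name, or None if unmapped."""
    return OLLAMA_TAGS.get(hf_name)
```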

Then update openclaw.json:

{
  "models": {
    "providers": {
      "ollama": {
        "models": ["ollama/<ollama-tag>"]
      }
    }
  }
}

And optionally set as default:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/<ollama-tag>"
      }
    }
  }
}
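Both config fragments above can be produced programmatically. A sketch, assuming the openclaw.json structure shown (the `add_ollama_model` helper is ours, not an OpenClaw API):

```python
import json

def add_ollama_model(config, tag, set_default=False):
    """Merge an Ollama tag into an openclaw.json-shaped dict.

    Mirrors the two fragments above: registers "ollama/<tag>" under
    models.providers.ollama.models and, optionally, wires it in as
    agents.defaults.model.primary.
    """
    models = (config.setdefault("models", {})
                    .setdefault("providers", {})
                    .setdefault("ollama", {})
                    .setdefault("models", []))
    ref = f"ollama/{tag}"
    if ref not in models:
        models.append(ref)
    if set_default:
        (config.setdefault("agents", {})
               .setdefault("defaults", {})
               .setdefault("model", {}))["primary"] = ref
    return config

cfg = add_ollama_model({}, "llama3.1:8b", set_default=True)
print(json.dumps(cfg, indent=2))
```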

For vLLM / LM Studio

Use the HuggingFace model name directly as the model identifier with the appropriate provider prefix (vllm/ or lmstudio/).

Workflow example

When a user asks "what local models can I run?":

  1. Run llmfit --json system to show hardware summary
  2. Run llmfit recommend --json --limit 5 to get top picks
  3. Present the recommendations with scores and fit levels
  4. If the user wants to configure one, map it to the appropriate Ollama/vLLM/LM Studio tag
  5. Offer to update openclaw.json with the chosen model

When a user asks for a specific use case like "recommend a coding model":

  1. Run llmfit recommend --json --use-case coding --limit 3
  2. Present the coding-specific recommendations
  3. Offer to pull via Ollama and configure
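The presentation step in both workflows can be sketched as a small formatter over the recommendation fields documented earlier (the layout is our choice, not llmfit output):

```python
def present(models):
    """One summary line per recommendation, for step 3 of the workflow."""
    lines = []
    for m in models:
        lines.append(
            f"{m['name']}: score {m['score']}, fit {m['fit_level']}, "
            f"{m['best_quant']} @ ~{m['estimated_tps']} tok/s"
        )
    return "\n".join(lines)

sample = [{
    "name": "meta-llama/Llama-3.1-8B-Instruct", "score": 87,
    "fit_level": "Perfect", "best_quant": "Q5_K_M", "estimated_tps": 42,
}]
print(present(sample))
```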

Notes

  • llmfit detects NVIDIA GPUs (via nvidia-smi), AMD GPUs (via rocm-smi), and Apple Silicon (unified memory).
  • Multi-GPU setups aggregate VRAM across cards automatically.
  • The best_quant field tells you the optimal quantization — higher quant (Q6_K, Q8_0) means better quality if VRAM allows.
  • Speed estimates (estimated_tps) are approximate and vary by hardware and quantization.
  • Models with fit_level: "TooTight" should never be recommended to users.

API & Reliability

Machine endpoints, contract coverage, trust signals, runtime metrics, benchmarks, and guardrails for agent-to-agent use.

Missing · CLAWHUB

Machine interfaces

Contract & API

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.

Invocation examples
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/snapshot"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/contract"
curl -s "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/trust"

Operational fit

Reliability & Benchmarks

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Machine Appendix

Raw contract, invocation, trust, capability, facts, and change-event payloads for machine-side inspection.

Missing · CLAWHUB

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "CLAWHUB",
      "generatedAt": "2026-04-17T00:16:42.224Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
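A caller honouring the retryPolicy block above might look like this sketch. The `call` argument is a hypothetical zero-argument callable that raises `RetryableError` on the listed retryable conditions (HTTP_429, HTTP_503, network timeout); the backoff schedule is taken verbatim from the payload:

```python
import time

policy = {"maxAttempts": 3, "backoffMs": [500, 1500, 3500]}

class RetryableError(Exception):
    """Raised by `call` for retryable conditions (429, 503, timeout)."""

def with_retries(call, policy, sleep=time.sleep):
    """Invoke `call`, retrying per the payload's retryPolicy.

    Sleeps backoffMs[attempt] milliseconds between attempts and re-raises
    once maxAttempts is exhausted.
    """
    for attempt in range(policy["maxAttempts"]):
        try:
            return call()
        except RetryableError:
            if attempt == policy["maxAttempts"] - 1:
                raise
            sleep(policy["backoffMs"][attempt] / 1000)
```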

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "i",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:i|supported|profile"
}

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Openclaw",
    "href": "https://github.com/openclaw/skills/tree/main/skills/alexsjones/llmfit",
    "sourceUrl": "https://github.com/openclaw/skills/tree/main/skills/alexsjones/llmfit",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T00:45:39.800Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T00:45:39.800Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/clawhub-skills-alexsjones-llmfit/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
