Crawler Summary

huggingface answer-first brief

Helps developers interact with the HuggingFace Hub for machine learning workflows. Supports authentication, model/dataset operations, inference, and fine-tuning. Activate with: huggingface, hugging face, hf, transformers, pipeline, inference, model hub, dataset hub, AutoModel, AutoTokenizer, fine-tune, deploy model. License: Apache-2.0. Compatibility: requires Python 3.8+ and internet access for Hub operations. Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Freshness

Last checked 4/15/2026

Best For

huggingface is best for workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLAW, runtime-metrics, public facts pack

Agent Dossier · GitHub · Safety: 94/100

huggingface

Helps developers interact with the HuggingFace Hub for machine learning workflows. Supports authentication, model/dataset operations, inference, and fine-tuning. License: Apache-2.0. Requires Python 3.8+ and internet access for Hub operations.

OpenClaw · self-declared

Public facts

4

Change events

1

Artifacts

0

Freshness

Apr 15, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Ak Skill

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Setup snapshot

git clone https://github.com/ak-skill/huggingface.git
  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.

  2. Final validation: expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Ak Skill

profile · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed: unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLAW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

bash

# Login interactively (recommended)
hf auth login

# Or set token via environment variable
export HF_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"

# Verify authentication
hf auth whoami

bash

# Download entire model repository
hf download meta-llama/Llama-2-7b-hf

# Download specific files
hf download gpt2 config.json pytorch_model.bin

# Download to specific directory
hf download gpt2 --local-dir ./models/gpt2

python

# Quick inference with pipeline
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Once upon a time", max_length=50)
print(result[0]["generated_text"])

bash

# Create a new repository
hf repo create my-fine-tuned-model --type model

# Upload files
hf upload my-username/my-fine-tuned-model ./output/

python

from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset

# Load model and dataset
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
dataset = load_dataset("imdb")

# Configure training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    push_to_hub=True,
)

# Train
trainer = Trainer(model=model, args=training_args, train_dataset=dataset["train"])
trainer.train()
trainer.push_to_hub()

bash

# Authenticate (required for gated models like Mistral)
hf auth login

# Download the model
hf download mistralai/Mistral-7B-v0.1 --local-dir ./models/mistral-7b

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLAW

Docs source

GITHUB OPENCLAW

Editorial quality

ready


Full README

name: huggingface
description: |
  Helps developers interact with HuggingFace Hub for machine learning workflows. Supports authentication, model/dataset operations, inference, and fine-tuning. Activate with: huggingface, hugging face, hf, transformers, pipeline, inference, model hub, dataset hub, AutoModel, AutoTokenizer, fine-tune, deploy model
license: Apache-2.0
compatibility: Requires Python 3.8+, internet access for Hub operations
metadata:
  author: Claude Agent
  version: "1.0.0"
allowed-tools: Bash Read Write Edit WebFetch

HuggingFace Hub Integration

Interact with the HuggingFace ecosystem for model management, inference, and training workflows.

Overview

This skill helps you work with HuggingFace Hub - authenticate, download/upload models and datasets, run inference using pipelines or AutoModel patterns, and fine-tune models. It supports both CLI (hf command) and Python SDK approaches.

Instructions

1. Authentication Setup

# Login interactively (recommended)
hf auth login

# Or set token via environment variable
export HF_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"

# Verify authentication
hf auth whoami
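
Tokens read from the environment can silently be truncated or carry stray whitespace, which surfaces later as confusing 401 errors. The following stdlib-only sketch is illustrative: the `hf_` prefix convention is taken from the placeholder token shown above, not from any official validation API — real validation still requires `hf auth whoami`.

```python
import os

def looks_like_hf_token(token: str) -> bool:
    """Heuristic sanity check for a HuggingFace-style access token.

    Only catches obvious copy/paste mistakes (truncation, embedded
    spaces, missing hf_ prefix); it does not verify the token.
    """
    token = token.strip()
    return token.startswith("hf_") and len(token) > 10 and " " not in token

token = os.environ.get("HF_TOKEN", "")
if token and not looks_like_hf_token(token):
    print("HF_TOKEN is set but does not look like a valid token")
```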

2. Download Models or Datasets

# Download entire model repository
hf download meta-llama/Llama-2-7b-hf

# Download specific files
hf download gpt2 config.json pytorch_model.bin

# Download to specific directory
hf download gpt2 --local-dir ./models/gpt2

3. Run Inference

# Quick inference with pipeline
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Once upon a time", max_length=50)
print(result[0]["generated_text"])

4. Upload Models to Hub

# Create a new repository
hf repo create my-fine-tuned-model --type model

# Upload files
hf upload my-username/my-fine-tuned-model ./output/

5. Fine-tune a Model

from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset

# Load model and dataset
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
dataset = load_dataset("imdb")

# Configure training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    push_to_hub=True,
)

# Train
trainer = Trainer(model=model, args=training_args, train_dataset=dataset["train"])
trainer.train()
trainer.push_to_hub()

Usage Examples

Example 1: Authenticate and Download a Model

Input: "I need to download the Mistral-7B model from HuggingFace"

Output: First authenticate, then download the model:

# Authenticate (required for gated models like Mistral)
hf auth login

# Download the model
hf download mistralai/Mistral-7B-v0.1 --local-dir ./models/mistral-7b

Example 2: Run Text Generation Inference

Input: "Run inference on GPT-2 to generate text"

Output: Use the pipeline API for simple inference:

from transformers import pipeline

# Create generator pipeline
generator = pipeline("text-generation", model="gpt2", device="cpu")

# Generate text
outputs = generator(
    "The future of AI is",
    max_new_tokens=100,
    num_return_sequences=1,
    temperature=0.7
)

for output in outputs:
    print(output["generated_text"])

Example 3: Upload a Fine-tuned Model

Input: "Upload my trained model to HuggingFace Hub"

Output: Create repository and upload:

# Create the repository
hf repo create my-bert-classifier --type model

# Upload all files from your output directory
hf upload your-username/my-bert-classifier ./fine-tuned-model/

# Or upload with Python
from huggingface_hub import HfApi
api = HfApi()
api.upload_folder(
    folder_path="./fine-tuned-model",
    repo_id="your-username/my-bert-classifier",
    repo_type="model"
)

Guidelines

  • Always authenticate first for private or gated models (Llama, Mistral, etc.)
  • Use cache effectively: Models are cached in ~/.cache/huggingface/hub by default
  • Check model task type: Match pipeline task to model (text-generation, text-classification, etc.)
  • Manage tokens securely: Use hf auth login instead of hardcoding tokens in scripts
  • Specify device explicitly: Use device="cuda" for GPU or device="cpu" for CPU inference
  • Use revision parameter: Pin model versions with revision="v1.0" for reproducibility
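
The default cache location noted above can be redirected through environment variables. This stdlib sketch mirrors the commonly documented precedence (`HF_HUB_CACHE` wins, then `HF_HOME`, then the `~/.cache/huggingface/hub` default) — treat the exact order as an assumption and confirm against the huggingface_hub documentation for your installed version.

```python
import os
from pathlib import Path

def hf_cache_dir(env=None) -> Path:
    """Resolve the model cache directory: explicit HF_HUB_CACHE first,
    then HF_HOME/hub, then the documented ~/.cache/huggingface/hub default."""
    env = os.environ if env is None else env
    if "HF_HUB_CACHE" in env:
        return Path(env["HF_HUB_CACHE"])
    if "HF_HOME" in env:
        return Path(env["HF_HOME"]) / "hub"
    return Path.home() / ".cache" / "huggingface" / "hub"

print(hf_cache_dir())
```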

Common Patterns

CLI Pattern: Quick Download and Cache Check

# Check what's in cache
hf cache ls

# Download model
hf download facebook/opt-350m

# Remove old cached models
hf cache rm facebook/opt-125m

Python Pattern: AutoModel for Full Control

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Move to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Tokenize and generate
inputs = tokenizer("Hello, world!", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Python Pattern: InferenceClient for API Access

from huggingface_hub import InferenceClient

# Use serverless inference API
client = InferenceClient(token="hf_xxxx")

# Text generation
response = client.text_generation(
    "Explain quantum computing:",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    max_new_tokens=200
)
print(response)

Edge Cases

  • Private/Gated Models: Require authentication and accepting model license on HF website first
  • Large Models (>10GB): Use hf download --resume for interrupted downloads; consider quantized versions
  • GPU Memory Errors: Use device_map="auto" for automatic model sharding across devices
  • Organization Repos: Use format org-name/model-name for organization repositories
  • Offline Usage: Set HF_HUB_OFFLINE=1 to use only cached models
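
The offline switch above is an environment flag read at runtime. A small sketch of the truthiness convention — the accepted spellings here ("1", "on", "yes", "true") are an assumption; check huggingface_hub's environment-variable docs for the exact rule:

```python
import os

_TRUTHY = {"1", "on", "yes", "true"}

def hub_offline(env=None) -> bool:
    """Report whether HF_HUB_OFFLINE requests cache-only operation.

    The set of truthy spellings is illustrative, not authoritative.
    """
    env = os.environ if env is None else env
    return env.get("HF_HUB_OFFLINE", "").strip().lower() in _TRUTHY

print(hub_offline())
```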

References

For detailed technical reference, see:

  • REFERENCE.md - Complete CLI commands, Python APIs, and troubleshooting

Limitations

  • Requires internet connection for downloading models and Hub operations
  • Large models need significant disk space (some exceed 100GB)
  • GPU inference requires CUDA-compatible hardware and drivers
  • Some models require accepting license terms on the HuggingFace website
  • Rate limits apply to serverless Inference API (use dedicated endpoints for production)

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLAW

Contract coverage

Status

missing

Auth

None

Streaming

No

Data region

Unspecified

Protocol support

OpenClaw: self-declared

Requires: none

Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/ak-skill-huggingface/snapshot"
curl -s "https://xpersona.co/api/v1/agents/ak-skill-huggingface/contract"
curl -s "https://xpersona.co/api/v1/agents/ak-skill-huggingface/trust"
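
The same three endpoints can be reached from Python. This sketch only builds the URLs from the agent slug; the endpoint layout is taken from the curl examples above, and the response shape and availability are not verified here.

```python
from urllib.parse import urljoin

BASE = "https://xpersona.co/api/v1/agents/"

def agent_endpoints(slug: str) -> dict:
    """Build the snapshot/contract/trust URLs for an agent slug,
    matching the curl invocation examples."""
    root = urljoin(BASE, slug + "/")
    return {kind: urljoin(root, kind) for kind in ("snapshot", "contract", "trust")}

for kind, url in agent_endpoints("ak-skill-huggingface").items():
    print(kind, url)
```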

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake

UNKNOWN

Confidence

unknown

Attempts 30d

unknown

Fallback rate

unknown

Runtime metrics

Observed P50

unknown

Observed P95

unknown

Rate limit

unknown

Estimated cost

unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors
GITHUB_REPOS · activepieces

Rank

70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction

No public download signal

Freshness

Updated 2d ago

OPENCLAW
GITHUB_REPOS · cherry-studio

Rank

70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction

No public download signal

Freshness

Updated 5d ago

MCP · OPENCLAW
GITHUB_REPOS · AionUi

Rank

70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction

No public download signal

Freshness

Updated 6d ago

MCP · OPENCLAW
GITHUB_REPOS · CopilotKit

Rank

70

The Frontend for Agents & Generative UI. React + Angular

Traction

No public download signal

Freshness

Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/ak-skill-huggingface/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/ak-skill-huggingface/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/ak-skill-huggingface/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/ak-skill-huggingface/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/ak-skill-huggingface/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/ak-skill-huggingface/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T01:47:33.946Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
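
The `retryPolicy` above is plain data; a client has to implement it. A minimal sketch, assuming the caller signals retryable failures by raising an exception tagged with one of the listed condition codes:

```python
import time

RETRY_POLICY = {
    "maxAttempts": 3,
    "backoffMs": [500, 1500, 3500],
    "retryableConditions": ["HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"],
}

class RetryableError(Exception):
    """Failure tagged with a condition code from the retry policy."""
    def __init__(self, condition: str):
        super().__init__(condition)
        self.condition = condition

def call_with_retry(fn, policy=RETRY_POLICY, sleep=time.sleep):
    """Invoke fn(), retrying per the policy: up to maxAttempts tries,
    sleeping backoffMs[i] milliseconds before retry i+1, and only for
    conditions the policy marks as retryable."""
    for attempt in range(policy["maxAttempts"]):
        try:
            return fn()
        except RetryableError as err:
            if err.condition not in policy["retryableConditions"]:
                raise
            if attempt == policy["maxAttempts"] - 1:
                raise
            sleep(policy["backoffMs"][attempt] / 1000)
```

The `sleep` parameter is injected so the schedule can be tested (or replaced with an async timer) without real delays.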

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "both",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:both|supported|profile"
}
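
The `flattenedTokens` string packs each matrix row as `type:key|support|confidenceSource`, separated by spaces. A small parser (illustrative; the format is inferred from the rows/flattenedTokens pairing above) recovers the rows:

```python
def parse_flattened_tokens(tokens: str) -> list:
    """Parse 'type:key|support|source' tokens back into capability
    matrix rows matching the JSON structure above."""
    rows = []
    for token in tokens.split():
        head, support, source = token.split("|")
        row_type, key = head.split(":", 1)
        rows.append({"key": key, "type": row_type,
                     "support": support, "confidenceSource": source})
    return rows

tokens = "protocol:OPENCLEW|unknown|profile capability:both|supported|profile"
for row in parse_flattened_tokens(tokens):
    print(row)
```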

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Ak Skill",
    "href": "https://github.com/ak-skill/huggingface",
    "sourceUrl": "https://github.com/ak-skill/huggingface",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T03:16:46.988Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/ak-skill-huggingface/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/ak-skill-huggingface/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T03:16:46.988Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/ak-skill-huggingface/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/ak-skill-huggingface/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
