{"id":"b22057d8-a4a1-4f1e-8cde-23582a5c3c81","slug":"smithery-hyperplexity-hyperplexity","name":"hyperplexity","description":"# Hyperplexity\n\n**Verified Research Engine** · [hyperplexity.ai](https://hyperplexity.ai) · [Launch App](https://hyperplexity.ai/app)\n\nHyperplexity generates, validates, and updates research tables by synthesizing hundreds of calls to Perplexity and Claude. Give it a prompt or an existing table and it returns structured, verified answers across an entire research domain — not just a single query, but a complete field of questions answered at once.\n\n| What you want to do | How | Live example |\n|---|---|---|\n| **Gather everything** — survey a complete research domain at once | Prompt → structured verified table | [50+ Phase 3 oncology trials](https://eliyahu.ai/viewer?demo=phase-3-oncology-drug-trials-9383b5) |\n| **Monitor anything** — news, analyst projections, time-sensitive data | Upload or generate → keep current | [Market info for 10 stocks](https://eliyahu.ai/viewer?demo=investmentresearch-779dfe) |\n| **See everywhere** — run the same questions across many entities | One table, many subjects | [GenAI adoption across Fortune 500](https://eliyahu.ai/viewer?demo=fortune-500-genai-deployment-and-upskilling-0c5503) |\n\n## How to Access\n\n| You want to… | Use |\n|---|---|\n| Try it out or iron out your use case | **[hyperplexity.ai/app](https://hyperplexity.ai/app)** — web GUI for table validation and generation |\n| Fact-check text or documents interactively | **[hyperplexity.ai/chex](https://hyperplexity.ai/chex)** — web GUI for reference checks |\n| Let an AI agent drive a workflow autonomously | **MCP server** — install once, describe your task in plain English |\n| One-off automation without writing code | **MCP server** via Claude Code, Claude Desktop, or any MCP-compatible client |\n| Run repeatable pipelines or batch jobs | **REST API** + example scripts |\n| Integrate into a product or SaaS | **REST API** directly |\n\n> 
**GUI → API:** The web GUIs are ideal for exploring and refining your use case. Once you know what you want, the MCP server or REST API is the better path — faster, repeatable, and fully automatable.\n\n---\n\n## Table of Contents\n\n- [Get Your API Key](#get-your-api-key)\n- [Download Examples](#download-examples)\n- [Quick Start: MCP](#quick-start-mcp)\n  - [Option A: Direct HTTP connection to Railway (recommended)](#option-a--direct-http-connection-to-railway-recommended-for-claude-code)\n  - [Option B: Local install via uvx](#option-b--local-install-via-uvx)\n  - [Option C: Smithery](#option-c--smithery)\n  - [What to Ask Your Agent](#what-to-ask-your-agent)\n- [Workflows](#workflows)\n  - [1. Validate an Existing Table](#1-validate-an-existing-table)\n  - [2. Generate a Table from a Prompt](#2-generate-a-table-from-a-prompt)\n  - [3. Update a Table](#3-update-a-table-re-run-validation-pass)\n  - [4. Fact-Check Text or Documents](#4-fact-check-text-or-documents-chex)\n- [Environment Variables](#environment-variables)\n- [Direct REST API](#direct-rest-api)\n  - [API Endpoint Reference](#api-endpoint-reference)\n- [MCP Prompts](#mcp-prompts)\n- [MCP Tool Reference](#mcp-tool-reference)\n- [Key Behaviors](#key-behaviors)\n- [Pricing](#pricing)\n- [Links](#links)\n\n---\n\n## Get Your API Key\n\nGet your API key at **[hyperplexity.ai/account](https://hyperplexity.ai/account)**. 
New accounts receive $20 in free credits.\n\n---\n\n## Download Examples\n\n> All scripts require Python 3.10+ and `pip install requests`.\n\n| Script | Description | Download |\n|--------|-------------|----------|\n| `hyperplexity_client.py` | Shared REST client (required by all examples) | [download](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/hyperplexity_client.py) |\n| `01_validate_table.py` | Validate an existing table | [download](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/01_validate_table.py) |\n| `02_generate_table.py` | Generate a table from a prompt | [download](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/02_generate_table.py) |\n| `03_update_table.py` | Re-run validation on a completed job | [download](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/03_update_table.py) |\n| `04_reference_check.py` | Fact-check text or documents | [download](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/04_reference_check.py) |\n\nOr download the full example set:\n\n```bash\n# Download all examples at once\ncurl -O https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/hyperplexity_client.py \\\n     -O https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/01_validate_table.py \\\n     -O https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/02_generate_table.py \\\n     -O https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/03_update_table.py \\\n     -O https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/04_reference_check.py\npip install requests\nexport HYPERPLEXITY_API_KEY=hpx_live_...\n```\n\n---\n\n## Quick Start: MCP\n\nThe MCP server lets any AI agent drive the full Hyperplexity workflow autonomously — no scripting required.\n\n### Option A — Direct HTTP connection to Railway (recommended for Claude Code)\n\nConnects directly to 
the hosted Hyperplexity server over HTTP. No local install, no `uvx`, no package management — just one command.\n\n**Claude Code:**\n```bash\nclaude mcp add hyperplexity \\\n  --transport http \\\n  https://mcp-server-hyperplexity-production.up.railway.app/ \\\n  --header \"X-Api-Key: hpx_live_your_key_here\"\n```\n\n**Via config file** (`.mcp.json` in your repo root, or `claude_desktop_config.json`):\n```json\n{\n  \"mcpServers\": {\n    \"hyperplexity\": {\n      \"type\": \"http\",\n      \"url\": \"https://mcp-server-hyperplexity-production.up.railway.app/\",\n      \"headers\": {\n        \"X-Api-Key\": \"hpx_live_your_key_here\"\n      }\n    }\n  }\n}\n```\n\n> **Why HTTP over uvx?** The HTTP connection runs on Railway — always up to date, no local Python environment needed, and no version drift between the package you installed and the live server. Recommended for Claude Code and any project-level config.\n\n### Option B — Local install via uvx\n\nRuns the server locally on your machine using `uvx`. Useful for Claude Desktop or offline/air-gapped environments.\n\n**Claude Code:**\n```bash\nclaude mcp add hyperplexity uvx mcp-server-hyperplexity \\\n  -e HYPERPLEXITY_API_KEY=hpx_live_your_key_here\n```\n\n**Claude Desktop** — add to `claude_desktop_config.json`:\n\n- **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`\n- **Windows:** `%APPDATA%\\Claude\\claude_desktop_config.json`\n\n```json\n{\n  \"mcpServers\": {\n    \"hyperplexity\": {\n      \"command\": \"uvx\",\n      \"args\": [\"mcp-server-hyperplexity\"],\n      \"env\": {\n        \"HYPERPLEXITY_API_KEY\": \"hpx_live_your_key_here\"\n      }\n    }\n  }\n}\n```\n\n**Project config (shared repo)** — add `.mcp.json` to your repo root. 
Each person uses their own key; no key is committed to the repo:\n\n```json\n{\n  \"mcpServers\": {\n    \"hyperplexity\": {\n      \"command\": \"uvx\",\n      \"args\": [\"mcp-server-hyperplexity\"],\n      \"env\": {\n        \"HYPERPLEXITY_API_KEY\": \"${HYPERPLEXITY_API_KEY}\"\n      }\n    }\n  }\n}\n```\n\n**OpenAI Codex CLI** — add to your Codex config file (`~/.codex/config.toml` on macOS/Linux, `%USERPROFILE%\\.codex\\config.toml` on Windows):\n\n```toml\n[mcp_servers.hyperplexity]\ncommand = \"uvx\"\nargs = [\"mcp-server-hyperplexity\"]\n\n[mcp_servers.hyperplexity.env]\nHYPERPLEXITY_API_KEY = \"hpx_live_your_key_here\"\n```\n\nThen restart Codex and verify:\n```bash\ncodex mcp list\n```\n\n### Option C — Smithery\n\n[Smithery](https://smithery.ai) is an MCP registry that works with Claude Code and other MCP-compatible clients including OpenClaw.\n\n**Step 1 — Install and log in:**\n```bash\nnpx -y @smithery/cli@latest login\nnpx -y @smithery/cli@latest mcp add hyperplexity/hyperplexity --client claude-code\n```\n\n**Step 2 — Authenticate with your API key:**\n\nOpen your MCP client (e.g. Claude Code), go to `/mcp`, click **hyperplexity → Authenticate**, and enter your Hyperplexity API key in the Smithery page that opens.\n\n> Smithery login is a one-time step. You must log in before adding servers, or authentication will not be set up correctly.\n\n---\n\n## What to Ask Your Agent\n\nOnce the MCP server is installed, describe your task in plain English. The agent drives the full workflow, pausing only when your input is genuinely needed.\n\n**Validate a table:**\n> \"Validate `companies.xlsx` using Hyperplexity. Interview me about what each column means, then run the preview. If the results look good, approve the full validation.\"\n\n**Generate a table:**\n> \"Use Hyperplexity to generate a table of the top 20 US hedge funds with columns: fund name, AUM, primary strategy, founding year, and HQ city. 
Approve the full validation when the preview looks right.\"\n\n**Re-run validation on the same table:**\n> \"Re-run update_table on job `session_20260217_103045_abc123` to get an updated validation pass.\"\n\n**Fact-check a document:**\n> \"Use Hyperplexity to fact-check this analyst report.\" *(paste the text or share the file path)*\n\n---\n\n## Workflows\n\n### 1. Validate an Existing Table\n\n> **Minimum rows:** Hyperplexity is designed for tables with **4 or more data rows**. Fewer rows may produce low-quality results.\n\n**Full flow: upload → interview → preview → refine → approve → download**\n\n```\nupload_file(path)\n  → start_table_validation(session_id, s3_key, filename)\n      ┌── match found (score ≥ 0.85) → [preview auto-queued; response has preview_queued=true + job_id]\n      └── no match → interview auto-started\n            → wait_for_conversation / poll get_conversation\n              → send_conversation_reply  (if AI asks questions)\n              → [interview complete → preview auto-queued]\n\n  → wait_for_job(job_id or session_id)  ← blocks until preview_complete\n      → [optional] refine_config(conv_id, session_id, instructions)\n      → approve_validation(job_id, cost_usd)\n      → wait_for_job(job_id)            ← blocks until completed\n      → get_results(job_id)\n```\n\n> **Key behavior:** The preview is always auto-queued — after the interview finishes (`trigger_config_generation=true`), or when a config match is found (`match_score ≥ 0.85`, response includes `preview_queued: true` and `job_id`). Call `wait_for_job(session_id)` directly in all cases (see [Config reuse](#config-reuse)).\n\n> **Upload interview auto-approval:** The interview may auto-approve in a single turn. 
If the conversation response has `user_reply_needed: false` and `status: approved`, proceed to `wait_for_job(session_id)` immediately — no reply is needed, even if the AI's message appears to ask for confirmation.\n\n**Skip the interview with `instructions` (fire-and-forget config generation):**\n\nPass `instructions` to `start_table_validation` to bypass the interactive interview. The AI reads the table structure + your instructions and generates a config directly, then auto-triggers the preview — no clarifying questions needed.\n\n```\nstart_table_validation(session_id, s3_key, filename,\n  instructions=\"This table lists hedge funds. Validate AUM, strategy, and HQ city. Use Bloomberg and SEC filings.\")\n  → response includes instructions_mode=true\n  → wait_for_job(session_id)          ← config generation + preview tracked automatically\n  → approve_validation(job_id, cost_usd)\n  → wait_for_job(job_id)\n  → get_results(job_id)\n```\n\n> **Cost gate:** Config generation and the 3-row preview are **free**. Full validation is charged at `approve_validation` — you always see the cost estimate at `preview_complete` before anything is billed. If your balance is insufficient, `approve_validation` returns an `insufficient_balance` error with the required amount.\n\n**Refine the config** before approving by calling `refine_config`. This adjusts how columns are validated (sources, strictness, interpretation) — it cannot add or remove columns:\n\n```\nrefine_config(conversation_id, session_id,\n  \"Use SEC filings as the primary source for revenue. 
Require exact match for ticker symbols.\")\n```\n\nA new preview runs automatically after refinement.\n\n**Python script:** [`examples/01_validate_table.py`](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/01_validate_table.py)\n\n```bash\nexport HYPERPLEXITY_API_KEY=hpx_live_...\npython examples/01_validate_table.py companies.xlsx\npython examples/01_validate_table.py companies.xlsx --refine \"Require exact match for ticker symbols\"\n\n# Fire-and-forget: provide instructions to skip the interview entirely\npython examples/01_validate_table.py companies.xlsx \\\n    --instructions \"This table lists hedge funds. Validate AUM, strategy, and HQ city.\"\n```\n\n---\n\n### 2. Generate a Table from a Prompt\n\nDescribe the table you want — rows, columns, scope — and Hyperplexity builds and validates it from scratch. Designed for tables with **4 or more rows**.\n\n```\nstart_table_maker(\"Top 20 US biotech companies: name, ticker, market cap, lead drug, phase\")\n  → wait_for_conversation / poll get_conversation\n    → send_conversation_reply  (if AI asks clarifying questions)\n    → [table builds → preview auto-queued]\n\n  → wait_for_job(session_id)          ← spans table-maker + preview phases\n    → approve_validation(job_id, cost_usd)\n    → wait_for_job(job_id)\n    → get_results(job_id)\n```\n\n> **Auto-approve:** The agent can auto-approve the preview and proceed to full validation without human intervention. The preview table is included inline in the `preview_complete` response.\n\n> **Cost:** ~$0.05/cell (standard), up to ~$0.25/cell (advanced). $2 minimum per run.\n\n**Skip confirmation with `auto_start=True` (fire-and-forget generation):**\n\nPass `auto_start=True` to skip the AI's clarifying questions and structure-confirmation step. The AI generates the table immediately from the message alone. 
Use when your message fully describes the desired table.\n\n```\nstart_table_maker(\n  \"Top 20 US hedge funds: fund name, AUM, primary strategy, founding year, HQ city\",\n  auto_start=True)\n  → wait_for_conversation(conversation_id, session_id)\n      ← returns trigger_execution=true on first response (no Q&A)\n  → wait_for_job(session_id)          ← table building + preview\n  → approve_validation(job_id, cost_usd)\n  → wait_for_job(job_id)\n  → get_results(job_id)\n```\n\n> **Why `wait_for_conversation` with `auto_start=True`?** Even though there is no Q&A, `wait_for_conversation` is still required — it returns `trigger_execution: true` in a single blocking call (no reply needed), signaling that the table-maker has started. Calling `wait_for_job` before this call returns would be premature, as the table-maker may not have been triggered yet.\n\n> **Cost gate:** Table building and the 3-row preview are **free**. Full validation is charged at `approve_validation` — you always see the cost estimate at `preview_complete` before anything is billed. If your balance is insufficient, `approve_validation` returns an `insufficient_balance` error with the required amount.\n\n**Python script:** [`examples/02_generate_table.py`](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/02_generate_table.py)\n\n```bash\npython examples/02_generate_table.py \"Top 10 US hedge funds: fund name, AUM, strategy, HQ city\"\npython examples/02_generate_table.py --prompt-file my_spec.txt\n\n# Fire-and-forget: skip clarifying Q&A and generate immediately from the prompt\npython examples/02_generate_table.py --auto-start \"Top 10 US hedge funds: fund name, AUM, strategy, HQ city\"\n```\n\n---\n\n### 3. Update a Table (Re-run Validation Pass)\n\nRe-run validation on a completed job — no re-upload or manual edits needed. 
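\n\nThis can also be scripted against the REST API directly. A minimal sketch — `POST /jobs/update-table` is a documented endpoint, but the request-body shape (`source_job_id`, mirroring the MCP tool's argument) and the `job_id` response key are assumptions:\n\n```python\nimport os, time\n\nimport requests\n\nBASE_URL = \"https://api.hyperplexity.ai/v1\"\n\ndef _headers():\n    return {\"Authorization\": f\"Bearer {os.environ['HYPERPLEXITY_API_KEY']}\"}\n\ndef update_table(source_job_id: str) -> str:\n    \"\"\"Re-validate a completed job's enriched output; returns the new job id (assumed key).\"\"\"\n    r = requests.post(f\"{BASE_URL}/jobs/update-table\", headers=_headers(),\n                      json={\"source_job_id\": source_job_id})\n    r.raise_for_status()\n    return r.json()[\"data\"][\"job_id\"]\n\ndef wait_for_job(job_id: str, poll_secs: int = 10) -> str:\n    \"\"\"Poll until the job reaches preview_complete, completed, or failed.\"\"\"\n    while True:\n        status = requests.get(f\"{BASE_URL}/jobs/{job_id}\", headers=_headers()).json()[\"data\"][\"status\"]\n        if status in (\"preview_complete\", \"completed\", \"failed\"):\n            return status\n        time.sleep(poll_secs)\n```\n\n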
The table iterates automatically, re-validating the same data with the same config to pick up any changes in source data.\n\nIf you want to incorporate manual edits to the output file, re-upload the edited file via `upload_file` + `start_table_validation` — a matching config will be found automatically (score ≥ 0.85).\n\n```\nupdate_table(source_job_id)           ← re-validates existing enriched output\n  → wait_for_job(new_job_id)          ← blocks until preview_complete\n    → approve_validation(new_job_id, cost_usd)\n    → wait_for_job(new_job_id)\n    → get_results(new_job_id)\n```\n\n**Python script:** [`examples/03_update_table.py`](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/03_update_table.py)\n\n```bash\npython examples/03_update_table.py session_20260217_103045_abc123\npython examples/03_update_table.py session_20260217_103045_abc123 --version 2\n```\n\n---\n\n### 4. Fact-Check Text or Documents (Chex)\n\nSubmit any text, report, or document. Hyperplexity checks each factual claim against authoritative sources and returns the same output format as standard table validation: an Excel (XLSX) file, an interactive viewer URL, and a metadata JSON.\n\n> **Minimum claims:** Hyperplexity is designed for text with **4 or more factual claims**. 
Fewer claims may produce low-quality results.\n\n```\nstart_reference_check(text=\"...\")           ← inline text (or auto_approve=True to skip the gate)\n  or\nupload_file(path, \"pdf\")              ← upload PDF/document first\n  → start_reference_check(s3_key=s3_key)\n\n→ wait_for_job(job_id)                ← spans extraction + 3-row preview; stops at preview_complete\n  → preview_table (3 validated sample claims) + cost_estimate shown in response\n  → approve_validation(job_id, approved_cost_usd=X)   ← triggers Phase 2\n  → wait_for_job(job_id)              ← waits for completed\n  → get_results(job_id)               ← download_url (XLSX) + interactive_viewer_url + metadata_url\n```\n\n> **Three-phase flow:** Phase 1 (claim extraction, free) runs automatically, then a 3-row preview validates sample claims (free, auto-triggered). Both phases are tracked by a single `wait_for_job` call that stops at `status=preview_complete`. Review `preview_table` (3 validated sample claims with support level and citations) and `cost_estimate`, then call `approve_validation` to start Phase 2 (full validation, charged). Pass `auto_approve=True` to skip the gate and run straight through to `completed`.\n\n> **Progress tracking:** `get_job_messages` always returns empty for reference-check jobs. Use `get_job_status` (`current_step`, `progress_percent`) to track progress.\n\n**Output:** Excel (XLSX) file with per-claim rows. Support levels: SUPPORTED / PARTIAL / UNSUPPORTED / UNVERIFIABLE. 
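\n\nFor batch pipelines, the auto-approved path can be sketched at the REST level. `POST /jobs/reference-check` appears in the API reference; the `text`/`auto_approve` body fields mirror the MCP tool's parameters and, like the `job_id` response key, are assumptions at the HTTP layer:\n\n```python\nimport os, time\n\nimport requests\n\nBASE_URL = \"https://api.hyperplexity.ai/v1\"\n\ndef _headers():\n    return {\"Authorization\": f\"Bearer {os.environ['HYPERPLEXITY_API_KEY']}\"}\n\ndef reference_check(text: str, auto_approve: bool = True) -> dict:\n    \"\"\"Submit inline text, poll to completion, and return the results payload.\"\"\"\n    r = requests.post(f\"{BASE_URL}/jobs/reference-check\", headers=_headers(),\n                      json={\"text\": text, \"auto_approve\": auto_approve})\n    r.raise_for_status()\n    job_id = r.json()[\"data\"][\"job_id\"]  # assumed response key\n    while True:\n        # get_job_messages is always empty for reference checks, so poll status instead\n        s = requests.get(f\"{BASE_URL}/jobs/{job_id}\", headers=_headers()).json()[\"data\"]\n        if s[\"status\"] in (\"completed\", \"failed\"):\n            break\n        time.sleep(15)\n    return requests.get(f\"{BASE_URL}/jobs/{job_id}/results\", headers=_headers()).json()[\"data\"]\n```\n\n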
Share `interactive_viewer_url` with human stakeholders — it renders sources and confidence scores in a clean UI.\n\n**Python script:** [`examples/04_reference_check.py`](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/04_reference_check.py) | **Sample output:** [`sample_outputs/reference_check_output.json`](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/sample_outputs/reference_check_output.json)\n\n```bash\n# Fact-check inline text\npython examples/04_reference_check.py --text \"Bitcoin was created by Satoshi Nakamoto in 2009.\"\n\n# Fact-check a PDF\npython examples/04_reference_check.py --file analyst_report.pdf\n\n# Fact-check multiple documents concatenated\ncat doc1.txt doc2.txt | python examples/04_reference_check.py --stdin\n```\n\n> **--stdin:** Concatenates all piped content as a single inline text payload. All claims are attributed to the combined document.\n\n---\n\n## Environment Variables\n\n| Variable | Description |\n|----------|-------------|\n| `HYPERPLEXITY_API_KEY` | API key from [hyperplexity.ai/account](https://hyperplexity.ai/account). Required. New accounts get $20 free. |\n| `HYPERPLEXITY_API_URL` | Override the API base URL (useful for dev/staging environments). |\n\n---\n\n## Direct REST API\n\nAll tools in the MCP server are thin wrappers over the REST API. You can call it directly from any language.\n\n**Base URL:** `https://api.hyperplexity.ai/v1`\n\n**Auth:** `Authorization: Bearer hpx_live_your_key_here`\n\n**Response envelope:**\n\n```json\n{\n  \"success\": true,\n  \"data\": { ... 
},\n  \"meta\": { \"request_id\": \"...\", \"timestamp\": \"...\" }\n}\n```\n\n### Python client (minimal)\n\n```python\nimport os, requests\n\nBASE_URL = \"https://api.hyperplexity.ai/v1\"\nHEADERS  = {\"Authorization\": f\"Bearer {os.environ['HYPERPLEXITY_API_KEY']}\"}\n\ndef api_get(path, **kwargs):\n    r = requests.get(f\"{BASE_URL}{path}\", headers=HEADERS, **kwargs)\n    r.raise_for_status()\n    return r.json()[\"data\"]\n\ndef api_post(path, **kwargs):\n    r = requests.post(f\"{BASE_URL}{path}\", headers=HEADERS, **kwargs)\n    r.raise_for_status()\n    return r.json()[\"data\"]\n```\n\nA full standalone client module is in [`examples/hyperplexity_client.py`](https://hyperplexity-storage.s3.amazonaws.com/website_downloads/examples/hyperplexity_client.py).\n\n---\n\n## API Endpoint Reference\n\n### Uploads\n\n| Method | Path | Description |\n|--------|------|-------------|\n| `POST` | `/uploads/presigned` | Get a presigned S3 URL to upload a file |\n| `PUT`  | `<presigned_url>` | Upload file bytes directly to S3 (no auth header) |\n| `POST` | `/uploads/confirm` | Confirm upload; detect config matches; auto-start interview if no match |\n\n**Presigned upload request:**\n\n```json\n{\n  \"filename\": \"companies.xlsx\",\n  \"file_size\": 2048000,\n  \"file_type\": \"excel\",\n  \"content_type\": \"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet\"\n}\n```\n\nContent types: `excel` → `.xlsx`, `csv` → `.csv`, `pdf` → `.pdf`\n\n**Confirm upload request** (optional fields):\n\n```json\n{\n  \"session_id\": \"session_20260305_...\",\n  \"s3_key\": \"results/.../file.xlsx\",\n  \"filename\": \"companies.xlsx\",\n  \"instructions\": \"Validate AUM, strategy, and HQ city. Use Bloomberg and SEC filings as sources.\",\n  \"config_id\": \"session_20260217_103045_abc123_config_v1_...\"\n}\n```\n\n`instructions` — if provided, bypasses the interactive upload interview. The AI generates the config directly from the table structure + instructions. 
Response includes `instructions_mode: true` and `conversation_id`. Use `wait_for_job(session_id)` to track progress — do NOT poll the conversation.\n\n`config_id` — if provided, skips matching and the interview entirely. The specified config is applied immediately and the preview is auto-queued. Response includes `preview_queued: true` and `job_id`. Use `wait_for_job(job_id)` to track progress. The `configuration_id` for any completed job is returned by `GET /jobs/{id}/results` under `job_info.configuration_id`.\n\n---\n\n### Conversations\n\n| Method | Path | Description |\n|--------|------|-------------|\n| `POST` | `/conversations/table-maker` | Start a Table Maker session with a natural language prompt |\n| `GET`  | `/conversations/{id}?session_id=` | Poll conversation for status / AI messages |\n| `POST` | `/conversations/{id}/message` | Send a reply to the AI |\n| `POST` | `/conversations/{id}/refine-config` | Refine the config with natural language instructions |\n\n**Table Maker request body:**\n\n```json\n{\n  \"message\": \"Top 20 US hedge funds: fund name, AUM, primary strategy, founding year, HQ city\",\n  \"auto_start\": true\n}\n```\n\n`auto_start` — if `true`, the AI skips clarifying questions and the structure-confirmation step, proceeding directly to table generation. The first `get_conversation` response will have `trigger_execution: true`. 
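\n\nA sketch of the polling side of this flow, using the two endpoints listed above (the `conversation_id`/`session_id` response keys are assumptions):\n\n```python\nimport os, time\n\nimport requests\n\nBASE_URL = \"https://api.hyperplexity.ai/v1\"\n\ndef _headers():\n    return {\"Authorization\": f\"Bearer {os.environ['HYPERPLEXITY_API_KEY']}\"}\n\ndef start_table_maker_auto(message: str) -> dict:\n    \"\"\"Start a Table Maker session with auto_start and wait for trigger_execution.\"\"\"\n    r = requests.post(f\"{BASE_URL}/conversations/table-maker\", headers=_headers(),\n                      json={\"message\": message, \"auto_start\": True})\n    r.raise_for_status()\n    conv = r.json()[\"data\"]\n    while True:\n        c = requests.get(f\"{BASE_URL}/conversations/{conv['conversation_id']}\",\n                         headers=_headers(),\n                         params={\"session_id\": conv[\"session_id\"]}).json()[\"data\"]\n        if c.get(\"trigger_execution\"):\n            return c  # table generation has started; poll the job next\n        time.sleep(5)\n```\n\n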
Use when your message fully describes the desired table.\n\n---\n\n### Jobs\n\n| Method | Path | Description |\n|--------|------|-------------|\n| `GET`  | `/jobs/{id}` | Get job status and progress |\n| `GET`  | `/jobs/{id}/messages` | Fetch live progress messages (paginated by `since_seq`) |\n| `POST` | `/jobs/{id}/validate` | Approve full validation — credits charged here |\n| `GET`  | `/jobs/{id}/results` | Fetch download URL, metadata, viewer URL |\n| `POST` | `/jobs/update-table` | Re-validate enriched output after corrections |\n| `POST` | `/jobs/reference-check` | Submit text or file for claim verification |\n\n**Job status values:**\n\n| Status | Meaning |\n|--------|---------|\n| `queued` | Accepted, waiting to start |\n| `processing` | Actively running |\n| `preview_complete` | Free preview done — review results and approve full run |\n| `completed` | Full validation complete, results ready |\n| `failed` | Error — check `error.message` |\n\n---\n\n### Account\n\n| Method | Path | Description |\n|--------|------|-------------|\n| `GET`  | `/account/balance` | Current credit balance and this-month usage |\n| `GET`  | `/account/usage` | Billing history (supports `start_date`, `end_date`, `limit`, `offset`) |\n\n---\n\n## MCP Prompts\n\nThree built-in prompts act as workflow starters — select them from the prompt picker in your MCP client (Claude Code: `/` menu; Claude Desktop: the prompt icon) and fill in the arguments.\n\n| Prompt | Arguments | What it does |\n|--------|-----------|--------------|\n| `generate_table` | `description` (required), `columns` (optional) | Builds a step-by-step instruction for creating a new research table from a natural language description |\n| `validate_file` | `file_path` (required), `instructions` (optional) | Generates the full validation workflow for an existing Excel or CSV file |\n| `fact_check_text` | `text` (required) | Generates the reference-check workflow for fact-checking a text passage |\n\n---\n\n## MCP Tool 
Reference\n\nEvery tool response includes a `_guidance` block with a plain-English summary and the exact next tool call(s) — enabling fully autonomous agent workflows.\n\n| Tool | Description |\n|------|-------------|\n| `upload_file` | Upload Excel, CSV, or PDF (handles presigned S3 automatically) |\n| `start_table_validation` | Confirm upload; detect config matches; auto-start interview if needed |\n| `start_table_maker` | Start an AI conversation to generate a table from a prompt |\n| `get_conversation` | Poll a conversation for AI responses or status changes |\n| `send_conversation_reply` | Reply to AI questions during an interview or table-maker session |\n| `wait_for_conversation` | Block until conversation needs input or finishes (emits live progress) |\n| `refine_config` | Refine the validation config with natural language instructions (adjusts sources, strictness, interpretation — cannot add or remove columns) |\n| `wait_for_job` | Block until `preview_complete`, `completed`, or `failed` (preferred progress tracker) |\n| `get_job_status` | One-shot status poll |\n| `get_job_messages` | Fetch progress messages with native percentages (paginated) |\n| `approve_validation` | Approve preview → start full validation (credits charged here) |\n| `get_results` | Download URL, inline metadata, interactive viewer URL |\n| `update_table` | Re-validate enriched output after analyst corrections |\n| `start_reference_check` | Submit text or file for claim and citation verification |\n| `get_balance` | Check credit balance |\n| `get_usage` | Review billing history |\n\n---\n\n## Key Behaviors\n\n### Auto-queued preview\n\nThe preview is **automatically queued** in all three paths after `start_table_validation`:\n\n| Path | Trigger | What to call next |\n|------|---------|-------------------|\n| Config match (score ≥ 0.85) | `preview_queued: true` in response | `wait_for_job(job_id)` |\n| `instructions=` provided | `instructions_mode: true` in response | 
`wait_for_job(session_id)` |\n| Interview ran | `trigger_config_generation=true` from conversation | `wait_for_job(session_id)` |\n\nTo reuse a config from a different session, pass `config_id` to `start_table_validation` — the preview will be auto-queued immediately.\n\n### Config reuse\n\nIf `start_table_validation` returns `match_score ≥ 0.85`, the preview is automatically queued using the matched config. The response includes `preview_queued: true` and `job_id` — call `wait_for_job(job_id)` directly, no interview needed.\n\nThe `configuration_id` from any completed job's `get_results` response can be reused on future uploads of similar tables.\n\n### Cost confirmation gate\n\n`approve_validation` requires `approved_cost_usd` matching the preview estimate. This prevents surprise charges. The estimate is in the `preview_complete` job status response under `cost_estimate.estimated_total_cost_usd`.\n\nThis gate applies regardless of whether `instructions` or `auto_start` was used — both only skip the *interview/confirmation conversation*, not the cost approval step. If your balance is insufficient when `approve_validation` is called, the API returns:\n\n```json\n{ \"error\": \"insufficient_balance\", \"required_usd\": 4.20, \"current_balance_usd\": 1.50 }\n```\n\n### Fire-and-forget shortcuts\n\nTwo optional flags let fully automated pipelines skip interactive steps:\n\n| Flag | Tool | Skips | Next step |\n|------|------|-------|-----------|\n| `instructions=\"...\"` | `start_table_validation` | Upload interview Q&A | `wait_for_job(session_id)` |\n| `auto_start=True` | `start_table_maker` | Structure confirmation | `wait_for_conversation` → `wait_for_job` |\n\nThese flags use different terminal signals: `instructions=` (a config-gen flow) causes `trigger_config_generation: true` on the conversation response; `auto_start=True` (a table-maker flow) causes `trigger_execution: true`. 
Both skip interactive Q&A but produce different fields — do not wait for `trigger_execution` when using the `instructions=` upload path. The `preview_complete` cost gate and `approve_validation` still apply.\n\n### Consuming results: humans vs AI agents\n\n**Output files generated per run:**\n\n| File | Format | Description |\n|------|--------|-------------|\n| Preview table | Markdown (inline) | First 3 rows as markdown text; returned inline in the `preview_complete` job status response (not a separate download). Also available in `metadata.json` under `markdown_table`. |\n| Enriched results | Excel (`.xlsx`) | Ideal for sharing with humans; sources and citations are embedded in cell comments |\n| Full metadata | `metadata.json` | Complete per-cell detail for every row; use the `row_key` field to drill into specific rows programmatically |\n\n`get_results` returns:\n\n| Field | Type | Best for |\n|-------|------|----------|\n| `results.interactive_viewer_url` | URL | **Humans** — web viewer with confidence indicators (requires login at hyperplexity.ai with the same email as your API key) |\n| `results.download_url` | Presigned URL | **Humans** — download the enriched Excel (.xlsx) directly |\n| `results.metadata_url` | Presigned URL | **AI agents** — JSON file with all rows, per-cell details, and source citations |\n\n**Recommended AI agent workflow:**\n\n1. At `preview_complete`: read the inline `preview_table` (markdown, 3 rows) from `GET /jobs/{id}` to survey the table structure and spot-check values. The AI agent can review this inline table and call `approve_validation` directly — no human approval step is required.\n2. After full validation: fetch `results.metadata_url` → `table_metadata.json`. This contains every validated row.\n3. Use `rows[].row_key` (stable SHA-256) to cross-reference rows between the markdown summary and the detailed JSON.\n4. 
Per-cell fields in `table_metadata.json`:\n   - `cells[col].value` — validated value (legacy files may use `full_value`)\n   - `cells[col].confidence` — `HIGH` / `MEDIUM` / `LOW` / `ID`\n   - `cells[col].comment.validator_explanation` — reasoning\n   - `cells[col].comment.key_citation` — top authoritative source\n   - `cells[col].comment.sources[]` — all sources with `url` and `snippet`\n\n---\n\n## Pricing\n\n| Mode | Cost |\n|------|------|\n| Preview (first 3 rows) | Free |\n| Standard validation | ~$0.05 / cell |\n| Advanced validation | up to ~$0.25 / cell |\n| Minimum per run | $2.00 |\n| Reference check | TBD — contact support |\n\nCredits are prepaid. Get $20 free at **[hyperplexity.ai/account](https://hyperplexity.ai/account)**.\n\nStandard validation is used for most tables. Advanced validation is selected automatically when the table requires more sophisticated reasoning (e.g., scientific data, complex financial metrics, or cells with high ambiguity).\n\n---\n\n## Links\n\n- **MCP server (HTTP, recommended):** `claude mcp add hyperplexity --transport http https://mcp-server-hyperplexity-production.up.railway.app/ --header \"X-Api-Key: hpx_live_...\"` — no install needed\n- **MCP server (PyPI/uvx):** `uvx mcp-server-hyperplexity` — for Claude Desktop or offline use\n- **Source:** [github.com/EliyahuAI/mcp-server-hyperplexity](https://github.com/EliyahuAI/mcp-server-hyperplexity)\n- **Documentation:** [hyperplexity.ai/mcp](https://hyperplexity.ai/mcp)\n- **API reference:** [hyperplexity.ai/api](https://hyperplexity.ai/api)\n- **Account & credits:** [hyperplexity.ai/account](https://hyperplexity.ai/account)\n","capabilities":[],"protocols":["MCP"],"safetyScore":86,"overallRank":37.9,"trustScore":null,"trust":null,"source":"SMITHERY","updatedAt":"2026-04-15T00:30:28.231Z"}