Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
AI Clinical Documentation Assistant - Extract structured clinical data from any document format using local LLM, CrewAI multi-agent workflow, AWS Bedrock, FAISS vector search, PII masking & Databricks sync. Healthcare-grade AI system built on multi-agent workflows, local LLM extraction, and enterprise data governance; headline features include a 5-agent CrewAI workflow (specialized agents for retrieval, extraction, validation, explanation, and routing) and universal file parsing. Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 4/16/2026.
Freshness
Last checked 4/16/2026
Best For
Medical_Assistant is best for CrewAI multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Not ideal when deterministic execution is required: contract metadata is missing or unavailable.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
5
Change events
1
Artifacts
0
Freshness
Apr 16, 2026
Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 4/16/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 16, 2026
Vendor
Kazinymul
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. 1 GitHub star reported by the source. Last updated 4/16/2026.
Setup snapshot
git clone https://github.com/kaziNymul/Medical_Assistant.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
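The sandbox check above can be sketched in Python: intercept outbound socket connects while the agent processes a synthetic payload, so every egress attempt is recorded (and blocked) before real customer data is involved. This is a minimal sketch under assumptions; the mock payload and the `urllib` call standing in for the agent invocation are illustrative, not part of the package.

```python
import socket

class EgressRecorder:
    """Record (and optionally block) outbound connection attempts
    while code under test runs inside a sandbox."""
    def __init__(self, block=True):
        self.block = block
        self.attempts = []            # list of (host, port) tuples seen
        self._orig = socket.socket.connect

    def __enter__(self):
        recorder = self
        def traced_connect(sock, address):
            recorder.attempts.append(address)
            if recorder.block:
                raise ConnectionRefusedError(f"egress blocked: {address}")
            return recorder._orig(sock, address)
        socket.socket.connect = traced_connect
        return self

    def __exit__(self, *exc):
        socket.socket.connect = self._orig  # always restore the real connect
        return False

# Usage: feed the agent a synthetic record (no real PHI) and see where it calls out.
mock_payload = {"patient": "[PATIENT_NAME]", "note": "synthetic text, no real PHI"}
with EgressRecorder(block=True) as rec:
    try:
        import urllib.request
        urllib.request.urlopen("http://127.0.0.1:9", timeout=1)  # stand-in for the agent call
    except OSError:
        pass
print(rec.attempts)
```

Anything unexpected in `attempts` (telemetry hosts, third-party APIs) is a reason to withhold production data.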
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Kazinymul
Protocol compatibility
OpenClaw
Adoption signal
1 GitHub stars
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
python
mermaid
flowchart TB
subgraph Frontend["Frontend (React + Vite)"]
Dashboard[Dashboard]
Query[Query]
CaseBrief[Case Brief]
SmartExtract[Smart Extract]
Documents[Documents]
end
subgraph API["FastAPI Backend"]
QueryAPI["/api/query"]
CrewAPI["/api/crew"]
ExtractAPI["/api/extract"]
DocsAPI["/api/documents"]
end
subgraph Core["Core Services"]
RAG["RAG Pipeline"]
CrewAI["CrewAI Agents"]
Extractor["LLM Extractor"]
Masking["PII Masking"]
end
subgraph Storage["Data Layer"]
FAISS[(FAISS\n55,500 docs)]
SQLite[(SQLite\nPII Store)]
Vault[(HashiCorp\nVault)]
end
subgraph Cloud["Cloud Services"]
Bedrock["AWS Bedrock\nClaude 3 Haiku"]
Titan["Titan Embeddings\nV2 1024-dim"]
Databricks["Databricks\nUnity Catalog"]
end
end
Frontend --> API
API --> Core
Core --> Storage
Core --> Cloud
style Frontend fill:#e1f5fe
style API fill:#fff3e0
style Core fill:#f3e5f5
style Storage fill:#e8f5e9
style Cloud fill:#fce4ec
mermaid
flowchart LR
subgraph Input["Input Sources"]
PDF[PDF]
Word[Word]
HL7[HL7/FHIR]
CSV[CSV/Excel]
Images[Images]
end
subgraph Parser["Universal Parser"]
FileParser[File Parser]
OCR[Tesseract OCR]
end
subgraph Processing["Processing"]
Masking[PII Masking]
Chunking[Text Chunking]
Embedding[Titan Embeddings]
end
subgraph Storage["Storage"]
FAISS[(FAISS Index)]
PII[(PII Database)]
Vault[(Vault Secrets)]
end
subgraph AI["AI Layer"]
LocalLLM[FLAN-T5 Base]
Bedrock[Claude 3 Haiku]
CrewAI[CrewAI Agents]
end
subgraph Output["Output"]
JSON[Structured JSON]
Databricks[(Databricks)]
UI[React UI]
end
Input --> Parser
Parser --> Processing
Processing --> Storage
Storage --> AI
AI --> Output
style Input fill:#ffebee
style Parser fill:#fff8e1
style Processing fill:#e8f5e9
style Storage fill:#e3f2fd
style AI fill:#f3e5f5
style Output fill:#e0f2f1
mermaid
flowchart TD
Start([Clinical Query]) --> Retrieval
subgraph Agents["CrewAI Agent Workflow"]
Retrieval["Retrieval Agent\nSearches FAISS index\nFinds relevant documents"]
Extraction["Extraction Agent\nExtracts structured data\nDiagnoses, medications, labs"]
Validation["Validation Agent\nChecks completeness\nValidates consistency"]
Explanation["Explanation Agent\nGenerates summaries\nHuman-readable output"]
Routing["Routing Agent\nRoutes to workflows\nPrioritizes cases"]
end
Retrieval --> Extraction
Extraction --> Validation
Validation --> Explanation
Explanation --> Routing
Routing --> End([Structured Output])
style Retrieval fill:#bbdefb
style Extraction fill:#c8e6c9
style Validation fill:#fff9c4
style Explanation fill:#f8bbd9
style Routing fill:#d1c4e9
mermaid
flowchart TB
subgraph Input["Raw Data Input"]
RawData[("Clinical Documents\nwith PHI/PII")]
end
subgraph Masking["PII Detection & Masking"]
Detect["Detect PHI Entities"]
Mask["Apply Masking Rules"]
Store["Store Mapping in Vault"]
end
subgraph MaskedEntities["Masked Entities"]
Name["[PATIENT_NAME]"]
SSN["[SSN]"]
DOB["[DOB]"]
Phone["[PHONE]"]
Email["[EMAIL]"]
MRN["[MRN]"]
end
subgraph Secure["Secure Storage"]
Vault[("HashiCorp Vault\nSecrets & Mappings")]
PIIDb[("SQLite\nPII Database")]
end
subgraph Output["Safe Output"]
MaskedDocs["Masked Documents"]
Databricks["Databricks\nMasked Tables"]
end
RawData --> Detect
Detect --> Mask
Mask --> MaskedEntities
Mask --> Store
Store --> Secure
MaskedEntities --> Output
style Input fill:#ffcdd2
style Masking fill:#fff9c4
style MaskedEntities fill:#e1f5fe
style Secure fill:#c8e6c9
style Output fill:#d1c4e9
mermaid
flowchart LR
subgraph Source["Source Data"]
Extractions[Clinical Extractions]
Metadata[Document Metadata]
end
subgraph Sync["Databricks Sync"]
SyncService["databricks_sync.py"]
end
subgraph Unity["Unity Catalog"]
subgraph Catalog["medical_ai"]
subgraph Schema["clinical_data"]
Unmasked[("clinical_extractions_unmasked\nContains PHI\nRestricted Access")]
Masked[("clinical_extractions_masked\nPHI Removed\nAnalytics Safe")]
end
end
end
subgraph Access["Access Control"]
Clinicians["Clinicians\nFull Access"]
Analysts["Analysts\nMasked Only"]
AI["AI Systems\nMasked Only"]
end
Source --> Sync
Sync --> Unity
Unmasked --> Clinicians
Masked --> Analysts
Masked --> AI
style Unmasked fill:#ffcdd2
style Masked fill:#c8e6c9
mermaid
flowchart TD
subgraph Upload["File Upload"]
File["Any File Format"]
end
subgraph Parse["Universal Parser"]
Detect["Detect Format"]
Extract["Extract Text"]
OCR["OCR if Image"]
end
subgraph LLM["FLAN-T5 Extraction"]
Prompt["Build Extraction Prompt"]
Inference["CPU Inference\n~2-5 seconds"]
Parse2["Parse JSON Output"]
end
subgraph Fields["Extracted Fields"]
Demographics["Demographics\nName, DOB, MRN"]
Diagnoses["Diagnoses\nICD codes, descriptions"]
Medications["Medications\nDrugs, dosages"]
Labs["Lab Results\nValues, dates"]
Vitals["Vital Signs\nBP, HR, Temp"]
Plan["Assessment & Plan"]
end
subgraph Output["Output"]
JSON["Structured JSON"]
Masked["Masked Version"]
DB["Save to Database"]
end
Upload --> Parse
Parse --> LLM
LLM --> Fields
Fields --> Output
style Upload fill:#e3f2fd
style Parse fill:#fff8e1
style LLM fill:#f3e5f5
style Fields fill:#e8f5e9
style Output fill:#fce4ec
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
AI Clinical Documentation Assistant - Extract structured clinical data from any document format using local LLM, CrewAI multi-agent workflow, AWS Bedrock, FAISS vector search, PII masking & Databricks sync
Healthcare-grade AI system for extracting structured clinical data from any document format using multi-agent workflows, local LLM extraction, and enterprise data governance.
flowchart TB
subgraph App["React Application"]
Router["React Router"]
end
subgraph Pages["Pages"]
Dashboard["Dashboard\nStats & Metrics\nRecent Activity"]
Query["Query\nRAG Search\nBedrock LLM"]
CaseBrief["Case Brief\nCrewAI Analysis\nStructured Output"]
Extract["Smart Extract\nLocal LLM\nFile Upload"]
Docs["Documents\nDocument List\nUpload/Manage"]
Settings["Settings\nConfiguration\nAPI Keys"]
end
subgraph Components["Components"]
Layout["Layout"]
Sidebar["Sidebar"]
Cards["Cards"]
Forms["Forms"]
end
subgraph State["State Management"]
TanStack["TanStack Query"]
LocalState["React State"]
end
subgraph API["API Layer"]
ApiClient["api.js"]
end
Router --> Pages
Pages --> Components
Pages --> State
State --> API
style App fill:#e3f2fd
style Pages fill:#fff8e1
style Components fill:#e8f5e9
style State fill:#f3e5f5
style API fill:#fce4ec
flowchart TB
subgraph Docker["Docker Compose"]
subgraph Backend["Backend Container"]
FastAPI["FastAPI :8000"]
Python["Python 3.11"]
end
subgraph Frontend["Frontend Container"]
Vite["Vite Dev :5173"]
React["React 18"]
end
subgraph Services["Service Containers"]
VaultC["HashiCorp Vault :8200"]
end
end
subgraph Volumes["Volumes"]
Data["./data"]
FAISS["./data/faiss_index"]
Models["./models"]
end
subgraph External["External Services"]
AWS["AWS Bedrock"]
DB["Databricks"]
end
Docker --> Volumes
Backend --> External
style Docker fill:#e3f2fd
style Volumes fill:#fff8e1
style External fill:#fce4ec
# Clone the repository
git clone git@github.com:kaziNymul/Medical_Assistant.git
cd Medical_Assistant
# Create virtual environment
python -m venv venv
source venv/bin/activate # Linux/Mac
# or: venv\Scripts\activate # Windows
# Install Python dependencies
pip install -r requirements.txt
# Install frontend dependencies
cd frontend && npm install && cd ..
# Setup environment
cp .env.example .env
# Edit .env with your credentials
Create a .env file with:
# AWS Bedrock
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
BEDROCK_MODEL_ID=anthropic.claude-3-haiku-20240307-v1:0
BEDROCK_EMBEDDING_MODEL=amazon.titan-embed-text-v2:0
# HashiCorp Vault
VAULT_ADDR=http://127.0.0.1:8200
VAULT_TOKEN=your_vault_token
# Databricks (optional)
DATABRICKS_HOST=your_workspace.cloud.databricks.com
DATABRICKS_TOKEN=your_token
DATABRICKS_CATALOG=medical_ai
DATABRICKS_SCHEMA=clinical_data
# Local LLM
EXTRACTION_MODEL=google/flan-t5-base
EXTRACTION_DEVICE=cpu
# Terminal 1: Start the backend
source venv/bin/activate
uvicorn src.api.app:app --reload --host 0.0.0.0 --port 8000
# Terminal 2: Start the frontend
cd frontend
npm run dev
Access the application at http://localhost:5173
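Before driving the UI, a quick smoke check of the backend helps. FastAPI applications serve `/openapi.json` by default, so it works as a lightweight probe; the port comes from the uvicorn command above. A minimal sketch:

```python
import urllib.request
import urllib.error

def backend_is_up(base="http://localhost:8000") -> bool:
    """Probe the FastAPI backend via its auto-generated OpenAPI document."""
    try:
        with urllib.request.urlopen(f"{base}/openapi.json", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("backend up:", backend_is_up())
```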
medical_assistant/
├── src/
│   ├── api/
│   │   ├── app.py              # FastAPI application
│   │   └── routes.py           # API endpoints
│   ├── crew/
│   │   ├── agents.py           # CrewAI agent definitions
│   │   ├── tasks.py            # Agent task definitions
│   │   ├── tools.py            # Custom agent tools
│   │   └── crew.py             # Crew orchestration
│   ├── extraction/
│   │   ├── file_parser.py      # Universal file parser
│   │   └── llm_extractor.py    # Local FLAN-T5 extractor
│   ├── rag/
│   │   ├── pipeline.py         # RAG orchestration
│   │   ├── embeddings.py       # Bedrock embeddings
│   │   └── vector_store.py     # FAISS operations
│   ├── utils/
│   │   ├── masking.py          # PHI/PII masking
│   │   ├── vault.py            # HashiCorp Vault client
│   │   └── databricks_sync.py  # Databricks sync
│   └── databricks/
│       ├── client.py           # Databricks client
│       └── tables.py           # Table management
├── frontend/
│   ├── src/
│   │   ├── components/         # React components
│   │   ├── pages/              # Page components
│   │   └── utils/api.js        # API client
│   └── package.json
├── config/
│   ├── settings.py             # Application settings
│   └── extraction.env          # Extraction config
├── scripts/
│   ├── setup_local_llm.sh      # LLM setup script
│   ├── setup_vault.sh          # Vault setup
│   └── deploy.sh               # Deployment script
├── data/
│   ├── raw/                    # Raw input files
│   ├── processed/              # Processed documents
│   └── pii/                    # PII database
├── docker-compose.yml
├── Dockerfile
├── requirements.txt
└── README.md
| Agent | Role | Description |
|-------|------|-------------|
| Retrieval Agent | Document Finder | Searches FAISS index for relevant clinical documents |
| Extraction Agent | Data Extractor | Extracts structured fields (diagnoses, medications, labs) |
| Validation Agent | Quality Checker | Validates completeness and consistency of extracted data |
| Explanation Agent | Communicator | Generates human-readable clinical summaries |
| Routing Agent | Coordinator | Routes cases to appropriate workflows |
| Format | Extension | Parser |
|--------|-----------|--------|
| PDF | .pdf | PyMuPDF + OCR fallback |
| Word | .docx, .doc | python-docx |
| Excel | .xlsx, .xls | openpyxl |
| CSV | .csv | pandas |
| JSON | .json | Native Python |
| XML | .xml | ElementTree |
| HL7 v2 | .hl7 | Custom parser |
| FHIR | .json | FHIR R4 parser |
| CDA | .xml | CDA R2 parser |
| Images | .png, .jpg | Tesseract OCR |
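The format table above amounts to an extension-to-parser dispatch. A minimal sketch of that routing, assuming dispatch is by file extension (the names are the table's labels only; wiring in the actual libraries is up to the caller):

```python
from pathlib import Path

# Lowercase extension -> parser label from the supported-formats table.
PARSER_BY_EXT = {
    ".pdf": "PyMuPDF + OCR fallback",
    ".docx": "python-docx", ".doc": "python-docx",
    ".xlsx": "openpyxl", ".xls": "openpyxl",
    ".csv": "pandas",
    ".json": "Native Python",   # FHIR R4 bundles also arrive as .json
    ".xml": "ElementTree",      # CDA R2 documents also arrive as .xml
    ".hl7": "Custom parser",
    ".png": "Tesseract OCR", ".jpg": "Tesseract OCR",
}

def pick_parser(filename: str) -> str:
    """Return the parser label for a file, or raise for unsupported types."""
    ext = Path(filename).suffix.lower()
    try:
        return PARSER_BY_EXT[ext]
    except KeyError:
        raise ValueError(f"unsupported format: {ext or filename!r}")

print(pick_parser("discharge_summary.PDF"))  # PyMuPDF + OCR fallback
```

Note that `.json` and `.xml` are ambiguous (FHIR vs plain JSON, CDA vs plain XML), so a real router would also sniff content, as the "Detect Format" step in the diagrams suggests.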
POST /api/query # RAG query with Bedrock
POST /api/query/semantic # Semantic search only
POST /api/crew/analyze # Full crew analysis
POST /api/crew/case-brief # Generate case brief
GET /api/crew/status/{id} # Check task status
POST /api/extract/upload # Upload and parse file
POST /api/extract/process # Extract with local LLM
GET /api/extract/models # List available models
POST /api/extract/databricks # Sync to Databricks
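The endpoints above can be called with any HTTP client. A minimal sketch for `POST /api/query`, with one caveat: the repository does not publish a request schema, so the payload fields (`query`, `top_k`) and the default port are assumptions.

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed default from the run instructions

def build_query_request(question: str, top_k: int = 5) -> urllib.request.Request:
    """Prepare a POST to /api/query with a JSON body (schema assumed)."""
    body = json.dumps({"query": question, "top_k": top_k}).encode()
    return urllib.request.Request(
        f"{BASE}/api/query",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("List active medications for the sample case")
print(req.full_url, req.get_method())
# Send with: urllib.request.urlopen(req) once the backend is running.
```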
# Build and run with Docker Compose
docker-compose up --build
# Or build individually
docker build -t medical-assistant .
docker run -p 8000:8000 -p 5173:5173 medical-assistant
| Metric | Value |
|--------|-------|
| FAISS Index Size | 238.7 MB |
| Documents Indexed | 55,500 |
| Embedding Dimensions | 1,024 |
| Average Query Time | ~200ms |
| LLM Extraction Time | ~2-5s (CPU) |
This system is designed for clinical documentation assistance only. It does NOT:
All AI outputs must be reviewed by qualified healthcare professionals.
Kazi Nymul - GitHub
This project is licensed under the MIT License - see the LICENSE file for details.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/trust"
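The invocation guide elsewhere on this page declares a retry policy: up to 3 attempts, backoff of 500/1500/3500 ms, retrying only on HTTP 429, HTTP 503, and network timeouts. A minimal sketch of that policy with the transport injected, so it can wrap any of the endpoints above:

```python
import time

# Values taken from the invocation guide's retryPolicy block.
MAX_ATTEMPTS = 3
BACKOFF_MS = [500, 1500, 3500]
RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}

def call_with_retry(send, max_attempts=MAX_ATTEMPTS, sleep=time.sleep):
    """Call `send()` until it returns "OK". `send` is assumed to return
    "OK" on success or a condition code such as "HTTP_429" on failure."""
    for attempt in range(max_attempts):
        result = send()
        if result == "OK":
            return result
        if result not in RETRYABLE or attempt == max_attempts - 1:
            raise RuntimeError(f"gave up on attempt {attempt + 1}: {result}")
        sleep(BACKOFF_MS[attempt] / 1000.0)

# Example: a flaky endpoint that succeeds on the third attempt.
responses = iter(["HTTP_429", "HTTP_503", "OK"])
print(call_with_retry(lambda: next(responses), sleep=lambda s: None))  # OK
```

Injecting `sleep` keeps the backoff schedule testable without real waiting; a production wrapper would map HTTP status codes and socket timeouts onto the condition strings.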
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T02:42:02.127Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"label": "Vendor",
"value": "Kazinymul",
"category": "vendor",
"href": "https://github.com/kaziNymul/Medical_Assistant",
"sourceUrl": "https://github.com/kaziNymul/Medical_Assistant",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-16T06:46:44.819Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "protocols",
"label": "Protocol compatibility",
"value": "OpenClaw",
"category": "compatibility",
"href": "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-16T06:46:44.819Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "traction",
"label": "Adoption signal",
"value": "1 GitHub stars",
"category": "adoption",
"href": "https://github.com/kaziNymul/Medical_Assistant",
"sourceUrl": "https://github.com/kaziNymul/Medical_Assistant",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-16T06:46:44.819Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "docs_crawl",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"category": "integration",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "handshake_status",
"label": "Handshake status",
"value": "UNKNOWN",
"category": "security",
"href": "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-kazinymul-medical-assistant/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true,
"metadata": {}
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub Β· GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true,
"metadata": {}
}
]