Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Comparative study of two Agentic AI architectures for automated data science: hidden-tool agents vs transparent code-generating agents. Built with CrewAI, OpenAI GPT-4o, tested on Titanic & House Prices datasets. What This Project Is About During this practical work, I explored how AI agents can automate data analysis tasks. I built and tested two different approaches to see which one works better for real-world data science problems. Think of it as having virtual data science assistants that can handle everything from data exploration to model training and report writing. I used the famous Titanic dataset (predicting passeng
Capability contract not published. No trust telemetry is available yet. Last updated 4/16/2026.
Freshness
Last checked 4/16/2026
Best For
Agentic-AI-Data-Science-Assistant- is best for CrewAI multi-agent workflows where OpenClaw compatibility matters.
Not Ideal For
Contract metadata is missing or unavailable, so deterministic execution cannot be guaranteed.
Evidence Sources Checked
editorial-content, GITHUB OPENCLAW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Apr 16, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 4/16/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 16, 2026
Vendor
Bechir23
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 4/16/2026.
Setup snapshot
git clone https://github.com/bechir23/Agentic-AI-Data-Science-Assistant-.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
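The final-validation step above can be sketched in plain Python. This is a minimal, hypothetical guard (not part of the repo): instead of tracing real traffic, it intercepts `socket.connect` so any egress attempt during a mock run is blocked and recorded.

```python
import socket

class BlockEgress:
    """Block and record any outbound TCP connect while active.

    A minimal sketch of the "trace network egress in a sandbox" step:
    instead of tracing real traffic, it intercepts connect() calls.
    """
    def __init__(self):
        self.attempts = []

    def __enter__(self):
        self._orig = socket.socket.connect
        attempts = self.attempts
        def guarded(sock, address):
            attempts.append(address)
            raise PermissionError(f"egress blocked: {address}")
        socket.socket.connect = guarded
        return self

    def __exit__(self, *exc):
        socket.socket.connect = self._orig
        return False

# Stand-in for "expose the agent to a mock request payload": any code
# that tries to reach the network inside the guard is blocked and logged.
with BlockEgress() as guard:
    s = socket.socket()
    try:
        s.connect(("203.0.113.1", 443))  # TEST-NET address, never reached
    except PermissionError:
        pass
    finally:
        s.close()

print(guard.attempts)  # every address the mock run tried to reach
```

Reviewing `guard.attempts` before granting access to real customer data is the point of the exercise.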
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Bechir23
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
0
Snippets
0
Languages
python
Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLAW
Editorial quality
ready
Comparative study of two Agentic AI architectures for automated data science: hidden-tool agents vs transparent code-generating agents. Built with CrewAI, OpenAI GPT-4o, tested on Titanic & House Prices datasets.
What This Project Is About
During this practical work, I explored how AI agents can automate data analysis tasks. I built and tested two different approaches to see which one works better for real-world data science problems. Think of it as having virtual data science assistants that can handle everything from data exploration to model training and report writing.
I used the famous Titanic dataset (predicting passenger survival) and a house pricing dataset to put both systems through their paces. The goal was simple: let the AI agents do the heavy lifting while I evaluate how well they perform and where they struggle.
System 1 works like a traditional pipeline with four specialized agents working in sequence. Each agent has specific tools at its disposal, but all the Python code runs in the background where you can't see it.
How it works:
The good parts:
The not-so-good parts:
Files to run:
python main_classification.py # For Titanic survival prediction
python main_regression.py # For house price prediction
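The sequential four-agent design can be sketched in plain Python (a stand-in for the actual CrewAI setup; agent names and payload keys here are illustrative, not the repo's API):

```python
# Plain-Python stand-in for System 1's sequential crew: four "agents",
# each a function whose output becomes the next one's input.
def explorer(state):
    state["profile"] = {"rows": 891, "target": "Survived"}
    return state

def preprocessor(state):
    state["features"] = ["Pclass", "Sex", "Age", "Fare"]
    return state

def trainer(state):
    state["model"] = f"logistic-regression on {len(state['features'])} features"
    return state

def reporter(state):
    state["report"] = f"Trained {state['model']} ({state['profile']['rows']} rows)"
    return state

pipeline = [explorer, preprocessor, trainer, reporter]

state = {}
for agent in pipeline:   # strict sequence: each agent sees prior results
    state = agent(state)

print(state["report"])
```

The key property being modeled: downstream agents only see upstream *outputs*, never the code that produced them.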
System 2 takes a completely different approach. Instead of hiding everything, it generates Python code that you can actually read, modify, and reuse. It's like having a coding buddy who writes the analysis for you.
How it works:
The good parts:
The not-so-good parts:
Files to run:
python main_code_interpreter.py classification # For Titanic
python main_code_interpreter.py regression # For house prices
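The transparent approach can be sketched as: the LLM returns Python source, which is saved where you can read it, then executed. The generator below is a hard-coded stand-in for the real model call; all names are illustrative.

```python
# Sketch of the transparent code-generating approach: the "LLM" returns
# Python source (hard-coded stand-in), which is written to disk where a
# human can review it, then executed in an isolated namespace.
import pathlib
import tempfile

def fake_llm_generate(task: str) -> str:
    return (
        "def accuracy(preds, labels):\n"
        "    hits = sum(p == l for p, l in zip(preds, labels))\n"
        "    return hits / len(labels)\n"
        "result = accuracy([1, 0, 1, 1], [1, 0, 0, 1])\n"
    )

code = fake_llm_generate("compute accuracy")
path = pathlib.Path(tempfile.mkdtemp()) / "generated_analysis.py"
path.write_text(code)            # reviewable, modifiable, reusable

namespace = {}
exec(path.read_text(), namespace)
print(namespace["result"])       # 0.75
```

Because the code lives in a file, the debugging advantage described below falls out for free.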
Both systems initially made the same rookie mistake: they included the PassengerId column (just a number from 1 to 891) in the training features. This created fake correlations and inflated the accuracy scores. System 2 made it way easier to spot this bug because I could literally read the code line by line. With System 1, I had to dig through tool outputs to figure out what was happening.
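The PassengerId pitfall amounts to training on a row index. A minimal guard (illustrative column names, not the repo's actual code) drops identifier-like columns before feature selection:

```python
# Sketch of the PassengerId pitfall: a pure row index carries no signal,
# so identifier-like columns must be dropped before training.
ID_LIKE = {"PassengerId", "Id", "index"}

rows = [
    {"PassengerId": 1, "Pclass": 3, "Fare": 7.25, "Survived": 0},
    {"PassengerId": 2, "Pclass": 1, "Fare": 71.28, "Survived": 1},
]

def feature_columns(rows, target):
    cols = set(rows[0]) - {target}
    leaky = cols & ID_LIKE
    if leaky:
        print(f"dropping identifier columns: {sorted(leaky)}")
    return sorted(cols - leaky)

print(feature_columns(rows, target="Survived"))  # ['Fare', 'Pclass']
```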
The coolest thing I observed was System 2's ability to debug itself. During one test, it hit four errors in a row and recovered from each.
Each iteration consumed API tokens, but watching an AI agent reason through its mistakes and fix them was genuinely impressive.
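That self-correction loop can be sketched as: execute the generated code, and on failure feed the traceback back to the generator for another attempt. The "generator" here is a canned sequence standing in for the LLM call.

```python
# Sketch of System 2's self-correction loop: run generated code, and on
# failure pass the traceback back to the generator for a revised draft.
import traceback

drafts = iter([
    "result = 1 / 0",                    # first draft: crashes
    "result = sum([1, 2, 3]) / 3",       # revision after seeing the error
])

def generate(previous_error):
    # Stand-in for an LLM call; previous_error would go into the prompt.
    return next(drafts)

error, result = None, None
for attempt in range(3):                 # each retry costs API tokens
    code = generate(error)
    namespace = {}
    try:
        exec(code, namespace)
        result = namespace["result"]
        break
    except Exception:
        error = traceback.format_exc()   # becomes the next prompt's context

print(result)  # 2.0
```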
I'm using OpenAI GPT-4o for this project, which doesn't have the strict rate limits that free services have. However, I did initially try Groq's free tier (100k tokens/day) and hit the limit pretty quickly - a single run with the self-correction iterations consumed about 40k tokens!
For production use or if you want to avoid API costs entirely, switching to Ollama with a local model would be the way to go. The code supports all these options through a simple config change in .env.
You'll need Python 3.12 and an OpenAI API key (I'm using GPT-4o for this project).
# Clone and navigate to the project
cd TP_Agentic_AI
# Create virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1   # Windows PowerShell; on Linux/macOS: source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Configure your API key
# Edit .env and add your OPENAI_API_KEY
# The project is configured with LLM_MODE=openai by default
python main_classification.py
# Wait 3-5 minutes, generates outputs/titanic_report.tex
python main_code_interpreter.py classification
# Takes longer (5-10 min) but shows all code generation
# Generates outputs/titanic_code_report.tex
# Using WSL with pdflatex installed
wsl pdflatex -interaction=nonstopmode outputs/titanic_report.tex
TP_Agentic_AI/
├── agents.py # System 1 agents (4 agents with hidden tools)
├── agents_code_interpreter.py # System 2 agents (code generators)
├── crew_setup.py # System 1 task definitions
├── tools.py # Python execution tools for both systems
├── llama_llm.py # LLM configuration (OpenAI/Groq/Ollama)
├── main_classification.py # System 1 entry point (Titanic)
├── main_regression.py # System 1 entry point (House Prices)
├── main_code_interpreter.py # System 2 entry point (both datasets)
├── data/
│ ├── titanic.csv # Classification dataset (891 samples)
│ └── house_prices.csv # Regression dataset (20640 samples)
├── outputs/
│ ├── titanic_report.tex # System 1 classification report
│ └── titanic_code_report.tex # System 2 classification report
└── Analysis_Crew_Systems.pdf # Comparative analysis
For this project, I'm using OpenAI GPT-4o as the primary language model. The .env file is configured with:
LLM_MODE=openai
OPENAI_API_KEY=your_key_here
The system supports multiple LLM providers through llama_llm.py. You can switch by changing LLM_MODE in .env:
Option 1: OpenAI (Current Setup)
LLM_MODE=openai
OPENAI_API_KEY=sk-...
Option 2: Groq (Free Alternative)
LLM_MODE=groq
GROQ_API_KEY=gsk_...
Option 3: HuggingFace
LLM_MODE=huggingface
HUGGINGFACE_API_KEY=hf_...
Option 4: Ollama (Local)
LLM_MODE=ollama
# No API key needed, runs on your machine
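A hedged sketch of how a dispatcher like `llama_llm.py` might read these variables (the repo's actual implementation may differ; only the env-var names from the options above are used):

```python
# Hypothetical LLM_MODE dispatch, sketching the .env-driven switch
# described above. Returns a (provider, api_key) pair.
import os

def select_llm(env=os.environ):
    mode = env.get("LLM_MODE", "openai").lower()
    if mode == "openai":
        return ("openai", env["OPENAI_API_KEY"])
    if mode == "groq":
        return ("groq", env["GROQ_API_KEY"])
    if mode == "huggingface":
        return ("huggingface", env["HUGGINGFACE_API_KEY"])
    if mode == "ollama":
        return ("ollama", None)          # local model, no key needed
    raise ValueError(f"unknown LLM_MODE: {mode}")

print(select_llm({"LLM_MODE": "ollama"}))  # ('ollama', None)
```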
Note: The critical analysis (Analysis_Crew_Systems.pdf) contains a detailed comparison of both systems based on actual test results.
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/snapshot"
curl -s "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/contract"
curl -s "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/trust"
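The invocation guide's retry policy (up to 3 attempts, 500/1500/3500 ms backoff on HTTP 429, HTTP 503, or a network timeout) can be sketched as a small wrapper; `fetch` stands in for any of the calls above:

```python
# Sketch implementing the published retry policy: 3 attempts,
# 500/1500/3500 ms backoff, retrying only on transient conditions.
import time

RETRYABLE = {429, 503, "timeout"}
BACKOFF_MS = [500, 1500, 3500]

class TransientError(Exception):
    def __init__(self, condition):
        self.condition = condition

def call_with_retry(fetch, max_attempts=3, sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return fetch()
        except TransientError as e:
            if e.condition not in RETRYABLE or attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000)

# Usage with a stub that fails twice with HTTP 429, then succeeds.
calls = {"n": 0}
def stub():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError(429)
    return {"ok": True}

waits = []
print(call_with_retry(stub, sleep=waits.append))  # {'ok': True}
print(waits)  # [0.5, 1.5]
```

Injecting `sleep` keeps the backoff testable without real delays.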
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T05:22:12.604Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}
Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}
Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "crewai",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "multi-agent",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:crewai|supported|profile capability:multi-agent|supported|profile"
}
Facts JSON
[
{
"factKey": "vendor",
"label": "Vendor",
"value": "Bechir23",
"category": "vendor",
"href": "https://github.com/bechir23/Agentic-AI-Data-Science-Assistant-",
"sourceUrl": "https://github.com/bechir23/Agentic-AI-Data-Science-Assistant-",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-16T06:46:50.668Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "protocols",
"label": "Protocol compatibility",
"value": "OpenClaw",
"category": "compatibility",
"href": "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-16T06:46:50.668Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "docs_crawl",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"category": "integration",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true,
"metadata": {}
},
{
"factKey": "handshake_status",
"label": "Handshake status",
"value": "UNKNOWN",
"category": "security",
"href": "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/crewai-bechir23-agentic-ai-data-science-assistant/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true,
"metadata": {}
}
]
Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true,
"metadata": {}
}
]