Rank
70
AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents
Traction
No public download signal
Freshness
Updated 2d ago
Crawler Summary
Structure agent code for Azure's `azd ai` command. Use when users mention "azd ai", "azd init agent", "Foundry agent", "scaffold agent", "convert to azd", "update for azd", "upgrade to azd ai", "fix azd ai", "migrate to Foundry", or want to deploy, convert, update, fix, or upgrade an AI agent for Azure. (name: azd-ai-init; model: claude-opus-4-5.) Azure AI Agent Scaffolding Skill: helps developers prepare AI agent code for deployment. Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Freshness
Last checked 4/15/2026
Best For
azd-ai-init is best for workflows where OpenClaw compatibility matters, based on its declared profile capabilities.
Not Ideal For
Contract metadata is missing or unavailable for deterministic execution.
Evidence Sources Checked
editorial-content, GITHUB OPENCLEW, runtime-metrics, public facts pack
Public facts
4
Change events
1
Artifacts
0
Freshness
Apr 15, 2026
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Trust score
Unknown
Compatibility
OpenClaw
Freshness
Apr 15, 2026
Vendor
Spboyer
Artifacts
0
Benchmarks
0
Last release
Unpublished
Key links, install path, and a quick operational read before the deeper crawl record.
Summary
Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.
Setup snapshot
git clone https://github.com/spboyer/skill-azd-ai-init.git
Setup complexity is LOW. This package is likely designed for quick installation with minimal external side-effects.
Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.
Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.
Vendor
Spboyer
Protocol compatibility
OpenClaw
Handshake status
UNKNOWN
Crawlable docs
6 indexed pages on the official domain
Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.
Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.
Extracted files
0
Examples
6
Snippets
0
Languages
typescript
Parameters
text
project-root/
├── azure.yaml              # Project configuration (REQUIRED)
├── infra/                  # Bicep infrastructure files (REQUIRED)
│   ├── main.bicep
│   ├── main.parameters.json
│   └── core/               # Reusable Bicep modules
│       └── ai/
│           └── ai-project.bicep
└── src/
    └── <AgentName>/        # Agent source folder (REQUIRED)
        ├── agent.yaml      # Agent definition (REQUIRED)
        ├── Dockerfile      # Container build file (REQUIRED)
        ├── main.py         # Agent entry point
        └── requirements.txt
# yaml-language-server: $schema=https://raw.githubusercontent.com/Azure/azure-dev/main/schemas/v1.0/azure.yaml.json
requiredVersions:
  extensions:
    azure.ai.agents: '>=0.1.0-preview'
name: <project-name>
services:
  <AgentName>:
    project: src/<AgentName>
    host: azure.ai.agent
    language: docker
    docker:
      remoteBuild: true
    config:
      container:
        resources:
          cpu: "1"
          memory: 2Gi
        scale:
          maxReplicas: 3
          minReplicas: 1
      deployments:
        - model:
            format: OpenAI
            name: gpt-4o-mini
            version: "2024-07-18"
          name: gpt-4o-mini
          sku:
            capacity: 10
            name: GlobalStandard
infra:
  provider: bicep
  path: ./infra
# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml
kind: hosted
name: <AgentName>
description: "<Brief description of what the agent does>"
metadata:
  authors:
    - <author-name>
  example:
    - content: "<Example user prompt - always quote strings with special characters>"
      role: user
  tags:
    - <tag1>
    - <tag2>
protocols:
  - protocol: responses
    version: v1
environment_variables:
  - name: FOUNDRY_PROJECT_ENDPOINT
    value: ${AZURE_AI_PROJECT_ENDPOINT}
  - name: FOUNDRY_MODEL_DEPLOYMENT_NAME
    value: gpt-4o-mini
  - name: APPLICATIONINSIGHTS_CONNECTION_STRING
    value: ${APPLICATIONINSIGHTS_CONNECTION_STRING}
FROM python:3.11-slim
WORKDIR /app
COPY ./ user_agent/
WORKDIR /app/user_agent
RUN if [ -f requirements.txt ]; then \
pip install -r requirements.txt; \
else \
echo "No requirements.txt found"; \
fi
EXPOSE 8088
ENV PORT=8088
CMD ["python", "main.py"]

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 8088
ENV PORT=8088
CMD ["node", "dist/main.js"]
import asyncio
import os
import logging
from typing import Annotated
from azure.identity.aio import DefaultAzureCredential
from agent_framework.azure import AzureAIAgentClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.monitor.opentelemetry import configure_azure_monitor
from dotenv import load_dotenv

load_dotenv(override=True)
logger = logging.getLogger(__name__)

if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
    configure_azure_monitor(enable_live_metrics=True, logger_name="__main__")

ENDPOINT = os.getenv("FOUNDRY_PROJECT_ENDPOINT", "")
MODEL_DEPLOYMENT_NAME = os.getenv("FOUNDRY_MODEL_DEPLOYMENT_NAME", "")

# Define your tools as functions with type annotations
# IMPORTANT: Use simple strings in Annotated[], NOT Pydantic Field objects
def my_tool(
    param1: Annotated[str, "Description of param1"],
    param2: Annotated[int, "Description of param2"]
) -> str:
    """Tool description that the model will see."""
    # Tool implementation
    return "result"

tools = [my_tool]

async def run_server():
    """Run the agent as an HTTP server."""
    credential = DefaultAzureCredential()
    try:
        client = AzureAIAgentClient(
            project_endpoint=ENDPOINT,
            model_deployment_name=MODEL_DEPLOYMENT_NAME,
            credential=credential,
        )
        agent = client.create_agent(
            name="<AgentName>",
            model=MODEL_DEPLOYMENT_NAME,
            instructions="<Your agent system instructions>",
            tools=tools,
        )
        logger.info("Starting Agent HTTP Server...")
        await from_agent_framework(agent).run_async()
    finally:
        await credential.close()

def main():
    asyncio.run(run_server())

if __name__ == "__main__":
    main()

Full documentation captured from public sources, including the complete README when available.
Docs source
GITHUB OPENCLEW
Editorial quality
ready
---
name: azd-ai-init
description: Structure agent code for Azure's azd ai command. Use when users mention "azd ai", "azd init agent", "Foundry agent", "scaffold agent", "convert to azd", "update for azd", "upgrade to azd ai", "fix azd ai", "migrate to Foundry", or want to deploy, convert, update, fix, or upgrade an AI agent for Azure.
model: claude-opus-4-5
---

Azure AI Agent Scaffolding Skill

This skill helps developers prepare their AI agent code for deployment to Azure AI Foundry using the azd ai extension of the Azure Developer CLI.
Use this skill when a user wants to:
- Convert existing agent code to the azd ai expected format
- Deploy an agent with azd up
- Scaffold the structure azd ai requires

First, understand what the user has:
The azd ai extension expects a specific project structure:
project-root/
├── azure.yaml              # Project configuration (REQUIRED)
├── infra/                  # Bicep infrastructure files (REQUIRED)
│   ├── main.bicep
│   ├── main.parameters.json
│   └── core/               # Reusable Bicep modules
│       └── ai/
│           └── ai-project.bicep
└── src/
    └── <AgentName>/        # Agent source folder (REQUIRED)
        ├── agent.yaml      # Agent definition (REQUIRED)
        ├── Dockerfile      # Container build file (REQUIRED)
        ├── main.py         # Agent entry point
        └── requirements.txt
This is the main project configuration file that defines services and infrastructure:
# yaml-language-server: $schema=https://raw.githubusercontent.com/Azure/azure-dev/main/schemas/v1.0/azure.yaml.json
requiredVersions:
  extensions:
    azure.ai.agents: '>=0.1.0-preview'
name: <project-name>
services:
  <AgentName>:
    project: src/<AgentName>
    host: azure.ai.agent
    language: docker
    docker:
      remoteBuild: true
    config:
      container:
        resources:
          cpu: "1"
          memory: 2Gi
        scale:
          maxReplicas: 3
          minReplicas: 1
      deployments:
        - model:
            format: OpenAI
            name: gpt-4o-mini
            version: "2024-07-18"
          name: gpt-4o-mini
          sku:
            capacity: 10
            name: GlobalStandard
infra:
  provider: bicep
  path: ./infra
Defines the agent's metadata, protocols, and environment variables:
# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml
kind: hosted
name: <AgentName>
description: "<Brief description of what the agent does>"
metadata:
  authors:
    - <author-name>
  example:
    - content: "<Example user prompt - always quote strings with special characters>"
      role: user
  tags:
    - <tag1>
    - <tag2>
protocols:
  - protocol: responses
    version: v1
environment_variables:
  - name: FOUNDRY_PROJECT_ENDPOINT
    value: ${AZURE_AI_PROJECT_ENDPOINT}
  - name: FOUNDRY_MODEL_DEPLOYMENT_NAME
    value: gpt-4o-mini
  - name: APPLICATIONINSIGHTS_CONNECTION_STRING
    value: ${APPLICATIONINSIGHTS_CONNECTION_STRING}
Note: Set FOUNDRY_MODEL_DEPLOYMENT_NAME to match the deployment name in your azure.yaml (e.g., gpt-4o-mini).
⚠️ Environment Variable Naming: The hosted agent platform injects variables with FOUNDRY_ prefix. Your Python code must read FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL_DEPLOYMENT_NAME (not AZURE_* prefixes). The agent.yaml maps Azure outputs to the expected names.
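The naming rule above can be made concrete with a small fail-fast reader. The variable names come from the documentation; the helper itself is an illustrative sketch:

```python
import os

def read_foundry_settings() -> tuple[str, str]:
    """Read the FOUNDRY_-prefixed variables injected by the hosted agent platform."""
    endpoint = os.getenv("FOUNDRY_PROJECT_ENDPOINT", "")
    deployment = os.getenv("FOUNDRY_MODEL_DEPLOYMENT_NAME", "")
    # Fail fast: an empty endpoint otherwise surfaces as a confusing runtime error
    missing = [name for name, value in [
        ("FOUNDRY_PROJECT_ENDPOINT", endpoint),
        ("FOUNDRY_MODEL_DEPLOYMENT_NAME", deployment),
    ] if not value]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return endpoint, deployment
```

Calling this at startup turns a silent empty-endpoint misconfiguration into an immediate, explicit error.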
IMPORTANT YAML Formatting Rules:
- Wrap content: and description: values in double quotes
- Escape internal quotes: "He said \"hello\""

Standard Python container for hosted agents:
FROM python:3.11-slim
WORKDIR /app
COPY ./ user_agent/
WORKDIR /app/user_agent
RUN if [ -f requirements.txt ]; then \
pip install -r requirements.txt; \
else \
echo "No requirements.txt found"; \
fi
EXPOSE 8088
ENV PORT=8088
CMD ["python", "main.py"]
For TypeScript/Node.js agents:
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 8088
ENV PORT=8088
CMD ["node", "dist/main.js"]
The agent code must use the Azure AI Agent Framework pattern to run as a hosted agent:
| Scenario | Client Type | Notes |
|----------|-------------|-------|
| Local development with AI Services endpoint | AzureOpenAIChatClient | Uses ChatAgent pattern |
| Hosted agent deployment (azd up) | AzureAIAgentClient | Required - Uses create_agent + from_agent_framework |
| Foundry Project endpoint | AzureAIAgentClient | Requires FOUNDRY_* env vars |
import asyncio
import os
import logging
from typing import Annotated
from azure.identity.aio import DefaultAzureCredential
from agent_framework.azure import AzureAIAgentClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.monitor.opentelemetry import configure_azure_monitor
from dotenv import load_dotenv

load_dotenv(override=True)
logger = logging.getLogger(__name__)

if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
    configure_azure_monitor(enable_live_metrics=True, logger_name="__main__")

ENDPOINT = os.getenv("FOUNDRY_PROJECT_ENDPOINT", "")
MODEL_DEPLOYMENT_NAME = os.getenv("FOUNDRY_MODEL_DEPLOYMENT_NAME", "")

# Define your tools as functions with type annotations
# IMPORTANT: Use simple strings in Annotated[], NOT Pydantic Field objects
def my_tool(
    param1: Annotated[str, "Description of param1"],
    param2: Annotated[int, "Description of param2"]
) -> str:
    """Tool description that the model will see."""
    # Tool implementation
    return "result"

tools = [my_tool]

async def run_server():
    """Run the agent as an HTTP server."""
    credential = DefaultAzureCredential()
    try:
        client = AzureAIAgentClient(
            project_endpoint=ENDPOINT,
            model_deployment_name=MODEL_DEPLOYMENT_NAME,
            credential=credential,
        )
        agent = client.create_agent(
            name="<AgentName>",
            model=MODEL_DEPLOYMENT_NAME,
            instructions="<Your agent system instructions>",
            tools=tools,
        )
        logger.info("Starting Agent HTTP Server...")
        await from_agent_framework(agent).run_async()
    finally:
        await credential.close()

def main():
    asyncio.run(run_server())

if __name__ == "__main__":
    main()
# Core agent packages
agent-framework-azure-ai
agent-framework-core
azure-ai-agentserver-agentframework
# Web server (required by agent server)
uvicorn
fastapi
# Azure identity
azure-identity
# Environment
python-dotenv
# Monitoring
azure-monitor-opentelemetry
CRITICAL: The azd ai extension requires infrastructure that provisions Microsoft.CognitiveServices/accounts/projects resources. The Bicep modules are complex (~300+ lines) and must be obtained from the official template.
Always use the official starter template for infrastructure:
# Option 1: Initialize a new project with infra included
azd init -t Azure-Samples/azd-ai-starter-basic
# Option 2: Copy infra to an existing project
git clone --depth 1 https://github.com/Azure-Samples/azd-ai-starter-basic.git temp-starter
cp -r temp-starter/infra ./infra
rm -rf temp-starter
The official infra/ folder contains:
infra/
├── main.bicep              # Main deployment orchestrator
├── main.parameters.json    # Parameter mappings
├── abbreviations.json      # Resource naming conventions
└── core/
    └── ai/
        └── ai-project.bicep  # AI Foundry provisioning module
The core/ai/ai-project.bicep module creates:
The main.bicep must output these environment variables for azd ai to work:
// Required outputs - azd ai uses these to locate resources
output AZURE_RESOURCE_GROUP string = resourceGroupName
output AZURE_AI_ACCOUNT_ID string = aiProject.outputs.accountId
output AZURE_AI_PROJECT_ID string = aiProject.outputs.projectId
output AZURE_AI_ACCOUNT_NAME string = aiProject.outputs.aiServicesAccountName
output AZURE_AI_PROJECT_NAME string = aiProject.outputs.projectName
// Endpoints
output AZURE_AI_PROJECT_ENDPOINT string = aiProject.outputs.AZURE_AI_PROJECT_ENDPOINT
output AZURE_OPENAI_ENDPOINT string = aiProject.outputs.AZURE_OPENAI_ENDPOINT
output APPLICATIONINSIGHTS_CONNECTION_STRING string = aiProject.outputs.APPLICATIONINSIGHTS_CONNECTION_STRING
// Container Registry
output AZURE_CONTAINER_REGISTRY_ENDPOINT string = aiProject.outputs.dependentResources.registry.loginServer
Note: The AZURE_AI_PROJECT_ID must be in the format:
/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.CognitiveServices/accounts/{account}/projects/{project}
Do NOT use Microsoft.MachineLearningServices/workspaces - this is a different resource type that won't work with azd ai.
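A hedged sketch of enforcing that format rule (the regex is inferred from the documented ID shape and is not an official validator):

```python
import re

# /subscriptions/{sub}/resourceGroups/{rg}/providers/
#   Microsoft.CognitiveServices/accounts/{account}/projects/{project}
_PROJECT_ID = re.compile(
    r"^/subscriptions/[^/]+/resourceGroups/[^/]+/providers/"
    r"Microsoft\.CognitiveServices/accounts/[^/]+/projects/[^/]+$"
)

def is_valid_project_id(resource_id: str) -> bool:
    """Accept CognitiveServices project IDs; reject other resource types
    such as Microsoft.MachineLearningServices/workspaces, which azd ai won't use."""
    return bool(_PROJECT_ID.match(resource_id))
```
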
The deployments section in azure.yaml under each service's config defines the AI models:
| Property | Description | Example |
|----------|-------------|---------|
| name | Deployment name | gpt-4o-mini |
| model.format | Model provider | OpenAI |
| model.name | Model identifier | gpt-4o-mini |
| model.version | Model version | 2024-07-18 |
| sku.name | SKU tier | GlobalStandard |
| sku.capacity | Tokens per minute (thousands) | 10 |
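Assuming the deployments section has been parsed into Python dicts (e.g. with a YAML loader), a small illustrative check for the fields in the table above (field names from the table; the validation logic is a sketch, not part of azd):

```python
def deployment_errors(deployment: dict) -> list[str]:
    """Report missing or malformed fields in one deployments entry."""
    errors = []
    if "name" not in deployment:
        errors.append("deployment name is missing")
    model = deployment.get("model", {})
    for field in ("format", "name", "version"):
        if field not in model:
            errors.append(f"model.{field} is missing")
    sku = deployment.get("sku", {})
    if "name" not in sku:
        errors.append("sku.name is missing")
    # sku.capacity is tokens per minute, in thousands
    if not isinstance(sku.get("capacity"), int):
        errors.append("sku.capacity should be an integer")
    return errors
```
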
Common models for agents:
- gpt-4o (version: 2024-08-06)
- gpt-4o-mini (version: 2024-07-18)
- gpt-4-turbo (version: 2024-04-09)

IMPORTANT: Hosted agents are only supported in specific Azure regions.
The provided Bicep templates default to northcentralus. If you need to change the region, verify hosted agent support first.
After scaffolding, users deploy with:
# Install the azd ai extension (if not installed)
azd extension install azure.ai.agents
# Login to Azure
azd auth login
# Initialize environment (creates .azure folder)
azd init
# Provision infrastructure and deploy agent
azd up
Or step-by-step:
azd provision # Create Azure resources
azd deploy # Deploy the agent
If azd deploy times out waiting for the container:
Check container logs in Azure Portal:
Common issues:
- Missing dependencies in requirements.txt

Test locally first:
cd src/YourAgent
pip install -r requirements.txt
python main.py
Verify the agent starts a server:
The agent must call from_agent_framework(agent).run_async() to start the HTTP server.
- Uses the AzureAIAgentClient pattern
- Calls from_agent_framework(agent).run_async() to serve
- Passes its tool functions in the tools list given to AzureAIAgentClient

In agent.yaml:
environment_variables:
  - name: CUSTOM_VAR
    value: ${MY_ENV_VAR}
In azure.yaml under service config:
config:
  env:
    CUSTOM_VAR: "value"
For agents needing additional Azure services (search, storage, etc.), add to azure.yaml:
config:
  resources:
    - resource: search
      connectionName: my-search-connection
    - resource: storage
      connectionName: my-storage-connection
Available resource types:
- search - Azure AI Search
- storage - Azure Storage
- registry - Azure Container Registry
- bing_grounding - Bing Search
- bing_custom_grounding - Bing Custom Search

config:
  container:
    resources:
      cpu: "2"
      memory: 4Gi
    scale:
      minReplicas: 1
      maxReplicas: 10
When a user has no existing code and wants to create a new agent from scratch, generate a complete working project.
For users who say "create a new agent for azd ai" or "scaffold a new Foundry agent", generate this complete structure:
mkdir -p my-agent/src/MyAgent my-agent/infra/core/ai
# yaml-language-server: $schema=https://raw.githubusercontent.com/Azure/azure-dev/main/schemas/v1.0/azure.yaml.json
requiredVersions:
  extensions:
    azure.ai.agents: '>=0.1.0-preview'
name: my-agent
services:
  MyAgent:
    project: src/MyAgent
    host: azure.ai.agent
    language: docker
    docker:
      remoteBuild: true
    config:
      container:
        resources:
          cpu: "1"
          memory: 2Gi
        scale:
          maxReplicas: 3
          minReplicas: 1
      deployments:
        - model:
            format: OpenAI
            name: gpt-4o-mini
            version: "2024-07-18"
          name: gpt-4o-mini
          sku:
            capacity: 10
            name: GlobalStandard
infra:
  provider: bicep
  path: ./infra
# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml
kind: hosted
name: MyAgent
description: "A helpful assistant that can answer questions and perform tasks."
metadata:
  authors:
    - developer
  example:
    - content: "Hello, what can you help me with?"
      role: user
  tags:
    - starter
    - assistant
protocols:
  - protocol: responses
    version: v1
environment_variables:
  - name: FOUNDRY_PROJECT_ENDPOINT
    value: ${AZURE_AI_PROJECT_ENDPOINT}
  - name: FOUNDRY_MODEL_DEPLOYMENT_NAME
    value: gpt-4o-mini
  - name: APPLICATIONINSIGHTS_CONNECTION_STRING
    value: ${APPLICATIONINSIGHTS_CONNECTION_STRING}
import asyncio
import os
import logging
from typing import Annotated
from azure.identity.aio import DefaultAzureCredential
from agent_framework.azure import AzureAIAgentClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.monitor.opentelemetry import configure_azure_monitor
from dotenv import load_dotenv

load_dotenv(override=True)
logger = logging.getLogger(__name__)

if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
    configure_azure_monitor(enable_live_metrics=True, logger_name="__main__")

ENDPOINT = os.getenv("FOUNDRY_PROJECT_ENDPOINT", "")
MODEL_DEPLOYMENT_NAME = os.getenv("FOUNDRY_MODEL_DEPLOYMENT_NAME", "")

# ===========================================
# Define your tools here
# ===========================================
def greet(
    name: Annotated[str, "The name of the person to greet"]
) -> str:
    """Greet someone by name.

    Args:
        name: The person's name
    """
    return f"Hello, {name}! Nice to meet you."

def get_current_time() -> str:
    """Get the current date and time."""
    from datetime import datetime
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def calculate(
    expression: Annotated[str, "A mathematical expression to evaluate, e.g. '2 + 2'"]
) -> str:
    """Safely evaluate a mathematical expression.

    Args:
        expression: Math expression like '2 + 2' or '10 * 5'
    """
    # Safe evaluation of basic math
    allowed_chars = set("0123456789+-*/(). ")
    if not all(c in allowed_chars for c in expression):
        return "Error: Invalid characters in expression"
    try:
        result = eval(expression)
        return f"{expression} = {result}"
    except Exception as e:
        return f"Error: {str(e)}"

# Collect all tools
tools = [greet, get_current_time, calculate]

# ===========================================
# Agent Server
# ===========================================
async def run_server():
    """Run the agent as an HTTP server."""
    credential = DefaultAzureCredential()
    try:
        client = AzureAIAgentClient(
            project_endpoint=ENDPOINT,
            model_deployment_name=MODEL_DEPLOYMENT_NAME,
            credential=credential,
        )
        agent = client.create_agent(
            name="MyAgent",
            model=MODEL_DEPLOYMENT_NAME,
            instructions="""You are a helpful assistant. You can:
- Greet people by name
- Tell the current time
- Perform basic math calculations
Be friendly and helpful. Use the available tools when appropriate.""",
            tools=tools,
        )
        logger.info("Starting MyAgent HTTP Server...")
        print("Starting MyAgent HTTP Server on port 8088...")
        await from_agent_framework(agent).run_async()
    finally:
        await credential.close()

def main():
    """Main entry point."""
    asyncio.run(run_server())

if __name__ == "__main__":
    main()
# Core agent packages
agent-framework-azure-ai
agent-framework-core
azure-ai-agentserver-agentframework
# Web server (required by agent server)
uvicorn
fastapi
# Azure identity
azure-identity
# Environment
python-dotenv
# Monitoring
azure-monitor-opentelemetry
FROM python:3.11-slim
WORKDIR /app
COPY ./ user_agent/
WORKDIR /app/user_agent
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8088
ENV PORT=8088
ENV PYTHONUNBUFFERED=1
CMD ["python", "main.py"]
Use the standard Bicep template from the Infrastructure section above, or point users to clone from the starter template:
# Alternative: Start from official template
azd init -t Azure-Samples/azd-ai-starter-basic
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"environmentName": { "value": "${AZURE_ENV_NAME}" },
"location": { "value": "${AZURE_LOCATION}" },
"aiDeploymentsLocation": { "value": "${AZURE_AI_DEPLOYMENTS_LOCATION}" },
"principalId": { "value": "${AZURE_PRINCIPAL_ID}" },
"principalType": { "value": "${AZURE_PRINCIPAL_TYPE}" },
"aiProjectDeploymentsJson": { "value": "${AI_PROJECT_DEPLOYMENTS}" },
"aiProjectConnectionsJson": { "value": "${AI_PROJECT_CONNECTIONS}" },
"aiProjectDependentResourcesJson": { "value": "${AI_PROJECT_DEPENDENT_RESOURCES}" },
"enableHostedAgents": { "value": "${ENABLE_HOSTED_AGENTS=true}" }
}
}
my-agent/
├── azure.yaml
├── infra/
│   ├── main.bicep
│   ├── main.parameters.json
│   └── core/
│       └── ai/
│           └── ai-project.bicep
└── src/
    └── MyAgent/
        ├── agent.yaml
        ├── Dockerfile
        ├── main.py
        └── requirements.txt
cd my-agent
# Login to Azure
azd auth login
# Initialize environment (creates .azure folder)
azd init
# Deploy everything
azd up
The agent will be live at the Azure AI Foundry endpoint shown in the output.
When a user says "prepare my calculator agent for azd ai":
1. Start from their calculator.py with add/multiply/divide functions
2. Create the src/CalculatorAgent/ directory
3. Convert the code into main.py
4. Generate agent.yaml with metadata
5. Generate the Dockerfile
6. Generate requirements.txt
7. Generate azure.yaml with the service definition
8. Copy infra/ with Bicep files
9. Deploy with azd up

Before completing, always validate generated YAML files:
Quote all string values that contain:
- Colons followed by a space (: )
- YAML special characters (#, &, *, !, |, >, ', ", %, @, `)

Required quoting patterns:
# CORRECT
description: "A helpful agent that answers questions."
content: "What is 2 + 2?"
content: "Subject: Meeting - Let's discuss the project."
# INCORRECT - will break parsing
description: A helpful agent that answers questions.
content: What is 2 + 2?
content: Subject: Meeting - Let's discuss the project.
Escape internal quotes:
content: "He said \"hello\" to everyone."
Validate YAML syntax before finishing:
# Python
python -c "import yaml; yaml.safe_load(open('agent.yaml'))"
# Node.js
node -e "require('js-yaml').load(require('fs').readFileSync('agent.yaml'))"
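The same check can run in-process, which also makes the quoting rule concrete: an unquoted value containing ": " fails to parse, while the quoted form loads. (Uses PyYAML, the same library as the one-liner above; the helper itself is illustrative.)

```python
import yaml  # PyYAML

def parses(snippet: str) -> bool:
    """Return True if the YAML snippet parses without error."""
    try:
        yaml.safe_load(snippet)
        return True
    except yaml.YAMLError:
        return False

# Unquoted ': ' inside the value breaks parsing; quoting fixes it
bad = "content: Subject: Meeting - Let's discuss the project."
good = 'content: "Subject: Meeting - Let\'s discuss the project."'
```
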
Check for common errors:
Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.
Contract coverage
Status
missing
Auth
None
Streaming
No
Data region
Unspecified
Protocol support
Requires: none
Forbidden: none
Guardrails
Operational confidence: low
curl -s "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/snapshot"
curl -s "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract"
curl -s "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust"
Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.
Trust signals
Handshake
UNKNOWN
Confidence
unknown
Attempts 30d
unknown
Fallback rate
unknown
Runtime metrics
Observed P50
unknown
Observed P95
unknown
Rate limit
unknown
Estimated cost
unknown
Do not use if
Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.
Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.
Rank
70
AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs
Traction
No public download signal
Freshness
Updated 5d ago
Rank
70
Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!
Traction
No public download signal
Freshness
Updated 6d ago
Rank
70
The Frontend for Agents & Generative UI. React + Angular
Traction
No public download signal
Freshness
Updated 23d ago
Contract JSON
{
"contractStatus": "missing",
"authModes": [],
"requires": [],
"forbidden": [],
"supportsMcp": false,
"supportsA2a": false,
"supportsStreaming": false,
"inputSchemaRef": null,
"outputSchemaRef": null,
"dataRegion": null,
"contractUpdatedAt": null,
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Invocation Guide
{
"preferredApi": {
"snapshotUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/snapshot",
"contractUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract",
"trustUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust"
},
"curlExamples": [
"curl -s \"https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/snapshot\"",
"curl -s \"https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract\"",
"curl -s \"https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust\""
],
"jsonRequestTemplate": {
"query": "summarize this repo",
"constraints": {
"maxLatencyMs": 2000,
"protocolPreference": [
"OPENCLEW"
]
}
},
"jsonResponseTemplate": {
"ok": true,
"result": {
"summary": "...",
"confidence": 0.9
},
"meta": {
"source": "GITHUB_OPENCLEW",
"generatedAt": "2026-04-17T00:54:07.742Z"
}
},
"retryPolicy": {
"maxAttempts": 3,
"backoffMs": [
500,
1500,
3500
],
"retryableConditions": [
"HTTP_429",
"HTTP_503",
"NETWORK_TIMEOUT"
]
}
}

Trust JSON
{
"status": "unavailable",
"handshakeStatus": "UNKNOWN",
"verificationFreshnessHours": null,
"reputationScore": null,
"p95LatencyMs": null,
"successRate30d": null,
"fallbackRate": null,
"attempts30d": null,
"trustUpdatedAt": null,
"trustConfidence": "unknown",
"sourceUpdatedAt": null,
"freshnessSeconds": null
}

Capability Matrix
{
"rows": [
{
"key": "OPENCLEW",
"type": "protocol",
"support": "unknown",
"confidenceSource": "profile",
"notes": "Listed on profile"
},
{
"key": "answer",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "you",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
},
{
"key": "first",
"type": "capability",
"support": "supported",
"confidenceSource": "profile",
"notes": "Declared in agent profile metadata"
}
],
"flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:answer|supported|profile capability:you|supported|profile capability:first|supported|profile"
}

Facts JSON
[
{
"factKey": "docs_crawl",
"category": "integration",
"label": "Crawlable docs",
"value": "6 indexed pages on the official domain",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
},
{
"factKey": "vendor",
"category": "vendor",
"label": "Vendor",
"value": "Spboyer",
"href": "https://github.com/spboyer/skill-azd-ai-init",
"sourceUrl": "https://github.com/spboyer/skill-azd-ai-init",
"sourceType": "profile",
"confidence": "medium",
"observedAt": "2026-04-15T03:15:12.461Z",
"isPublic": true
},
{
"factKey": "protocols",
"category": "compatibility",
"label": "Protocol compatibility",
"value": "OpenClaw",
"href": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract",
"sourceUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract",
"sourceType": "contract",
"confidence": "medium",
"observedAt": "2026-04-15T03:15:12.461Z",
"isPublic": true
},
{
"factKey": "handshake_status",
"category": "security",
"label": "Handshake status",
"value": "UNKNOWN",
"href": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust",
"sourceUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust",
"sourceType": "trust",
"confidence": "medium",
"observedAt": null,
"isPublic": true
}
]

Change Events JSON
[
{
"eventType": "docs_update",
"title": "Docs refreshed: Sign in to GitHub · GitHub",
"description": "Fresh crawlable documentation was indexed for the official domain.",
"href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
"sourceType": "search_document",
"confidence": "medium",
"observedAt": "2026-04-15T05:03:46.393Z",
"isPublic": true
}
]

Sponsored
Ads related to azd-ai-init and adjacent AI workflows.