Crawler Summary

azd-ai-init answer-first brief

Structure agent code for Azure's `azd ai` command. Use when users mention "azd ai", "azd init agent", "Foundry agent", "scaffold agent", "convert to azd", "update for azd", "upgrade to azd ai", "fix azd ai", "migrate to Foundry", or want to deploy, convert, update, fix, or upgrade an AI agent for Azure. (Skill name: azd-ai-init; model: claude-opus-4-5.) Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Freshness

Last checked 4/15/2026

Best For

azd-ai-init is best for answer-first workflows where OpenClaw compatibility matters.

Not Ideal For

Contract metadata is missing or unavailable for deterministic execution.

Evidence Sources Checked

editorial-content, GITHUB OPENCLAW, runtime-metrics, public facts pack

Claim this agent
Agent Dossier · GitHub · Safety: 67/100

azd-ai-init

Structure agent code for Azure's `azd ai` command. Use when users mention "azd ai", "azd init agent", "Foundry agent", "scaffold agent", "convert to azd", "update for azd", "upgrade to azd ai", "fix azd ai", "migrate to Foundry", or want to deploy, convert, update, fix, or upgrade an AI agent for Azure.

OpenClaw (self-declared)

Public facts

4

Change events

1

Artifacts

0

Freshness

Apr 15, 2026

Verified · editorial-content · No verified compatibility signals

Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Trust evidence available

Trust score

Unknown

Compatibility

OpenClaw

Freshness

Apr 15, 2026

Vendor

Spboyer

Artifacts

0

Benchmarks

0

Last release

Unpublished

Executive Summary

Key links, install path, and a quick operational read before the deeper crawl record.

Verified · editorial-content

Summary

Capability contract not published. No trust telemetry is available yet. Last updated 4/15/2026.

Setup snapshot

git clone https://github.com/spboyer/skill-azd-ai-init.git

  1. Setup complexity is LOW. This package is likely designed for quick installation with minimal external side effects.

  2. Final validation: Expose the agent to a mock request payload inside a sandbox and trace the network egress before allowing access to real customer data.

Evidence Ledger

Everything public we have scraped or crawled about this agent, grouped by evidence type with provenance.

Verified · editorial-content
Vendor (1)

Vendor

Spboyer

profile · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Compatibility (1)

Protocol compatibility

OpenClaw

contract · medium confidence
Observed Apr 15, 2026 · Source link · Provenance
Security (1)

Handshake status

UNKNOWN

trust · medium confidence
Observed unknown · Source link · Provenance
Integration (1)

Crawlable docs

6 indexed pages on the official domain

search_document · medium confidence
Observed Apr 15, 2026 · Source link · Provenance

Release & Crawl Timeline

Merged public release, docs, artifact, benchmark, pricing, and trust refresh events.

Self-declared · agent-index

Artifacts Archive

Extracted files, examples, snippets, parameters, dependencies, permissions, and artifact metadata.

Self-declared · GITHUB OPENCLAW

Extracted files

0

Examples

6

Snippets

0

Languages

typescript

Parameters

Executable Examples

text

project-root/
├── azure.yaml              # Project configuration (REQUIRED)
├── infra/                  # Bicep infrastructure files (REQUIRED)
│   ├── main.bicep
│   ├── main.parameters.json
│   └── core/               # Reusable Bicep modules
│       └── ai/
│           └── ai-project.bicep
└── src/
    └── <AgentName>/        # Agent source folder (REQUIRED)
        ├── agent.yaml      # Agent definition (REQUIRED)
        ├── Dockerfile      # Container build file (REQUIRED)
        ├── main.py         # Agent entry point
        └── requirements.txt

yaml

# yaml-language-server: $schema=https://raw.githubusercontent.com/Azure/azure-dev/main/schemas/v1.0/azure.yaml.json

requiredVersions:
    extensions:
        azure.ai.agents: '>=0.1.0-preview'

name: <project-name>

services:
    <AgentName>:
        project: src/<AgentName>
        host: azure.ai.agent
        language: docker
        docker:
            remoteBuild: true
        config:
            container:
                resources:
                    cpu: "1"
                    memory: 2Gi
                scale:
                    maxReplicas: 3
                    minReplicas: 1
            deployments:
                - model:
                    format: OpenAI
                    name: gpt-4o-mini
                    version: "2024-07-18"
                  name: gpt-4o-mini
                  sku:
                    capacity: 10
                    name: GlobalStandard

infra:
    provider: bicep
    path: ./infra

yaml

# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml

kind: hosted
name: <AgentName>
description: "<Brief description of what the agent does>"

metadata:
    authors:
        - <author-name>
    example:
        - content: "<Example user prompt - always quote strings with special characters>"
          role: user
    tags:
        - <tag1>
        - <tag2>

protocols:
    - protocol: responses
      version: v1

environment_variables:
  - name: FOUNDRY_PROJECT_ENDPOINT
    value: ${AZURE_AI_PROJECT_ENDPOINT}
  - name: FOUNDRY_MODEL_DEPLOYMENT_NAME
    value: gpt-4o-mini
  - name: APPLICATIONINSIGHTS_CONNECTION_STRING
    value: ${APPLICATIONINSIGHTS_CONNECTION_STRING}

dockerfile

FROM python:3.11-slim

WORKDIR /app

COPY ./ user_agent/

WORKDIR /app/user_agent

RUN if [ -f requirements.txt ]; then \
        pip install -r requirements.txt; \
    else \
        echo "No requirements.txt found"; \
    fi

EXPOSE 8088

ENV PORT=8088

CMD ["python", "main.py"]

dockerfile

FROM node:20-slim

WORKDIR /app

COPY package*.json ./
RUN npm ci --only=production

COPY . .

EXPOSE 8088

ENV PORT=8088

CMD ["node", "dist/main.js"]

python

import asyncio
import os
import logging
from typing import Annotated

from azure.identity.aio import DefaultAzureCredential
from agent_framework.azure import AzureAIAgentClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.monitor.opentelemetry import configure_azure_monitor
from dotenv import load_dotenv

load_dotenv(override=True)

logger = logging.getLogger(__name__)

if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
    configure_azure_monitor(enable_live_metrics=True, logger_name="__main__")

ENDPOINT = os.getenv("FOUNDRY_PROJECT_ENDPOINT", "")
MODEL_DEPLOYMENT_NAME = os.getenv("FOUNDRY_MODEL_DEPLOYMENT_NAME", "")

# Define your tools as functions with type annotations
# IMPORTANT: Use simple strings in Annotated[], NOT Pydantic Field objects
def my_tool(
    param1: Annotated[str, "Description of param1"],
    param2: Annotated[int, "Description of param2"]
) -> str:
    """Tool description that the model will see."""
    # Tool implementation
    return "result"

tools = [my_tool]

async def run_server():
    """Run the agent as an HTTP server."""
    credential = DefaultAzureCredential()
    
    try:
        client = AzureAIAgentClient(
            project_endpoint=ENDPOINT,
            model_deployment_name=MODEL_DEPLOYMENT_NAME,
            credential=credential,
        )
        
        agent = client.create_agent(
            name="<AgentName>",
            model=MODEL_DEPLOYMENT_NAME,
            instructions="<Your agent system instructions>",
            tools=tools,
        )
        
        logger.info("Starting Agent HTTP Server...")
        await from_agent_framework(agent).run_async()
    finally:
        await credential.close()

def main():
    asyncio.run(run_server())

if __name__ == "__main__":
    main()

Docs & README

Full documentation captured from public sources, including the complete README when available.

Self-declared · GITHUB OPENCLAW

Docs source

GITHUB OPENCLAW

Editorial quality

ready

Structure agent code for Azure's `azd ai` command. Use when users mention "azd ai", "azd init agent", "Foundry agent", "scaffold agent", "convert to azd", "update for azd", "upgrade to azd ai", "fix azd ai", "migrate to Foundry", or want to deploy, convert, update, fix, or upgrade an AI agent for Azure.

Full README

name: azd-ai-init
description: Structure agent code for Azure's azd ai command. Use when users mention "azd ai", "azd init agent", "Foundry agent", "scaffold agent", "convert to azd", "update for azd", "upgrade to azd ai", "fix azd ai", "migrate to Foundry", or want to deploy, convert, update, fix, or upgrade an AI agent for Azure.
model: claude-opus-4-5

Azure AI Agent Scaffolding Skill

This skill helps developers prepare their AI agent code for deployment to Azure AI Foundry using the azd ai extension of the Azure Developer CLI.

When to Use This Skill

Use this skill when a user wants to:

  • Scaffold a new agent from scratch (greenfield project with no existing code)
  • Convert existing agent code to the azd ai expected format
  • Scaffold a new Azure AI Foundry agent project from scratch
  • Structure their agent for deployment with azd up
  • Understand what files and configuration azd ai requires
  • Migrate from other agent frameworks to Azure AI Foundry hosted agents

Core Workflow

Step 1: Analyze the User's Current Project

First, understand what the user has:

  1. Detect existing code: Look for agent implementations (Python, TypeScript, etc.)
  2. Identify the agent framework: LangGraph, Semantic Kernel, AutoGen, custom, etc.
  3. Find entry points: main.py, index.ts, or other entry files
  4. Check for existing configuration: azure.yaml, agent.yaml, Dockerfile, requirements.txt, package.json
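The detection pass above can be sketched as a small helper. This is an illustrative script, not part of azd; the file names come straight from the checklist:

```python
from pathlib import Path

# File names from the checklist above; nothing here is azd-specific API.
ENTRY_POINTS = ["main.py", "index.ts", "index.js", "app.py"]
CONFIG_FILES = ["azure.yaml", "agent.yaml", "Dockerfile",
                "requirements.txt", "package.json"]

def inspect_project(root: str) -> dict:
    """Report which entry points and configuration files exist under root."""
    base = Path(root)

    def present(names):
        # rglob with a bare file name matches it at any depth
        return sorted({p.name for name in names for p in base.rglob(name)})

    return {"entry_points": present(ENTRY_POINTS),
            "configs": present(CONFIG_FILES)}
```

An empty `configs` list is the signal that Steps 2 and 3 below must generate everything from scratch.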

Step 2: Generate Required Files

The azd ai extension expects a specific project structure:

project-root/
├── azure.yaml              # Project configuration (REQUIRED)
├── infra/                  # Bicep infrastructure files (REQUIRED)
│   ├── main.bicep
│   ├── main.parameters.json
│   └── core/               # Reusable Bicep modules
│       └── ai/
│           └── ai-project.bicep
└── src/
    └── <AgentName>/        # Agent source folder (REQUIRED)
        ├── agent.yaml      # Agent definition (REQUIRED)
        ├── Dockerfile      # Container build file (REQUIRED)
        ├── main.py         # Agent entry point
        └── requirements.txt

Step 3: Create Configuration Files

azure.yaml (Project Root)

This is the main project configuration file that defines services and infrastructure:

# yaml-language-server: $schema=https://raw.githubusercontent.com/Azure/azure-dev/main/schemas/v1.0/azure.yaml.json

requiredVersions:
    extensions:
        azure.ai.agents: '>=0.1.0-preview'

name: <project-name>

services:
    <AgentName>:
        project: src/<AgentName>
        host: azure.ai.agent
        language: docker
        docker:
            remoteBuild: true
        config:
            container:
                resources:
                    cpu: "1"
                    memory: 2Gi
                scale:
                    maxReplicas: 3
                    minReplicas: 1
            deployments:
                - model:
                    format: OpenAI
                    name: gpt-4o-mini
                    version: "2024-07-18"
                  name: gpt-4o-mini
                  sku:
                    capacity: 10
                    name: GlobalStandard

infra:
    provider: bicep
    path: ./infra

agent.yaml (Inside src/<AgentName>/)

Defines the agent's metadata, protocols, and environment variables:

# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml

kind: hosted
name: <AgentName>
description: "<Brief description of what the agent does>"

metadata:
    authors:
        - <author-name>
    example:
        - content: "<Example user prompt - always quote strings with special characters>"
          role: user
    tags:
        - <tag1>
        - <tag2>

protocols:
    - protocol: responses
      version: v1

environment_variables:
  - name: FOUNDRY_PROJECT_ENDPOINT
    value: ${AZURE_AI_PROJECT_ENDPOINT}
  - name: FOUNDRY_MODEL_DEPLOYMENT_NAME
    value: gpt-4o-mini
  - name: APPLICATIONINSIGHTS_CONNECTION_STRING
    value: ${APPLICATIONINSIGHTS_CONNECTION_STRING}

Note: Set FOUNDRY_MODEL_DEPLOYMENT_NAME to match the deployment name in your azure.yaml (e.g., gpt-4o-mini).

⚠️ Environment Variable Naming: The hosted agent platform injects variables with FOUNDRY_ prefix. Your Python code must read FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL_DEPLOYMENT_NAME (not AZURE_* prefixes). The agent.yaml maps Azure outputs to the expected names.
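The deployment-name rule above can be enforced mechanically. A minimal sketch (my own helper, not an azd feature) that takes the two files already parsed into dicts, e.g. via `yaml.safe_load`, and checks that `FOUNDRY_MODEL_DEPLOYMENT_NAME` names a deployment azure.yaml actually defines:

```python
# Sketch: cross-check parsed azure.yaml and agent.yaml contents. The dict
# shapes assumed here match the examples shown in this document.

def deployment_names(azure_doc: dict) -> set:
    """Collect every model deployment name across all services."""
    names = set()
    for service in azure_doc.get("services", {}).values():
        for dep in service.get("config", {}).get("deployments", []):
            names.add(dep.get("name"))
    return names

def declared_model(agent_doc: dict) -> str:
    """Read the FOUNDRY_MODEL_DEPLOYMENT_NAME value from agent.yaml."""
    for var in agent_doc.get("environment_variables", []):
        if var.get("name") == "FOUNDRY_MODEL_DEPLOYMENT_NAME":
            return var.get("value", "")
    return ""

def model_matches(azure_doc: dict, agent_doc: dict) -> bool:
    return declared_model(agent_doc) in deployment_names(azure_doc)
```

A mismatch here means the hosted agent will start with a model deployment that was never provisioned.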

IMPORTANT YAML Formatting Rules:

  • Always wrap content: and description: values in double quotes
  • Escape internal quotes with backslash: "He said \"hello\""
  • Strings with colons, commas, or special characters MUST be quoted
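The riskiest of these rules can be caught before writing the file. A minimal lint sketch (a custom helper, not part of the skill) that flags unquoted scalar values containing a colon, a comma, or a leading special character:

```python
# Leading characters that give a YAML scalar special meaning.
SPECIAL_LEAD = set("#&*!|>'\"%@`")

def needs_quotes(raw_value: str) -> bool:
    """Return True if a YAML scalar value should be wrapped in quotes."""
    v = raw_value.strip()
    if not v:
        return False
    if len(v) > 1 and v[0] in "\"'" and v[-1] == v[0]:
        return False  # already quoted
    # Colon-plus-space, commas, and leading special characters are the
    # cases most likely to break parsing or silently change meaning.
    return ": " in v or v.endswith(":") or "," in v or v[0] in SPECIAL_LEAD
```

Running this over each generated `content:` and `description:` value catches the "Subject: Meeting" class of failure before `azd up` does.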

Dockerfile

Standard Python container for hosted agents:

FROM python:3.11-slim

WORKDIR /app

COPY ./ user_agent/

WORKDIR /app/user_agent

RUN if [ -f requirements.txt ]; then \
        pip install -r requirements.txt; \
    else \
        echo "No requirements.txt found"; \
    fi

EXPOSE 8088

ENV PORT=8088

CMD ["python", "main.py"]

For TypeScript/Node.js agents:

FROM node:20-slim

WORKDIR /app

COPY package*.json ./
RUN npm ci --only=production

COPY . .

EXPOSE 8088

ENV PORT=8088

CMD ["node", "dist/main.js"]

Step 4: Adapt the Agent Code

The agent code must use the Azure AI Agent Framework pattern to run as a hosted agent:

Client Selection Guide

| Scenario | Client Type | Notes |
|----------|-------------|-------|
| Local development with AI Services endpoint | AzureOpenAIChatClient | Uses ChatAgent pattern |
| Hosted agent deployment (azd up) | AzureAIAgentClient | Required - Uses create_agent + from_agent_framework |
| Foundry Project endpoint | AzureAIAgentClient | Requires FOUNDRY_* env vars |

Python Example (using agent_framework)

import asyncio
import os
import logging
from typing import Annotated

from azure.identity.aio import DefaultAzureCredential
from agent_framework.azure import AzureAIAgentClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.monitor.opentelemetry import configure_azure_monitor
from dotenv import load_dotenv

load_dotenv(override=True)

logger = logging.getLogger(__name__)

if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
    configure_azure_monitor(enable_live_metrics=True, logger_name="__main__")

ENDPOINT = os.getenv("FOUNDRY_PROJECT_ENDPOINT", "")
MODEL_DEPLOYMENT_NAME = os.getenv("FOUNDRY_MODEL_DEPLOYMENT_NAME", "")

# Define your tools as functions with type annotations
# IMPORTANT: Use simple strings in Annotated[], NOT Pydantic Field objects
def my_tool(
    param1: Annotated[str, "Description of param1"],
    param2: Annotated[int, "Description of param2"]
) -> str:
    """Tool description that the model will see."""
    # Tool implementation
    return "result"

tools = [my_tool]

async def run_server():
    """Run the agent as an HTTP server."""
    credential = DefaultAzureCredential()
    
    try:
        client = AzureAIAgentClient(
            project_endpoint=ENDPOINT,
            model_deployment_name=MODEL_DEPLOYMENT_NAME,
            credential=credential,
        )
        
        agent = client.create_agent(
            name="<AgentName>",
            model=MODEL_DEPLOYMENT_NAME,
            instructions="<Your agent system instructions>",
            tools=tools,
        )
        
        logger.info("Starting Agent HTTP Server...")
        await from_agent_framework(agent).run_async()
    finally:
        await credential.close()

def main():
    asyncio.run(run_server())

if __name__ == "__main__":
    main()

Required Python Dependencies (requirements.txt)

# Core agent packages
agent-framework-azure-ai
agent-framework-core
azure-ai-agentserver-agentframework

# Web server (required by agent server)
uvicorn
fastapi

# Azure identity
azure-identity

# Environment
python-dotenv

# Monitoring
azure-monitor-opentelemetry

Infrastructure (Bicep Files)

CRITICAL: The azd ai extension requires infrastructure that provisions Microsoft.CognitiveServices/accounts/projects resources. The Bicep modules are complex (300+ lines) and must be obtained from the official template.

Getting the Infrastructure (Required)

Always use the official starter template for infrastructure:

# Option 1: Initialize a new project with infra included
azd init -t Azure-Samples/azd-ai-starter-basic

# Option 2: Copy infra to an existing project
git clone --depth 1 https://github.com/Azure-Samples/azd-ai-starter-basic.git temp-starter
cp -r temp-starter/infra ./infra
rm -rf temp-starter

The official infra/ folder contains:

infra/
├── main.bicep                 # Main deployment orchestrator
├── main.parameters.json       # Parameter mappings
├── abbreviations.json         # Resource naming conventions
└── core/
    └── ai/
        └── ai-project.bicep   # AI Foundry provisioning module

What the Infrastructure Provisions

The core/ai/ai-project.bicep module creates:

  • Microsoft.CognitiveServices/accounts - AI Services account (Foundry)
  • Microsoft.CognitiveServices/accounts/projects - Foundry project (nested resource)
  • Container Registry - For agent container images
  • Application Insights - Monitoring and logging
  • Log Analytics Workspace - Log storage
  • Model Deployments - GPT-4o, GPT-4o-mini, etc.
  • Capability Host - For hosted agents

Required Bicep Outputs

The main.bicep must output these environment variables for azd ai to work:

// Required outputs - azd ai uses these to locate resources
output AZURE_RESOURCE_GROUP string = resourceGroupName
output AZURE_AI_ACCOUNT_ID string = aiProject.outputs.accountId
output AZURE_AI_PROJECT_ID string = aiProject.outputs.projectId
output AZURE_AI_ACCOUNT_NAME string = aiProject.outputs.aiServicesAccountName
output AZURE_AI_PROJECT_NAME string = aiProject.outputs.projectName

// Endpoints
output AZURE_AI_PROJECT_ENDPOINT string = aiProject.outputs.AZURE_AI_PROJECT_ENDPOINT
output AZURE_OPENAI_ENDPOINT string = aiProject.outputs.AZURE_OPENAI_ENDPOINT
output APPLICATIONINSIGHTS_CONNECTION_STRING string = aiProject.outputs.APPLICATIONINSIGHTS_CONNECTION_STRING

// Container Registry
output AZURE_CONTAINER_REGISTRY_ENDPOINT string = aiProject.outputs.dependentResources.registry.loginServer

Note: The AZURE_AI_PROJECT_ID must be in the format:

/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.CognitiveServices/accounts/{account}/projects/{project}

Do NOT use Microsoft.MachineLearningServices/workspaces - this is a different resource type that won't work with azd ai.
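The required ID format can be checked before provisioning. A small validation sketch (my own helper, under the format stated above) that accepts Cognitive Services project IDs and rejects Machine Learning workspace IDs:

```python
import re

# Matches the Cognitive Services projects format required above:
# /subscriptions/{sub}/resourceGroups/{rg}/providers/
#   Microsoft.CognitiveServices/accounts/{account}/projects/{project}
PROJECT_ID_RE = re.compile(
    r"^/subscriptions/[^/]+"
    r"/resourceGroups/[^/]+"
    r"/providers/Microsoft\.CognitiveServices"
    r"/accounts/[^/]+"
    r"/projects/[^/]+$"
)

def is_valid_project_id(resource_id: str) -> bool:
    """True only for Cognitive Services account/project resource IDs."""
    return PROJECT_ID_RE.match(resource_id) is not None
```

Anything under Microsoft.MachineLearningServices/workspaces fails this check, which is exactly the mistake the warning above describes.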

infra/main.parameters.json

Model Deployment Configuration

The deployments section in azure.yaml under each service's config defines the AI models:

| Property | Description | Example |
|----------|-------------|---------|
| name | Deployment name | gpt-4o-mini |
| model.format | Model provider | OpenAI |
| model.name | Model identifier | gpt-4o-mini |
| model.version | Model version | 2024-07-18 |
| sku.name | SKU tier | GlobalStandard |
| sku.capacity | Tokens per minute (thousands) | 10 |

Available Models

Common models for agents:

  • gpt-4o (version: 2024-08-06)
  • gpt-4o-mini (version: 2024-07-18)
  • gpt-4-turbo (version: 2024-04-09)

Region Requirements for Hosted Agents

IMPORTANT: Hosted agents are only supported in specific Azure regions.

Supported Regions (as of January 2026)

  • North Central US ✅ (Default in templates)

The provided Bicep templates default to northcentralus. If you need to change the region, verify hosted agent support first.

Deployment Commands

After scaffolding, users deploy with:

# Install the azd ai extension (if not installed)
azd extension install azure.ai.agents

# Login to Azure
azd auth login

# Initialize environment (creates .azure folder)
azd init

# Provision infrastructure and deploy agent
azd up

Or step-by-step:

azd provision    # Create Azure resources
azd deploy       # Deploy the agent

Troubleshooting Deployment

If azd deploy times out waiting for the container:

  1. Check container logs in Azure Portal:

    • Go to the AI Foundry project
    • Navigate to Agents section
    • Check container status and logs
  2. Common issues:

    • Missing dependencies in requirements.txt
    • Import errors in agent code
    • Agent not listening on port 8088
    • Missing environment variables
  3. Test locally first:

    cd src/YourAgent
    pip install -r requirements.txt
    python main.py
    
  4. Verify the agent starts a server: The agent must call from_agent_framework(agent).run_async() to start the HTTP server.
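For the "not listening on port 8088" case above, a quick local probe helps. This is a generic smoke-test sketch (not part of azd) that only checks whether a TCP connection succeeds:

```python
import socket

def port_is_listening(host: str = "127.0.0.1", port: int = 8088,
                      timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run `python main.py` in one terminal, then call `port_is_listening()` from another; False usually means the server never started or bound a different port.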

Guidelines for Converting Existing Agents

From LangGraph/LangChain

  1. Keep the graph/chain logic intact
  2. Wrap it in the AzureAIAgentClient pattern
  3. Expose tools as annotated functions
  4. Use from_agent_framework(agent).run_async() to serve

From Semantic Kernel

  1. Convert plugins to tool functions
  2. Use the same agent hosting pattern
  3. Map kernel functions to tools list

From AutoGen

  1. Extract agent logic into tool functions
  2. Define single agent using AzureAIAgentClient
  3. Multi-agent patterns may need restructuring

Common Customizations

Adding Environment Variables

In agent.yaml:

environment_variables:
  - name: CUSTOM_VAR
    value: ${MY_ENV_VAR}

In azure.yaml under service config:

config:
  env:
    CUSTOM_VAR: "value"

Adding Azure Resources (Connections)

For agents needing additional Azure services (search, storage, etc.), add to azure.yaml:

config:
  resources:
    - resource: search
      connectionName: my-search-connection
    - resource: storage
      connectionName: my-storage-connection

Available resource types:

  • search - Azure AI Search
  • storage - Azure Storage
  • registry - Azure Container Registry
  • bing_grounding - Bing Search
  • bing_custom_grounding - Bing Custom Search

Scaling Configuration

config:
  container:
    resources:
      cpu: "2"
      memory: 4Gi
    scale:
      minReplicas: 1
      maxReplicas: 10

Greenfield Scaffolding (No Existing Code)

When a user has no existing code and wants to create a new agent from scratch, generate a complete working project.

Quick Start Template

For users who say "create a new agent for azd ai" or "scaffold a new Foundry agent", generate this complete structure:

1. Create Project Structure

mkdir -p my-agent/src/MyAgent my-agent/infra/core/ai

2. azure.yaml (Project Root)

# yaml-language-server: $schema=https://raw.githubusercontent.com/Azure/azure-dev/main/schemas/v1.0/azure.yaml.json

requiredVersions:
    extensions:
        azure.ai.agents: '>=0.1.0-preview'

name: my-agent

services:
    MyAgent:
        project: src/MyAgent
        host: azure.ai.agent
        language: docker
        docker:
            remoteBuild: true
        config:
            container:
                resources:
                    cpu: "1"
                    memory: 2Gi
                scale:
                    maxReplicas: 3
                    minReplicas: 1
            deployments:
                - model:
                    format: OpenAI
                    name: gpt-4o-mini
                    version: "2024-07-18"
                  name: gpt-4o-mini
                  sku:
                    capacity: 10
                    name: GlobalStandard

infra:
    provider: bicep
    path: ./infra

3. src/MyAgent/agent.yaml

# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml

kind: hosted
name: MyAgent
description: "A helpful assistant that can answer questions and perform tasks."

metadata:
    authors:
        - developer
    example:
        - content: "Hello, what can you help me with?"
          role: user
    tags:
        - starter
        - assistant

protocols:
    - protocol: responses
      version: v1

environment_variables:
  - name: FOUNDRY_PROJECT_ENDPOINT
    value: ${AZURE_AI_PROJECT_ENDPOINT}
  - name: FOUNDRY_MODEL_DEPLOYMENT_NAME
    value: gpt-4o-mini
  - name: APPLICATIONINSIGHTS_CONNECTION_STRING
    value: ${APPLICATIONINSIGHTS_CONNECTION_STRING}

4. src/MyAgent/main.py

import asyncio
import os
import logging
from typing import Annotated

from azure.identity.aio import DefaultAzureCredential
from agent_framework.azure import AzureAIAgentClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.monitor.opentelemetry import configure_azure_monitor
from dotenv import load_dotenv

load_dotenv(override=True)

logger = logging.getLogger(__name__)

if os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING"):
    configure_azure_monitor(enable_live_metrics=True, logger_name="__main__")

ENDPOINT = os.getenv("FOUNDRY_PROJECT_ENDPOINT", "")
MODEL_DEPLOYMENT_NAME = os.getenv("FOUNDRY_MODEL_DEPLOYMENT_NAME", "")


# ===========================================
# Define your tools here
# ===========================================

def greet(
    name: Annotated[str, "The name of the person to greet"]
) -> str:
    """Greet someone by name.

    Args:
        name: The person's name
    """
    return f"Hello, {name}! Nice to meet you."


def get_current_time() -> str:
    """Get the current date and time."""
    from datetime import datetime
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")


def calculate(
    expression: Annotated[str, "A mathematical expression to evaluate, e.g. '2 + 2'"]
) -> str:
    """Safely evaluate a mathematical expression.

    Args:
        expression: Math expression like '2 + 2' or '10 * 5'
    """
    # Safe evaluation of basic math
    allowed_chars = set("0123456789+-*/(). ")
    if not all(c in allowed_chars for c in expression):
        return "Error: Invalid characters in expression"
    try:
        result = eval(expression)
        return f"{expression} = {result}"
    except Exception as e:
        return f"Error: {str(e)}"


# Collect all tools
tools = [greet, get_current_time, calculate]


# ===========================================
# Agent Server
# ===========================================

async def run_server():
    """Run the agent as an HTTP server."""
    credential = DefaultAzureCredential()
    
    try:
        client = AzureAIAgentClient(
            project_endpoint=ENDPOINT,
            model_deployment_name=MODEL_DEPLOYMENT_NAME,
            credential=credential,
        )
        
        agent = client.create_agent(
            name="MyAgent",
            model=MODEL_DEPLOYMENT_NAME,
            instructions="""You are a helpful assistant. You can:
- Greet people by name
- Tell the current time
- Perform basic math calculations

Be friendly and helpful. Use the available tools when appropriate.""",
            tools=tools,
        )
        
        logger.info("Starting MyAgent HTTP Server...")
        print("Starting MyAgent HTTP Server on port 8088...")
        
        await from_agent_framework(agent).run_async()
    finally:
        await credential.close()


def main():
    """Main entry point."""
    asyncio.run(run_server())


if __name__ == "__main__":
    main()

5. src/MyAgent/requirements.txt

# Core agent packages
agent-framework-azure-ai
agent-framework-core
azure-ai-agentserver-agentframework

# Web server (required by agent server)
uvicorn
fastapi

# Azure identity
azure-identity

# Environment
python-dotenv

# Monitoring
azure-monitor-opentelemetry

6. src/MyAgent/Dockerfile

FROM python:3.11-slim

WORKDIR /app

COPY ./ user_agent/

WORKDIR /app/user_agent

RUN pip install --no-cache-dir -r requirements.txt

EXPOSE 8088

ENV PORT=8088
ENV PYTHONUNBUFFERED=1

CMD ["python", "main.py"]

7. infra/main.bicep

Use the standard Bicep template from the Infrastructure section above, or point users to clone from the starter template:

# Alternative: Start from official template
azd init -t Azure-Samples/azd-ai-starter-basic

8. infra/main.parameters.json

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environmentName": { "value": "${AZURE_ENV_NAME}" },
    "location": { "value": "${AZURE_LOCATION}" },
    "aiDeploymentsLocation": { "value": "${AZURE_AI_DEPLOYMENTS_LOCATION}" },
    "principalId": { "value": "${AZURE_PRINCIPAL_ID}" },
    "principalType": { "value": "${AZURE_PRINCIPAL_TYPE}" },
    "aiProjectDeploymentsJson": { "value": "${AI_PROJECT_DEPLOYMENTS}" },
    "aiProjectConnectionsJson": { "value": "${AI_PROJECT_CONNECTIONS}" },
    "aiProjectDependentResourcesJson": { "value": "${AI_PROJECT_DEPENDENT_RESOURCES}" },
    "enableHostedAgents": { "value": "${ENABLE_HOSTED_AGENTS=true}" }
  }
}

Final Project Structure

my-agent/
├── azure.yaml
├── infra/
│   ├── main.bicep
│   ├── main.parameters.json
│   └── core/
│       └── ai/
│           └── ai-project.bicep
└── src/
    └── MyAgent/
        ├── agent.yaml
        ├── Dockerfile
        ├── main.py
        └── requirements.txt

Deploy the Agent

cd my-agent

# Login to Azure
azd auth login

# Initialize environment (creates .azure folder)
azd init

# Deploy everything
azd up

The agent will be live at the Azure AI Foundry endpoint shown in the output.


Example: Complete Scaffolding Session (Existing Code)

When a user says "prepare my calculator agent for azd ai":

  1. Analyze: Find their calculator.py with add/multiply/divide functions
  2. Create structure:
    • Create src/CalculatorAgent/ directory
    • Move/adapt code to main.py
    • Create agent.yaml with metadata
    • Create Dockerfile
    • Create requirements.txt
  3. Create root configs:
    • Create azure.yaml with service definition
    • Create infra/ with Bicep files
  4. Provide next steps: Tell user to run azd up

References

YAML Validation Checklist

Before completing, always validate generated YAML files:

  1. Quote all string values that contain:

    • Colons (:)
    • Commas (,)
    • Special characters (#, &, *, !, |, >, ', ", %, @, `)
    • Leading/trailing spaces
  2. Required quoting patterns:

    # CORRECT
    description: "A helpful agent that answers questions."
    content: "What is 2 + 2?"
    content: "Subject: Meeting - Let's discuss the project."
    
    # INCORRECT - will break parsing
    description: A helpful agent that answers questions.
    content: What is 2 + 2?
    content: Subject: Meeting - Let's discuss the project.
    
  3. Escape internal quotes:

    content: "He said \"hello\" to everyone."
    
  4. Validate YAML syntax before finishing:

    # Python
    python -c "import yaml; yaml.safe_load(open('agent.yaml'))"
    
    # Node.js
    node -e "require('js-yaml').load(require('fs').readFileSync('agent.yaml'))"
    
  5. Check for common errors:

    • Inconsistent indentation (use 2 or 4 spaces, not tabs)
    • Missing quotes around values with special characters
    • Trailing whitespace
    • Missing required fields
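The quoting rules in item 1 can be approximated with a small heuristic linter. This is a sketch, not a real YAML parser: it only flags plain (unquoted) scalar values on simple `key: value` lines that contain characters from the list above:

```python
import re

# Characters from the checklist that make an unquoted YAML value risky.
RISKY = set(':,#&*!|>\'"%@`')

def needs_quoting(value: str) -> bool:
    """Heuristic: True if a plain (unquoted) scalar should be quoted."""
    if value != value.strip():          # leading/trailing spaces
        return True
    return any(ch in RISKY for ch in value)

def lint_line(line: str):
    """Check one `key: value` line; return the key if the value
    looks unsafe, else None. Already-quoted values are skipped."""
    m = re.match(r'^\s*([\w.-]+):\s+(.*)$', line)
    if not m:
        return None
    key, value = m.groups()
    if value.startswith(('"', "'")):    # already quoted
        return None
    return key if needs_quoting(value) else None
```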

Contract & API

Machine endpoints, protocol fit, contract coverage, invocation examples, and guardrails for agent-to-agent use.

Missing · GITHUB OPENCLEW

Contract coverage

Status: missing
Auth: None
Streaming: No
Data region: Unspecified

Protocol support

OpenClaw: self-declared
Requires: none
Forbidden: none

Guardrails

Operational confidence: low

No positive guardrails captured.
Invocation examples
curl -s "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/snapshot"
curl -s "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract"
curl -s "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust"

Reliability & Benchmarks

Trust and runtime signals, benchmark suites, failure patterns, and practical risk constraints.

Missing · runtime-metrics

Trust signals

Handshake: UNKNOWN
Confidence: unknown
Attempts 30d: unknown
Fallback rate: unknown

Runtime metrics

Observed P50: unknown
Observed P95: unknown
Rate limit: unknown
Estimated cost: unknown

Do not use if

Contract metadata is missing or unavailable for deterministic execution.
No benchmark suites or observed failure patterns are available.

Media & Demo

Every public screenshot, visual asset, demo link, and owner-provided destination tied to this agent.

Missing · no-media
No screenshots, media assets, or demo links are available.

Related Agents

Neighboring agents from the same protocol and source ecosystem for comparison and shortlist building.

Self-declared · protocol-neighbors

GITHUB_REPOS · activepieces

Rank: 70

AI Agents & MCPs & AI Workflow Automation • (~400 MCP servers for AI agents) • AI Automation / AI Agent with MCPs • AI Workflows & AI Agents • MCPs for AI Agents

Traction: No public download signal

Freshness: Updated 2d ago

OPENCLAW

GITHUB_REPOS · cherry-studio

Rank: 70

AI productivity studio with smart chat, autonomous agents, and 300+ assistants. Unified access to frontier LLMs

Traction: No public download signal

Freshness: Updated 5d ago

MCP · OPENCLAW

GITHUB_REPOS · AionUi

Rank: 70

Free, local, open-source 24/7 Cowork app and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

Traction: No public download signal

Freshness: Updated 6d ago

MCP · OPENCLAW

GITHUB_REPOS · CopilotKit

Rank: 70

The Frontend for Agents & Generative UI. React + Angular

Traction: No public download signal

Freshness: Updated 23d ago

OPENCLAW
Machine Appendix

Contract JSON

{
  "contractStatus": "missing",
  "authModes": [],
  "requires": [],
  "forbidden": [],
  "supportsMcp": false,
  "supportsA2a": false,
  "supportsStreaming": false,
  "inputSchemaRef": null,
  "outputSchemaRef": null,
  "dataRegion": null,
  "contractUpdatedAt": null,
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Invocation Guide

{
  "preferredApi": {
    "snapshotUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/snapshot",
    "contractUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract",
    "trustUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust"
  },
  "curlExamples": [
    "curl -s \"https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/snapshot\"",
    "curl -s \"https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract\"",
    "curl -s \"https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust\""
  ],
  "jsonRequestTemplate": {
    "query": "summarize this repo",
    "constraints": {
      "maxLatencyMs": 2000,
      "protocolPreference": [
        "OPENCLEW"
      ]
    }
  },
  "jsonResponseTemplate": {
    "ok": true,
    "result": {
      "summary": "...",
      "confidence": 0.9
    },
    "meta": {
      "source": "GITHUB_OPENCLEW",
      "generatedAt": "2026-04-17T00:54:07.742Z"
    }
  },
  "retryPolicy": {
    "maxAttempts": 3,
    "backoffMs": [
      500,
      1500,
      3500
    ],
    "retryableConditions": [
      "HTTP_429",
      "HTTP_503",
      "NETWORK_TIMEOUT"
    ]
  }
}
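A client can honor the retryPolicy above before falling back. A minimal sketch in Python; the RetryableError type and the injectable sleep are illustrative assumptions, and any HTTP transport can map 429/503/timeout conditions onto it:

```python
import time

# Backoff schedule and retryable conditions from the retryPolicy above.
BACKOFF_MS = [500, 1500, 3500]
RETRYABLE = {"HTTP_429", "HTTP_503", "NETWORK_TIMEOUT"}

class RetryableError(Exception):
    """Raised by the transport for a condition listed in RETRYABLE."""
    def __init__(self, condition: str):
        super().__init__(condition)
        self.condition = condition

def call_with_retry(fn, max_attempts=3, sleep=time.sleep):
    """Call `fn`, retrying retryable conditions with the backoff
    schedule above. Non-retryable errors propagate immediately."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RetryableError as err:
            if err.condition not in RETRYABLE or attempt == max_attempts - 1:
                raise
            sleep(BACKOFF_MS[attempt] / 1000.0)
```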

Trust JSON

{
  "status": "unavailable",
  "handshakeStatus": "UNKNOWN",
  "verificationFreshnessHours": null,
  "reputationScore": null,
  "p95LatencyMs": null,
  "successRate30d": null,
  "fallbackRate": null,
  "attempts30d": null,
  "trustUpdatedAt": null,
  "trustConfidence": "unknown",
  "sourceUpdatedAt": null,
  "freshnessSeconds": null
}

Capability Matrix

{
  "rows": [
    {
      "key": "OPENCLEW",
      "type": "protocol",
      "support": "unknown",
      "confidenceSource": "profile",
      "notes": "Listed on profile"
    },
    {
      "key": "answer",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "you",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    },
    {
      "key": "first",
      "type": "capability",
      "support": "supported",
      "confidenceSource": "profile",
      "notes": "Declared in agent profile metadata"
    }
  ],
  "flattenedTokens": "protocol:OPENCLEW|unknown|profile capability:answer|supported|profile capability:you|supported|profile capability:first|supported|profile"
}
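The flattenedTokens string packs each matrix row into a space-separated `type:key|support|source` token. A sketch of unpacking it back into rows (the row shape mirrors the JSON above):

```python
def parse_tokens(flattened: str):
    """Split a flattenedTokens string into capability-matrix rows.
    Each token looks like 'capability:answer|supported|profile'."""
    rows = []
    for token in flattened.split():
        head, support, source = token.split("|")
        kind, key = head.split(":", 1)
        rows.append({"type": kind, "key": key,
                     "support": support, "confidenceSource": source})
    return rows
```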

Facts JSON

[
  {
    "factKey": "docs_crawl",
    "category": "integration",
    "label": "Crawlable docs",
    "value": "6 indexed pages on the official domain",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  },
  {
    "factKey": "vendor",
    "category": "vendor",
    "label": "Vendor",
    "value": "Spboyer",
    "href": "https://github.com/spboyer/skill-azd-ai-init",
    "sourceUrl": "https://github.com/spboyer/skill-azd-ai-init",
    "sourceType": "profile",
    "confidence": "medium",
    "observedAt": "2026-04-15T03:15:12.461Z",
    "isPublic": true
  },
  {
    "factKey": "protocols",
    "category": "compatibility",
    "label": "Protocol compatibility",
    "value": "OpenClaw",
    "href": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract",
    "sourceUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/contract",
    "sourceType": "contract",
    "confidence": "medium",
    "observedAt": "2026-04-15T03:15:12.461Z",
    "isPublic": true
  },
  {
    "factKey": "handshake_status",
    "category": "security",
    "label": "Handshake status",
    "value": "UNKNOWN",
    "href": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust",
    "sourceUrl": "https://xpersona.co/api/v1/agents/spboyer-skill-azd-ai-init/trust",
    "sourceType": "trust",
    "confidence": "medium",
    "observedAt": null,
    "isPublic": true
  }
]

Change Events JSON

[
  {
    "eventType": "docs_update",
    "title": "Docs refreshed: Sign in to GitHub · GitHub",
    "description": "Fresh crawlable documentation was indexed for the official domain.",
    "href": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceUrl": "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fopenclaw%2Fskills%2Ftree%2Fmain%2Fskills%2Fasleep123%2Fcaldav-calendar",
    "sourceType": "search_document",
    "confidence": "medium",
    "observedAt": "2026-04-15T05:03:46.393Z",
    "isPublic": true
  }
]
