Repository: MicrosoftLearning/mslearn-ai-language
Branch: main
Commit: 71f8bfd1f6a2
Files: 103
Total size: 382.2 KB

Directory structure:
gitextract_wp6mqwji/

├── .github/
│   └── workflows/
│       └── voice-live-web-files.yml
├── .gitignore
├── Instructions/
│   ├── Exercises/
│   │   ├── 01-analyze-text.md
│   │   ├── 02-language-agent.md
│   │   ├── 03-gen-ai-speech.md
│   │   ├── 04-azure-speech.md
│   │   ├── 05-azure-speech-mcp.md
│   │   ├── 06-voice-live-agent.md
│   │   └── 07-translation.md
│   └── Labs/
│       ├── 01-analyze-text.md
│       ├── 02-qna.md
│       ├── 03-language-understanding.md
│       ├── 04-text-classification.md
│       ├── 05-extract-custom-entities.md
│       ├── 06-translate-text.md
│       ├── 07-speech.md
│       ├── 08-translate-speech.md
│       ├── 09-audio-chat.md
│       ├── 10-voice-live-api.md
│       └── 11-voice-live-agent-web.md
├── LICENSE
├── Labfiles/
│   ├── 01-analyze-text/
│   │   └── Python/
│   │       ├── readme.txt
│   │       └── text-analysis/
│   │           ├── requirements.txt
│   │           ├── reviews/
│   │           │   ├── review1.txt
│   │           │   ├── review2.txt
│   │           │   ├── review3.txt
│   │           │   ├── review4.txt
│   │           │   └── review5.txt
│   │           └── text-analysis.py
│   ├── 02-language-agent/
│   │   └── Python/
│   │       └── text-agent/
│   │           ├── requirements.txt
│   │           └── text-agent.py
│   ├── 02-qna/
│   │   ├── Python/
│   │   │   ├── qna-app/
│   │   │   │   ├── qna-app.py
│   │   │   │   └── requirements.txt
│   │   │   └── readme.txt
│   │   └── ask-question.sh
│   ├── 03-gen-ai-speech/
│   │   └── Python/
│   │       ├── generate-speech/
│   │       │   ├── generate-speech.py
│   │       │   └── requirements.txt
│   │       └── transcribe-speech/
│   │           ├── requirements.txt
│   │           └── transcribe-speech.py
│   ├── 03-language/
│   │   ├── Clock.json
│   │   ├── Python/
│   │   │   ├── clock-client/
│   │   │   │   ├── clock-client.py
│   │   │   │   └── requirements.txt
│   │   │   └── readme.txt
│   │   └── send-call.sh
│   ├── 04-azure-speech/
│   │   └── Python/
│   │       └── voice-mail/
│   │           ├── requirements.txt
│   │           └── voice-mail.py
│   ├── 04-text-classification/
│   │   ├── Python/
│   │   │   ├── classify-text/
│   │   │   │   ├── articles/
│   │   │   │   │   ├── test1.txt
│   │   │   │   │   └── test2.txt
│   │   │   │   ├── classify-text.py
│   │   │   │   └── requirements.txt
│   │   │   └── readme.txt
│   │   ├── classify-text.ps1
│   │   ├── test1.txt
│   │   └── test2.txt
│   ├── 05-custom-entity-recognition/
│   │   ├── Python/
│   │   │   ├── custom-entities/
│   │   │   │   ├── ads/
│   │   │   │   │   ├── test1.txt
│   │   │   │   │   └── test2.txt
│   │   │   │   ├── custom-entities.py
│   │   │   │   └── requirements.txt
│   │   │   └── readme.txt
│   │   ├── extract-entities.ps1
│   │   ├── test1.txt
│   │   └── test2.txt
│   ├── 05-speech-tool/
│   │   └── Python/
│   │       └── speech-client/
│   │           ├── requirements.txt
│   │           └── speech-client.py
│   ├── 06-translator-sdk/
│   │   └── Python/
│   │       ├── readme.txt
│   │       └── translate-text/
│   │           ├── requirements.txt
│   │           └── translate.py
│   ├── 06-voice-live/
│   │   └── Python/
│   │       └── chat-client/
│   │           ├── chat-client.py
│   │           └── requirements.txt
│   ├── 07-speech/
│   │   └── Python/
│   │       ├── readme.txt
│   │       └── speaking-clock/
│   │           ├── requirements.txt
│   │           └── speaking-clock.py
│   ├── 07-translation/
│   │   └── Python/
│   │       ├── readme.txt
│   │       └── translators/
│   │           ├── requirements.txt
│   │           ├── translate-speech.py
│   │           └── translate-text.py
│   ├── 08-speech-translation/
│   │   └── Python/
│   │       ├── readme.txt
│   │       └── translator/
│   │           ├── requirements.txt
│   │           └── translator.py
│   ├── 09-audio-chat/
│   │   └── Python/
│   │       ├── audio-chat.py
│   │       └── requirements.txt
│   └── 11-voice-live-agent/
│       └── python/
│           ├── .dockerignore
│           ├── .gitignore
│           ├── .python-version
│           ├── Dockerfile
│           ├── README.md
│           ├── azdeploy.sh
│           ├── azure.yaml
│           ├── infra/
│           │   ├── ai-foundry.bicep
│           │   ├── main.bicep
│           │   └── main.parameters.json
│           ├── pyproject.toml
│           ├── requirements.txt
│           └── src/
│               ├── __init__.py
│               ├── flask_app.py
│               ├── static/
│               │   ├── app.js
│               │   └── style.css
│               └── templates/
│                   └── index.html
├── README.md
├── _build.yml
├── _config.yml
├── downloads/
│   └── python/
│       └── readme.md
└── index.md

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/voice-live-web-files.yml
================================================
name: Zip Voice Live Web Files
on:
  workflow_dispatch:
  push:
    branches:
      - 'main'

    paths:
      - 'Labfiles/11-voice-live-agent/python/**'

permissions:
  contents: write

defaults:
  run:
    shell: bash

jobs:
  create_zip:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v4
    - name: Create Voice Live zip
      run: |
        rm -f ./downloads/python/voice-live-web.zip
        cd ./Labfiles/11-voice-live-agent/python/
        zip -r -q ../../../downloads/python/voice-live-web.zip .
    - name: Commit and push
      uses: EndBug/add-and-commit@v9  # Updated to latest version
      with:
        add: 'downloads/python/voice-live-web.zip'
        message: 'Updating Zip for python source files'
        push: true



================================================
FILE: .gitignore
================================================
bin
obj
*.sln


================================================
FILE: Instructions/Exercises/01-analyze-text.md
================================================
---
lab:
    title: 'Analyze text'
    description: "Use Azure Language in Foundry Tools to analyze text."
    level: 300
    duration: 30
    islab: true
---

# Analyze Text

**Azure Language in Foundry Tools** supports analysis of text, including language detection, entity recognition, and PII redaction.

For example, suppose a travel agency wants to process hotel reviews that have been submitted to the company's web site. By using Azure Language, they can determine the language each review is written in; identify named entities, such as places, landmarks, or people mentioned in the reviews; and redact any personally identifiable information before the reviews are published on the company's website. In this exercise, you'll use the Azure Language Python SDK for text analytics to implement a simple hotel review application.

The code used in this exercise is based on the Microsoft Foundry Tools SDK for Python. You can develop similar solutions using the SDKs for Microsoft .NET, JavaScript, and Java. Refer to [Microsoft Foundry SDK client libraries](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/sdk-overview) for details.

This exercise takes approximately **30** minutes.

> **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors.

## Prerequisites

Before starting this exercise, ensure you have:

- An active [Azure subscription](https://azure.microsoft.com/pricing/purchase-options/azure-account)
- [Visual Studio Code](https://code.visualstudio.com/) installed
- [Python version **3.13.xx**](https://www.python.org/downloads/release/python-31312/) installed\*
- [Git](https://git-scm.com/install/) installed and configured
- [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) installed

> \* Python 3.14 is available, but some dependencies are not yet compiled for that release. The lab has been successfully tested with Python 3.13.12.

## Create a Microsoft Foundry project

Microsoft Foundry uses projects to organize models, resources, data, and other assets used to develop an AI solution.

1. In a web browser, open the [Microsoft Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the Foundry logo at the top left to navigate to the home page.

1. If it is not already enabled, on the toolbar at the top of the page, enable the **New Foundry** option. Then, if prompted, create a new project with a unique name, expanding the **Advanced options** area to specify the following settings for your project:
    - **Foundry resource**: *Use the default name for your resource (usually {project_name}-resource)*
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Create or select a resource group*
    - **Region**: Select any available region

1. Select **Create**. Wait for your project to be created.
1. On the home page for your project, note that the API key, project endpoint, and OpenAI endpoint are displayed here.

    > **TIP**: You're going to need the project endpoint later!

## Get the application files from GitHub

The initial application files you'll need to develop the review analysis application are provided in a GitHub repo.

1. Open Visual Studio Code.
1. Open the command palette (*Ctrl+Shift+P*) and use the `Git: Clone` command to clone the `https://github.com/microsoftlearning/mslearn-ai-language` repo to a local folder (it doesn't matter which one). Then open it.

    You may be prompted to confirm you trust the authors.

1. After the repo has been cloned, in the Explorer pane, navigate to the folder containing the application code files at **/Labfiles/01-analyze-text/Python/text-analysis**. The application files include:
    - **reviews** (a subfolder containing the review documents)
    - **.env** (the application configuration file)
    - **requirements.txt** (the Python package dependencies that need to be installed)
    - **text-analysis.py** (the code file for the application)

## Configure your application

1. In Visual Studio Code, view the **Extensions** pane; and if it is not already installed, install the **Python** extension.
1. In the **Command Palette**, use the command `Python: Select Interpreter`. Then select an existing environment if you have one, or create a new **Venv** environment based on your Python 3.13.x installation.

    > **Tip**: If you are prompted to install dependencies, you can install the ones in the *requirements.txt* file in the */Labfiles/01-analyze-text/Python/text-analysis* folder; but it's OK if you don't - we'll install them later!

    > **Tip**: If you prefer to use the terminal, you can create your **Venv** environment with `python -m venv labenv`, then activate it with `labenv\Scripts\activate`.

1. In the **Explorer** pane, right-click the **text-analysis** folder containing the application files, and select **Open in integrated terminal** (or open a terminal in the **Terminal** menu and navigate to the */Labfiles/01-analyze-text/Python/text-analysis* folder.)

    > **Note**: Opening the terminal in Visual Studio Code will automatically activate the Python environment. You may need to enable running scripts on your system.

1. Ensure that the terminal is open in the **text-analysis** folder with the prefix **(.venv)** to indicate that the Python environment you created is active.
1. Install the Azure Language Text Analytics SDK and other required packages by running the following command:

    ```
    pip install -r requirements.txt
    ```

1. In the **Explorer** pane, in the **text-analysis** folder, select the **.env** file to open it. Then update the configuration values to include the **endpoint** (up to the *.com* domain) for your Foundry project (copy these from the Foundry portal).

    > **Important**: Modify the pasted endpoint to remove the "/api/projects/{project_name}" suffix - the endpoint should be *https://{your-foundry-resource-name}.services.ai.azure.com*.

    Save the modified configuration file.
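
    The provided code loads this value for you; a typical approach with the *python-dotenv* package looks like the sketch below (the variable name here is a hypothetical illustration - check the **.env** file for the exact key):

    ```python
    # Minimal sketch: load settings from the .env file into the application.
    # "AI_SERVICE_ENDPOINT" is a hypothetical key name for illustration.
    from dotenv import load_dotenv
    import os

    load_dotenv()
    foundry_endpoint = os.getenv("AI_SERVICE_ENDPOINT")
    print(foundry_endpoint)
    ```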

## Add code to connect to your Azure AI Language resource

1. In the **Explorer** pane, in the **text-analysis** folder, open the **text-analysis.py** file.
1. Review the existing code. You will add code to work with the Azure Language Text Analytics SDK.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need to use the Text Analytics SDK:

    ```python
   # import namespaces
   from azure.identity import DefaultAzureCredential
   from azure.ai.textanalytics import TextAnalyticsClient
    ```

1. In the **main** function, note that code to load the endpoint from the configuration file has already been provided. Then find the comment **Create client using endpoint**, and add the following code to create a client for the Text Analysis API:

    ```Python
   # Create client using endpoint
   credential = DefaultAzureCredential()
   ai_client = TextAnalyticsClient(endpoint=foundry_endpoint, credential=credential)
    ```

1. Save the changes to the code file. Then, in the terminal pane, use the following command to sign into Azure.

    ```powershell
    az login
    ```

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details.

1. When prompted, follow the instructions to sign into Azure. Then complete the sign in process in the command line, viewing (and confirming if necessary) the details of the subscription containing your Foundry resource.
1. After you have signed in, enter the following command to run the application:

    ```
   python text-analysis.py
    ```

1. Observe the output. The code should run without error, displaying the contents of each review text file in the **reviews** folder. The application successfully creates a client for the Text Analytics API but doesn't yet make use of it. We'll fix that in the next section.

## Add code to detect language

Now that you have created a client for the API, let's use it to detect the language in which each review is written.

1. In the code editor, find the comment **Get language**. Then add the code necessary to detect the language in each review document:

    ```python
   # Get language
   detectedLanguage = ai_client.detect_language(documents=[text])[0]
   print('\nLanguage: {}'.format(detectedLanguage.primary_language.name))
    ```

     > **Note**: *In this example, each review is analyzed individually, resulting in a separate call to the service for each file. An alternative approach is to create a collection of documents and pass them to the service in a single call. In both approaches, the response from the service consists of a collection of documents; which is why in the Python code above, the index of the first (and only) document in the response ([0]) is specified.*
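
    For reference, here's a minimal sketch of that batched approach (not a lab step; the sample documents are made up):

    ```python
    # Batched alternative: pass several documents in one call and iterate over
    # the results. Each result corresponds to the document at the same index.
    results = ai_client.detect_language(documents=["Hello world", "Bonjour tout le monde"])
    for doc in results:
        if not doc.is_error:
            print(doc.primary_language.name)
    ```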

1. Save your changes. Then re-run the program.
1. Observe the output, noting that this time the language for each review is identified.

## Add code to extract entities

Often, documents or other bodies of text mention people, places, time periods, or other entities. The Text Analytics API can detect multiple categories (and subcategories) of entities in your text.

1. In the code editor, find the comment **Get entities**. Then, add the code necessary to identify entities that are mentioned in each review:

    ```python
   # Get entities
   entities = ai_client.recognize_entities(documents=[text])[0].entities
   if len(entities) > 0:
        print("\nEntities")
        for entity in entities:
            print('\t{} ({})'.format(entity.text, entity.category))
    ```
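
    If you want to dig deeper, each entity also exposes a subcategory (where one exists) and a confidence score. An optional sketch, not a lab step:

    ```python
    # Optional: filter out low-confidence entities and show subcategories.
    for entity in entities:
        if entity.confidence_score >= 0.8:
            print('\t{} ({}/{}) score: {:.2f}'.format(
                entity.text, entity.category, entity.subcategory, entity.confidence_score))
    ```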

1. Save your changes and re-run the program.
1. Observe the output, noting the entities that have been detected in the text.

## Add code to redact PII

Often, privacy policies or legislation require that personally identifiable information (PII), such as names, addresses, phone numbers, and other private details, be redacted from documents.

1. In the code editor, find the comment **Get PII**. Then, add the code necessary to identify PII entities that are mentioned in each review:

    ```python
   # Get PII
   pii_result = ai_client.recognize_pii_entities(documents=[text])[0]
   pii_entities = pii_result.entities
   if len(pii_entities) > 0:
        print("\nPII Entities")
        for pii_entity in pii_entities:
            print('\t{} ({})'.format(pii_entity.text, pii_entity.category)) 
        print("Redacted Text:\n {}".format(pii_result.redacted_text))
    ```
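
    The PII detection API can also be limited to specific categories of information. A minimal sketch (not a lab step), assuming you only care about person names:

    ```python
    # Optional: restrict PII detection to person names only.
    from azure.ai.textanalytics import PiiEntityCategory

    person_result = ai_client.recognize_pii_entities(
        documents=[text],
        categories_filter=[PiiEntityCategory.PERSON]
    )[0]
    print(person_result.redacted_text)
    ```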

1. Save your changes and re-run the program.
1. Observe the output, noting the PII entities that are identified, and reviewing the redacted version of each document that is produced.

## Clean up

If you've finished exploring Azure Language in Foundry Tools, you should delete the resources you have created in this exercise to avoid incurring unnecessary Azure costs.

1. Open the [Azure portal](https://portal.azure.com) and view the contents of the resource group where you deployed the resources used in this exercise.
1. On the toolbar, select **Delete resource group**.
1. Enter the resource group name and confirm that you want to delete it.


================================================
FILE: Instructions/Exercises/02-language-agent.md
================================================
---
lab:
    title: 'Develop a text analysis agent'
    description: 'Use Azure Language in Foundry Tools to add text analysis capabilities to an AI agent.'
    duration: 30
    level: 300
    islab: true
---

# Develop a text analysis agent

**Azure Language in Foundry Tools** supports analysis of text, including language detection, entity recognition, and PII redaction.

You can use the service directly in an application through its REST API and several language-specific SDKs. You can also use the **Azure Language in Foundry Tools MCP server** to integrate its capabilities into an AI agent, which is what you'll do in this exercise.

The code used in this exercise is based on the Microsoft Foundry Tools SDK for Python. You can develop similar solutions using the SDKs for Microsoft .NET, JavaScript, and Java. Refer to [Microsoft Foundry SDK client libraries](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/sdk-overview) for details.

This exercise takes approximately **30** minutes.

> **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors.

## Prerequisites

Before starting this exercise, ensure you have:

- An active [Azure subscription](https://azure.microsoft.com/pricing/purchase-options/azure-account)
- [Visual Studio Code](https://code.visualstudio.com/) installed
- [Python version **3.13.xx**](https://www.python.org/downloads/release/python-31312/) installed\*
- [Git](https://git-scm.com/install/) installed and configured
- [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) installed

> \* Python 3.14 is available, but some dependencies are not yet compiled for that release. The lab has been successfully tested with Python 3.13.12.

## Create a Microsoft Foundry project

Microsoft Foundry uses projects to organize models, resources, data, and other assets used to develop an AI solution.

1. In a web browser, open the [Microsoft Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the Foundry logo at the top left to navigate to the home page.

1. If it is not already enabled, on the toolbar at the top of the page, enable the **New Foundry** option. Then, if prompted, create a new project with a unique name, expanding the **Advanced options** area to specify the following settings for your project:
    - **Foundry resource**: *Use the default name for your resource (usually {project_name}-resource)*
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Create or select a resource group*
    - **Region**: Select any available region

    > **TIP**: Remember (or make a note of) the Foundry resource name - you're going to need it later!

1. Select **Create**. Wait for your project to be created.
1. On the home page for your project, note that the API key, project endpoint, and OpenAI endpoint are displayed here.

    > **TIP**: Copy the API key to the clipboard - you're going to need it later!

## Create an agent

Now that you have a Foundry project, you can create an agent.

1. Now you're ready to **Start building**. Select **Create agents** (or on the **Build** page, select the **Agents** tab), and create a new agent named `Text-Analysis-Agent`.

    When ready, your agent opens in the agent playground.

1. In the model drop-down list, ensure that a **gpt-4.1** model has been deployed and selected for your agent.
1. Assign your agent the following **Instructions**:

    ```
   You are an AI agent that assists users by helping them analyze text.
    ```

1. Use the **Save** button to save the changes.
1. Test the agent by entering the following prompt in the **Chat** pane:

    ```
   What can you help me with?
    ```

    The agent should respond with an appropriate answer based on its instructions.

## Create an Azure Language in Foundry Tools connection

Foundry includes an MCP server for Azure Language in Foundry Tools, which you can connect to your project and use in your agent.

1. In the navigation pane on the left, select the **Tools** page.
1. On the **Tools** tab, connect a tool; select **Azure Language in Foundry Tools** in the **Catalog** and connect it to an endpoint, specifying the following configuration:
    - **Name**: A unique name for your tool
    - **Remote MCP Server endpoint**: `https://{foundry-resource-name}.cognitiveservices.azure.com/language/mcp?api-version=2025-11-15-preview`
    - **Parameters**: foundry-resource-name: *Your Foundry resource name*
    - **Authentication**: Key-based
    - **Credential**:
        - `Ocp-Apim-Subscription-Key`: *API Key for your Foundry project*

    > **Note**: If key-based authentication is disabled by a policy in your Azure subscription, you can use Entra ID authentication to connect the agent to the Azure Language service.

1. Wait for the MCP tool connection to be created, and then view its details page.
1. On the details page for the Azure Language in Foundry Tools connection, select **Use in an agent**, and then select the **Text-Analysis-Agent** agent you created previously.

    The agent should open in the playground, with the Azure Language in Foundry Tools tool connected.

## Test the Azure Language tool in the playground

Now let's test the agent's ability to use the tool you connected.

1. In the agent playground for the **Text-Analysis-Agent** agent, modify the instructions as follows:

    ```
   You are an AI agent that assists users by helping them analyze text. Use the Azure Language tool to perform text analysis tasks.
    ```

1. Use the **Save** button to save the changes.
1. Test the agent by entering the following prompt in the **Chat** pane:

    ```
    Identify the PII entities in this article, and generate a redacted version:

    Microsoft was founded on April 4, 1975, by childhood friends Bill Gates (then 19) and Paul Allen (22) after they were inspired by the Altair 8800, one of the first personal computers, featured on the cover of Popular Electronics. They contacted the Altair’s maker, MITS, and successfully developed a version of the BASIC programming language, despite initially not owning the machine themselves. The pair formed a partnership called “Micro‑Soft” in Albuquerque, New Mexico, close to MITS’s headquarters, with the goal of writing software for emerging microcomputers.
    
    In the late 1970s, Microsoft grew by supplying programming languages to multiple hardware vendors, then relocated to the Seattle area in 1979. A pivotal moment came in 1980 when Microsoft partnered with IBM to provide an operating system for the IBM PC, leading to MS‑DOS and establishing the company’s dominance in personal computing. Gates guided the company’s long-term strategy as CEO, while Allen contributed key technical vision in its early years, setting Microsoft on a path that would reshape the software industry.
    ```

1. When prompted, approve use of the Azure Language tool by selecting **Always approve all Azure Language in Foundry Tools tools** (you may need to do this twice, because the prompt asks for two distinct text analysis tasks).
1. Review the response, which should identify any personally identifiable information in the article about the founding of Microsoft, and create a version of the article with this information redacted.
1. Review the **Logs** for the chat and verify that the Azure Language tool was used by the agent to process the prompt.

## Configure tool approval

As you've seen in the playground, to use the tool, the agent needs approval.

1. In the playground, in the list of **Tools** under the **Instructions**, in the menu for the Azure language tool you added, select **Configure**.
1. Ensure that the **Approval setting for tools in this MCP server for this agent** setting is **Always auto-approve all tools** (if not, change it).
1. Save any changes to the agent.

## Create a client application

Now that you have a working agent, you can create a client application that uses it.

### Get the application files from GitHub

1. Open Visual Studio Code.
1. Open the command palette (*Ctrl+Shift+P*) and use the `Git: Clone` command to clone the `https://github.com/microsoftlearning/mslearn-ai-language` repo to a local folder (it doesn't matter which one). Then open it.

    You may be prompted to confirm you trust the authors.

1. After the repo has been cloned, in the Explorer pane, navigate to the folder containing the application code files at **/Labfiles/02-language-agent/Python/text-agent**. The application files include:
    - **.env** (the application configuration file)
    - **requirements.txt** (the Python package dependencies that need to be installed)
    - **text-agent.py** (the code file for the application)

### Configure the application

1. In Visual Studio Code, view the **Extensions** pane; and if it is not already installed, install the **Python** extension.
1. In the **Command Palette**, use the command `Python: Select Interpreter`. Then select an existing environment if you have one, or create a new **Venv** environment based on your Python 3.13.x installation.

    > **Tip**: If you are prompted to install dependencies, you can install the ones in the *requirements.txt* file in the */Labfiles/02-language-agent/Python/text-agent* folder; but it's OK if you don't - we'll install them later!

    > **Tip**: If you prefer to use the terminal, you can create your **Venv** environment with `python -m venv labenv`, then activate it with `labenv\Scripts\activate`.

1. In the **Explorer** pane, right-click the **text-agent** folder containing the application files, and select **Open in integrated terminal** (or open a terminal in the **Terminal** menu and navigate to the */Labfiles/02-language-agent/Python/text-agent* folder.)

    > **Note**: Opening the terminal in Visual Studio Code will automatically activate the Python environment. You may need to enable running scripts on your system.

1. Ensure that the terminal is open in the **text-agent** folder with the prefix **(.venv)** to indicate that the Python environment you created is active.
1. Install the Foundry SDK package, the Azure Identity package, and other required packages by running the following command:

    ```
    pip install -r requirements.txt
    ```

1. In the **Explorer** pane, in the **text-agent** folder, select the **.env** file to open it. Then update the configuration values to include your project **endpoint** (from the project home page in Foundry Portal) and the name of your agent (which should be **Text-Analysis-Agent** - note that this name is case-sensitive).
1. Save the modified configuration file.

### Implement application code

1. In the **Explorer** pane, in the **text-agent** folder, open the **text-agent.py** file.
1. Review the existing code. You will add code to submit prompts to your agent.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need:

    ```python
   # import namespaces
   from azure.identity import DefaultAzureCredential
   from azure.ai.projects import AIProjectClient
    ```

1. In the **main** function, note that code to load the endpoint from the configuration file has already been provided. Then find the comment **Get project client**, and add the following code to create a client for your Foundry project:

    ```python
   # Get project client
   project_client = AIProjectClient(
        endpoint=foundry_endpoint,
        credential=DefaultAzureCredential(),
   )
    ```

1. Find the comment **Get an OpenAI client**, and add the following code to get an OpenAI client with which to call your agent.

    ```python
   # Get an OpenAI client
   openai_client = project_client.get_openai_client()
    ```

1. Find the comment **Use the agent to get a response**, and add the following code to submit a user prompt to your agent, and display the response.

    ```python
   # Use the agent to get a response
   prompt = input("User prompt: ")
   response = openai_client.responses.create(
        input=[{"role": "user", "content": prompt}],
        extra_body={"agent_reference": {"name": agent_name, "type": "agent_reference"}},
   )

   print(f"{agent_name}: {response.output_text}")
    ```
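
    If you later want a multi-turn conversation with the agent, the Responses API supports chaining calls. A sketch (assuming chaining works the same way when an agent reference is supplied):

    ```python
    # Hypothetical follow-up turn: pass the previous response ID to keep context.
    follow_up = openai_client.responses.create(
        input=[{"role": "user", "content": "Now redact any PII in that text."}],
        previous_response_id=response.id,
        extra_body={"agent_reference": {"name": agent_name, "type": "agent_reference"}},
    )
    print(f"{agent_name}: {follow_up.output_text}")
    ```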

1. Save the changes you made to the code file.

## Test the client application

Now let's test the application by running it in a Python environment and authenticating the connection to your project.

1. In the Visual Studio Code terminal, enter the following command to sign into Azure.

    ```powershell
    az login
    ```

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details.

1. When prompted, follow the instructions to sign into Azure. Then complete the sign in process in the command line, viewing (and confirming if necessary) the details of the subscription containing your Foundry resource.
1. After you have signed in, enter the following command to run the application:

    ```powershell
    python text-agent.py
    ```

1. When prompted, enter the following prompt:

    ```
    Extract named entities from the following text: "Pierre and I went to Paris on July 14th."
    ```

1. Review the response, which should identify named people, places, and dates.

## View tool details

The Azure Language in Foundry Tools tool provides a wide range of functionality, and the agent must select the appropriate function to call. We can see the options available in the agent's response.

1. In the **text-agent.py** code file, add the following line immediately after the *print(f"{agent_name}: {response.output_text}")* line you added previously (before the *except Exception as ex:* line):

    ```python
    print(f"\nResponse Details: {response.model_dump_json(indent=2)}")
    ```
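
    If the full JSON dump is too noisy, you can pull out just the tool calls. A sketch (the item type and attribute names here are assumptions based on the Responses API's MCP tool-call items):

    ```python
    # Hypothetical: list only the MCP tool calls the agent made.
    for item in response.output:
        if item.type == "mcp_call":
            print(f"Tool called: {item.name}")
    ```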

1. Save the changes to the code file.
1. In the terminal, re-enter the command to run the application (`python text-agent.py`).
1. When prompted, enter the following prompt:

    ```
    Tell me what entities and dates are mentioned in this review, and whether it is positive or negative: "I booked my flight to Paris in July with Margie's Travel, and it was fantastic!"
    ```

1. Review the response (you may need to scroll quite far up to see it), which should identify entities and dates, and determine the sentiment of the text.
1. Review the JSON response details, which indicate each of the tools available to the agent. In this case, it should have used the **extract_named_entities_from_text** and **detect_sentiment_from_text** tools within Azure Language in Foundry Tools.

## Clean up resources

If you're finished exploring the Azure Language service, you can delete the resources you created in this exercise. Here's how:

1. In the Azure portal, browse to the Foundry resource you created in this lab.
1. On the resource page, select **Delete** and follow the instructions to delete the resource.


================================================
FILE: Instructions/Exercises/03-gen-ai-speech.md
================================================
---
lab:
    title: 'Use speech-capable generative AI models'
    description: Implement speech functionality using generative AI.
    duration: 30
    level: 300
    islab: true
---

# Use speech-capable generative AI models

Increasingly, generative AI model capabilities are evolving beyond text-based language completion to support content in other formats - including audible speech.

In this exercise, you'll use generative AI models to support two common scenarios:

- Speech synthesis (text-to-speech) - generating speech output.
- Speech recognition (speech-to-text) - transcribing speech input.

While this exercise is based on Python, you can develop generative AI speech applications using multiple language-specific SDKs; including:

- [OpenAI SDK for Python](https://pypi.org/project/openai/)
- [OpenAI SDK for .NET](<https://www.nuget.org/packages/OpenAI>)
- [OpenAI SDK for JavaScript](https://www.npmjs.com/package/openai)

This exercise takes approximately **30** minutes.

> **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors.

## Prerequisites

Before starting this exercise, ensure you have:

- An active [Azure subscription](https://azure.microsoft.com/pricing/purchase-options/azure-account)
- [Visual Studio Code](https://code.visualstudio.com/) installed
- [Python version **3.13.xx**](https://www.python.org/downloads/release/python-31312/) installed\*
- [Git](https://git-scm.com/install/) installed and configured
- [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) installed

> \* Python 3.14 is available, but some dependencies are not yet compiled for that release. The lab has been successfully tested with Python 3.13.12.

## Create a Microsoft Foundry project

Microsoft Foundry uses projects to organize models, resources, data, and other assets used to develop an AI solution.

1. In a web browser, open the [Microsoft Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the Foundry logo at the top left to navigate to the home page.

1. If it is not already enabled, on the toolbar at the top of the page, enable the **New Foundry** option. Then, if prompted, create a new project with a unique name, expanding the **Advanced options** area to specify the following settings for your project:
    - **Foundry resource**: *Use the default name for your resource (usually {project_name}-resource)*
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Create or select a resource group*
    - **Region**: Select *East US 2* (for this exercise, some models are only available in this location)

1. Select **Create**. Wait for your project to be created. Then view its home page.

## Deploy models

To develop speech-enabled apps, we're going to need speech-enabled models. Specifically, we need a model that can generate speech, and a model that can process speech input.

### Deploy a speech-generation model

1. Now you're ready to **Start building**. Select **Find models** (or on the **Discover** page, select the **Models** tab) to view the Microsoft Foundry model catalog.
1. In the model catalog, search for `gpt-4o-mini-tts`.
1. Review the model card, and then deploy it using the default settings.
1. When the model has been deployed, view its details, noting that the **Target URI** and **Key** required to use it are available here (you'll need the Target URI later).

### Deploy a speech-recognition model

1. In the Foundry portal menu bar, select **Build**; and then view the **Models** page. Note that the *gpt-4o-mini-tts* model you deployed is listed.
1. Select **Deploy a base model**, and search the catalog for `gpt-4o-mini-transcribe`.
1. Deploy a *gpt-4o-mini-transcribe* model using the default settings.
1. Return to the **Models** page and verify that both of the models you deployed are listed.
1. Select either of the models to view the Target URI you need to use in your code.

## Get the application files from GitHub

The initial application files you'll need to develop speech applications are provided in a GitHub repo.

1. Open Visual Studio Code.
1. Open the command palette (*Ctrl+Shift+P*) and use the `Git: Clone` command to clone the `https://github.com/microsoftlearning/mslearn-ai-language` repo to a local folder (it doesn't matter which one). Then open it.

    You may be prompted to confirm you trust the authors.

1. In Visual Studio Code, view the **Extensions** pane; and if it is not already installed, install the **Python** extension.
1. In the **Command Palette**, use the command `Python: Select Interpreter`. Then select an existing environment if you have one, or create a new **Venv** environment based on your Python 3.13.x installation.

    > **Tip**: If you are prompted to install dependencies, you can install the ones in the *requirements.txt* file in the */Labfiles/03-gen-ai-speech/Python/generate-speech* folder; but it's OK if you don't - we'll install them later!

    > **Tip**: If you prefer to use the terminal, you can create your **Venv** environment with `python -m venv labenv`, then activate it with `labenv\Scripts\activate`.

## Create a speech-generation app

1. After the repo has been cloned, in the Explorer pane, navigate to the folder containing the application code files at **/Labfiles/03-gen-ai-speech/Python/generate-speech**. The application files include:
    - **.env** (the application configuration file)
    - **requirements.txt** (the Python package dependencies that need to be installed)
    - **generate-speech.py** (the code file for the application)

### Configure your application

1. In the **Explorer** pane, right-click the **generate-speech** folder containing the application files, and select **Open in integrated terminal** (or open a terminal in the **Terminal** menu and navigate to the */Labfiles/03-gen-ai-speech/Python/generate-speech* folder.)

    > **Note**: Opening the terminal in Visual Studio Code will automatically activate the Python environment. You may need to enable running scripts on your system.

1. Ensure that the terminal is open in the **generate-speech** folder with the prefix **(.venv)** to indicate that the Python environment you created is active.
1. Install the OpenAI SDK package and other required packages by running the following command:

    ```
    pip install -r requirements.txt
    ```

1. In the **Explorer** pane, in the **generate-speech** folder, select the **.env** file to open it. Then update the configuration values to include the **Target URI** (endpoint) for your **gpt-4o-mini-tts** model.

    > **Tip**: Copy the Target URI from the model details page in the Foundry portal.

    Save the modified configuration file.

### Write code to use the model for speech-generation

1. In the **Explorer** pane, in the **generate-speech** folder, select the **generate-speech.py** file to open it.
1. Review the existing code. You will add code to use the OpenAI SDK to access your model.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespace you will need to use the OpenAI SDK:

    ```python
   # import namespaces
   from openai import AzureOpenAI
   from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    ```

1. In the **main** function, note that code to load the endpoint from the configuration file has already been provided. Then find the comment **Create the Azure OpenAI client**, and add the following code to create a client for the OpenAI API:

    ```Python
   # Create the Azure OpenAI client
   token_provider = get_bearer_token_provider(                    
        DefaultAzureCredential(), "https://ai.azure.com/.default"
    )

   client = AzureOpenAI(
        azure_endpoint=endpoint,
        azure_ad_token_provider = token_provider,
        api_version="2025-03-01-preview"
   )
    ```

1. Find the comment **Generate speech and save to file**, and add the following code to submit a prompt to the speech-generation model and save the response as a file.

    ```Python
   # Generate speech and save to file
   with client.audio.speech.with_streaming_response.create(
                model=model_deployment,
                voice="alloy",
                input="My voice is my passport!",
                instructions="Speak in a serious tone.",
            ) as response:
        response.stream_to_file(speech_file_path)
    ```

1. Save the changes to the code file.

### Run the application

1. In the terminal pane, use the following command to sign into Azure.

    ```powershell
    az login
    ```

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details.

1. When prompted, follow the instructions to sign into Azure. Then complete the sign in process in the command line, viewing (and confirming if necessary) the details of the subscription containing your Foundry resource.
1. After you have signed in, enter the following command to run the application:

    ```
   python generate-speech.py
    ```

1. Observe the output as the code generates the requested speech and saves it in a file. The code should also play the generated audio file.
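
    Optionally, experiment by changing the voice and delivery instructions in **generate-speech.py** and re-running the app. A sketch of one variation (the *coral* voice name is an assumption - check the model documentation for the supported list):

    ```python
    # Variation: a different voice and tone for the same call.
    with client.audio.speech.with_streaming_response.create(
                model=model_deployment,
                voice="coral",
                input="Thanks for calling - we'll be right with you!",
                instructions="Speak in a cheerful, upbeat tone.",
            ) as response:
        response.stream_to_file(speech_file_path)
    ```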

## Create a speech-transcription app

1. In the Explorer pane, navigate to the folder containing the application code files at **/Labfiles/03-gen-ai-speech/Python/transcribe-speech**. The application files include:
    - **.env** (the application configuration file)
    - **requirements.txt** (the Python package dependencies that need to be installed)
    - **transcribe-speech.py** (the code file for the application)

### Configure your application

1. In the **Explorer** pane, right-click the **transcribe-speech** folder containing the application files, and select **Open in integrated terminal** (or in the existing terminal, navigate to the */Labfiles/03-gen-ai-speech/Python/transcribe-speech* folder.)

    > **Note**: Opening the terminal in Visual Studio Code will automatically activate the Python environment. You may need to enable running scripts on your system.

1. Ensure that the terminal is open in the **transcribe-speech** folder with the prefix **(.venv)** to indicate that the Python environment you created previously is active.
1. Install the OpenAI SDK package and other required packages by running the following command:

    ```
    pip install -r requirements.txt
    ```

    > **Note**: This step isn't actually necessary if you completed the previous part of this exercise, as both apps use the same environment and have the same dependencies - but it won't do any harm!

1. In the **Explorer** pane, in the **transcribe-speech** folder, select the **.env** file to open it. Then update the configuration values to include the **Target URI** (endpoint) for your **gpt-4o-mini-transcribe** model.

    > **Tip**: Copy the Target URI from the model details page in the Foundry portal.

    Save the modified configuration file.

### Write code to use the model for speech-transcription

1. In the **Explorer** pane, in the **transcribe-speech** folder, select the **transcribe-speech.py** file to open it.
1. Review the existing code. You will add code to use the OpenAI SDK to access your model.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespace you will need to use the OpenAI SDK:

    ```python
   # import namespaces
   from openai import AzureOpenAI
   from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    ```

1. In the **main** function, note that code to load the endpoint from the configuration file has already been provided. Then find the comment **Create the Azure OpenAI client**, and add the following code to create a client for the OpenAI API:

    ```Python
   # Create the Azure OpenAI client
   token_provider = get_bearer_token_provider(                    
        DefaultAzureCredential(), "https://ai.azure.com/.default"
    )

   client = AzureOpenAI(
        azure_endpoint=endpoint,
        azure_ad_token_provider = token_provider,
        api_version="2025-03-01-preview"
   )
    ```

1. Find the comment **Call model to transcribe audio file**, and add the following code to submit an audio file to the speech-transcription model and generate a transcript.

    ```Python
   # Call model to transcribe audio file
   audio_file = open(file_path, "rb")
   transcription = client.audio.transcriptions.create(
        model=model_deployment,
        file=audio_file,
        response_format="text"
   )

   print(transcription)
    ```
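
    Optionally, you can use a context manager so the audio file handle is closed as soon as the request completes. A sketch of the equivalent code:

    ```python
    # Equivalent logic, but the file handle is closed automatically.
    with open(file_path, "rb") as audio_file:
        transcription = client.audio.transcriptions.create(
            model=model_deployment,
            file=audio_file,
            response_format="text"
        )
    print(transcription)
    ```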

1. Save the changes to the code file.

### Run the application

1. In the terminal pane, use the following command to sign into Azure.

    ```powershell
    az login
    ```

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details.

1. When prompted, follow the instructions to sign into Azure. Then complete the sign in process in the command line, viewing (and confirming if necessary) the details of the subscription containing your Foundry resource.
1. After you have signed in, enter the following command to run the application:

    ```
   python transcribe-speech.py
    ```

1. Observe the output as the code submits the audio file to the model for transcription and displays the results. The code should also play the audio file.

## Clean up

If you've finished exploring speech-enabled models in Foundry Tools, you should delete the resources you have created in this exercise to avoid incurring unnecessary Azure costs.

1. Open the [Azure portal](https://portal.azure.com) and view the contents of the resource group where you deployed the resources used in this exercise.
1. On the toolbar, select **Delete resource group**.
1. Enter the resource group name and confirm that you want to delete it.


================================================
FILE: Instructions/Exercises/04-azure-speech.md
================================================
---
lab:
    title: 'Recognize and synthesize speech'
    description: Implement speech functionality using Azure Speech in Foundry Tools.
    duration: 30
    level: 300
    islab: true
---

# Recognize and synthesize speech

**Azure Speech in Foundry Tools** is a service that provides speech-related functionality, including:

- A *speech-to-text* API that enables you to implement speech recognition (converting audible spoken words into text).
- A *text-to-speech* API that enables you to implement speech synthesis (converting text into audible speech).

In this exercise, you'll use both of these APIs to implement a voice message assistant.

While this exercise is based on Python, you can develop speech applications using multiple language-specific SDKs; including:

- [Azure Speech SDK for Python](https://pypi.org/project/azure-cognitiveservices-speech/)
- [Azure Speech SDK for .NET](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech)
- [Azure Speech SDK for JavaScript](https://www.npmjs.com/package/microsoft-cognitiveservices-speech-sdk)

This exercise takes approximately **30** minutes.

> **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors.

## Prerequisites

Before starting this exercise, ensure you have:

- An active [Azure subscription](https://azure.microsoft.com/pricing/purchase-options/azure-account)
- [Visual Studio Code](https://code.visualstudio.com/) installed
- [Python version **3.13.xx**](https://www.python.org/downloads/release/python-31312/) installed\*
- [Git](https://git-scm.com/install/) installed and configured
- [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) installed

> \* Python 3.14 is available, but some dependencies are not yet compiled for that release. The lab has been successfully tested with Python 3.13.12.

## Create a Microsoft Foundry project

Microsoft Foundry uses projects to organize models, resources, data, and other assets used to develop an AI solution.

1. In a web browser, open the [Microsoft Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the Foundry logo at the top left to navigate to the home page.

1. If it is not already enabled, on the toolbar at the top of the page, enable the **New Foundry** option. Then, if prompted, create a new project with a unique name, expanding the **Advanced options** area to specify the following settings for your project:
    - **Foundry resource**: *Use the default name for your resource (usually {project_name}-resource)*\*
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Create or select a resource group*
    - **Region**: Select any available region

    > **TIP**: \* Remember the Foundry resource name - you'll need it later!

1. Wait for your project to be created. Then view the home page for your project.

## Get the application files from GitHub

The initial application files you'll need to develop the voice application are provided in a GitHub repo.

1. Open Visual Studio Code.
1. Open the command palette (*Ctrl+Shift+P*) and use the `Git: Clone` command to clone the `https://github.com/microsoftlearning/mslearn-ai-language` repo to a local folder (it doesn't matter which one). Then open it.

    You may be prompted to confirm you trust the authors.

1. After the repo has been cloned, in the Explorer pane, navigate to the folder containing the application code files at **/Labfiles/04-azure-speech/Python/voice-mail**. The application files include:
    - **messages** (a subfolder containing audio recordings of messages)
    - **.env** (the application configuration file)
    - **requirements.txt** (the Python package dependencies that need to be installed)
    - **voice-mail.py** (the code file for the application)

## Configure your application

1. In Visual Studio Code, view the **Extensions** pane; and if it is not already installed, install the **Python** extension.
1. In the **Command Palette**, use the command `Python: Select Interpreter`. Then select an existing environment if you have one, or create a new **Venv** environment based on your Python 3.13.x installation.

    > **Tip**: If you are prompted to install dependencies, you can install the ones in the *requirements.txt* file in the */Labfiles/04-azure-speech/Python/voice-mail* folder; but it's OK if you don't - we'll install them later!

    > **Tip**: If you prefer to use the terminal, you can create your **Venv** environment with `python -m venv labenv`, then activate it with `labenv\Scripts\activate`.

1. In the **Explorer** pane, right-click the **voice-mail** folder containing the application files, and select **Open in integrated terminal** (or open a terminal in the **Terminal** menu and navigate to the */Labfiles/04-azure-speech/Python/voice-mail* folder.)

    > **Note**: Opening the terminal in Visual Studio Code will automatically activate the Python environment. You may need to enable running scripts on your system.

1. Ensure that the terminal is open in the **voice-mail** folder with the prefix **(.venv)** to indicate that the Python environment you created is active.
1. Install the Azure AI Speech SDK package and other required packages by running the following command:

    ```
    pip install -r requirements.txt
    ```

1. In the **Explorer** pane, in the **voice-mail** folder, select the **.env** file to open it. Then update the configuration values to reflect the Cognitive Services **endpoint** for your Foundry resource.

    > **Important**: The endpoint should be *https://{YOUR_FOUNDRY_RESOURCE}.cognitiveservices.azure.com/*. The Foundry Resource name usually takes the form *{project_name}-resource*.

    Save the modified configuration file.

## Add code to synthesize speech

1. In the **Explorer** pane, in the **voice-mail** folder, open the **voice-mail.py** file.
1. Review the existing code. You will add code to work with the Azure Speech SDK.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need to use the Speech SDK:

    ```python
   # import namespaces
   from azure.identity import DefaultAzureCredential
   import azure.cognitiveservices.speech as speech_sdk
    ```

1. In the **main** function, note that code to load the endpoint from the configuration file has already been provided. Then find the comment **Create speech_config using Entra ID authentication**, and add the following code to create a Speech Configuration object:

    ```Python
   # Create speech_config using Entra ID authentication
   credential = DefaultAzureCredential()
   speech_config = speech_sdk.SpeechConfig(    
        token_credential=credential,
        endpoint=foundry_endpoint)
    ```

1. Review the rest of the **main** function, and note that a loop has been implemented that enables the user to choose one of three options:
    1. Record a voice greeting
    1. Transcribe messages
    1. Exit the application

1. Find the **record_greeting** function, which you will implement to record a voice greeting as an audio file.
1. In the **record_greeting** function, find the comment **Synthesize the greeting message to an audio file**, and add the following code to synthesize speech from the text entered by the user and save it as an audio file.

    ```python
   output_file = "greeting.wav"
   audio_config = speech_sdk.audio.AudioOutputConfig(filename=output_file)

   speech_config.speech_synthesis_voice_name = "en-US-Serena:DragonHDLatestNeural"

   speech_synthesizer = speech_sdk.SpeechSynthesizer(
        speech_config=speech_config,
        audio_config=audio_config
   )

   result = speech_synthesizer.speak_text_async(greeting_message).get()

   if result.reason == speech_sdk.ResultReason.SynthesizingAudioCompleted:
        print(f"Greeting recorded and saved to {output_file}")
        speech_synthesizer = None  # Release the synthesizer resources
   else:
        print("Error recording greeting: {}".format(result.reason))
    ```

1. Save the changes to the code file. Then, in the terminal pane, use the following command to sign into Azure.

    ```powershell
    az login
    ```

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details.

1. When prompted, follow the instructions to sign into Azure. Then complete the sign in process in the command line, viewing (and confirming if necessary) the details of the subscription containing your Foundry resource.
1. After you have signed in, enter the following command to run the application:

    ```powershell
   python voice-mail.py
    ```

1. When prompted, enter **1** to record a greeting.
1. Enter a greeting, like `Hi. The person you called is not available right now. Leave a message.`
1. Wait while the speech is synthesized and saved as an audio file.

    You can select the *greeting.wav* file that is generated in the voice-mail folder to play it in Visual Studio Code.
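
    If you want to verify the file programmatically, the standard library's *wave* module can read its header; a minimal sketch (assuming *greeting.wav* is in the current folder):

    ```python
    # Optional check of the generated audio file (standard library only)
    import wave

    with wave.open("greeting.wav", "rb") as wav:
        duration = wav.getnframes() / wav.getframerate()
        print(f"greeting.wav: {duration:.1f} seconds at {wav.getframerate()} Hz")
    ```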

## Add code to recognize speech

1. In the **voice-mail.py** code file, find the **transcribe_messages** function, which you will implement to transcribe each of the voice messages in the **messages** subfolder.

    The function already contains code to loop through the files in the **messages** folder.
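
    For reference, the provided loop looks something like this (an illustrative sketch - the folder and variable names are assumptions, and the actual code is already in the lab file):

    ```python
    # Illustrative sketch of iterating over the message files
    import os

    messages_folder = "messages"
    for file_name in os.listdir(messages_folder):
        if file_name.lower().endswith(".wav"):
            file_path = os.path.join(messages_folder, file_name)
            print(f"\nProcessing {file_name}...")
            # The transcription code you add next uses file_path
    ```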

1. In the **transcribe_messages** function, find the comment **Transcribe the audio file**, and add the following code to transcribe the audio.

    ```python
   # Transcribe the audio file
   audio_config = speech_sdk.audio.AudioConfig(filename=file_path)
   speech_recognizer = speech_sdk.SpeechRecognizer(
        speech_config=speech_config,
        audio_config=audio_config
   )
   result = speech_recognizer.recognize_once_async().get()
   if result.reason == speech_sdk.ResultReason.RecognizedSpeech:
        print(f"Transcription: {result.text}")
   else:
        print("Error transcribing message: {}".format(result.reason))
    ```

1. Save the changes to the code file. Then, in the terminal, enter the following command to run the application:

    ```powershell
   python voice-mail.py
    ```

1. When prompted, enter **2** to transcribe messages.
1. View the transcription for each message.

    Each file is played back automatically, so you can hear the message.

## Clean up

If you've finished exploring Azure Speech in Foundry Tools, you should delete the resources you have created in this exercise to avoid incurring unnecessary Azure costs.

1. Open the [Azure portal](https://portal.azure.com) and view the contents of the resource group where you deployed the resources used in this exercise.
1. On the toolbar, select **Delete resource group**.
1. Enter the resource group name and confirm that you want to delete it.

## More information

For more information about using the **Speech-to-text** and **Text-to-speech** APIs, see the [Speech-to-text documentation](https://learn.microsoft.com/azure/ai-services/speech-service/index-speech-to-text) and [Text-to-speech documentation](https://learn.microsoft.com/azure/ai-services/speech-service/index-text-to-speech).


================================================
FILE: Instructions/Exercises/05-azure-speech-mcp.md
================================================
---
lab:
    title: 'Use Azure Speech in an agent'
    description: Use the Azure Speech in Foundry Tools MCP server to add speech capabilities to an agent.
    duration: 30
    level: 300
    islab: true
---

# Use Azure Speech in an agent

> <font color="red"><b>WARNING</b>:</font> You may experience a bocking error in this lab. The issue is under investigation. We apologize for the inconvenience.

**Azure Speech in Foundry Tools** provides an MCP server that you can use to enable an agent to call its speech recognition and synthesis capabilities.

In this exercise, you'll configure the Azure Speech in Foundry Tools MCP server, and connect it to an agent.

The code used in this exercise is based on the Foundry Tools SDK for Python. You can develop similar solutions using the SDKs for Microsoft .NET, JavaScript, and Java. Refer to [Microsoft Foundry SDK client libraries](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/sdk-overview) for details.

This exercise takes approximately **30** minutes.

> **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors.

## Prerequisites

Before starting this exercise, ensure you have:

- An active [Azure subscription](https://azure.microsoft.com/pricing/purchase-options/azure-account)
- [Visual Studio Code](https://code.visualstudio.com/) installed
- [Python version **3.13.xx**](https://www.python.org/downloads/release/python-31312/) installed\*
- [Git](https://git-scm.com/install/) installed and configured
- [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) installed

> \* Python 3.14 is available, but some dependencies are not yet compiled for that release. The lab has been successfully tested with Python 3.13.12.

## Create an Azure storage account

The Azure Speech MCP server uses an Azure storage account to save generated audio files.

1. Open the [Azure portal](https://portal.azure.com) at `https://portal.azure.com`, and sign in using your Azure credentials.
1. Create a new **Azure storage account** resource with the following settings:
    - **Subscription**: *Your subscription*
    - **Resource group**: *Create or select a resource group*
    - **Storage account name**: *A unique name for your storage account*
    - **Region**: *Any available region*
    - **Preferred storage type**: Azure blob storage or Azure Data Lake Storage Gen2
    - **Primary workload**: Cloud native
    - **Performance**: Standard
    - **Redundancy**: Locally-redundant storage (LRS)
1. When the Azure storage account resource has been created, go to it in the portal.
1. In the left navigation pane for the storage account, expand **Data storage**, and select **Containers**.
1. Add a new container named **files**. This is where your agent will save the audio files it generates.
1. In the context menu (**...**) for the **files** container, select **Generate SAS**, and create a SAS token with the following details:
    - **Signing method**: Account key
    - **Signing key**: Key1
    - **Stored access policy**: None
    - **Permissions**:
        - Read
        - Add
        - Create
        - Write
        - List
    - **Start and expiry date/time**:
        - **Start**: The current date and time
        - **Expiry**: 11:59pm tomorrow
    - **Allowed IP addresses**: *Leave blank*
    - **Allowed protocols**: HTTPS only

    > **IMPORTANT**: Copy the generated SAS token and URL, and store them in a text file for now - you'll need them later!
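
    If you prefer to generate the SAS programmatically, the *azure-storage-blob* Python package offers an equivalent; a minimal sketch (the account name and key are placeholders, and the two-day expiry approximates the settings above):

    ```python
    # Sketch: generate a container SAS (pip install azure-storage-blob)
    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import generate_container_sas, ContainerSasPermissions

    account_name = "<storage-account-name>"   # placeholder
    sas_token = generate_container_sas(
        account_name=account_name,
        container_name="files",
        account_key="<account-key>",          # placeholder (Key1)
        permission=ContainerSasPermissions(
            read=True, add=True, create=True, write=True, list=True),
        start=datetime.now(timezone.utc),
        expiry=datetime.now(timezone.utc) + timedelta(days=2),
    )
    print(f"https://{account_name}.blob.core.windows.net/files?{sas_token}")
    ```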

## Create a Microsoft Foundry project

Microsoft Foundry uses projects to organize models, resources, data, and other assets used to develop an AI solution.

1. In a web browser, open the [Microsoft Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the Foundry logo at the top left to navigate to the home page.

1. If it is not already enabled, in the toolbar at the top of the page, enable the **New Foundry** option. Then, if prompted, create a new project with a unique name; expanding the **Advanced options** area to specify the following settings for your project:
    - **Foundry resource**: *Use the default name for your resource (usually {project_name}-resource)*
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Create or select a resource group*
    - **Region**: Select any available region

    > **TIP**: Remember (or make a note of) the Foundry resource name - you're going to need it later!

1. Wait for your project to be created.
1. On the home page for your project, note that the API key, project endpoint, and OpenAI endpoint are displayed here.

    > **TIP**: Copy the API key to the clipboard - you're going to need it later!

## Create an agent

Now that you have a Foundry project, you can create an agent.

1. Now you're ready to **Start building**. Select **Create agents** (or on the **Build** page, select the **Agents** tab); and create a new agent named `speech-agent`.

     When ready, your agent opens in the agent playground.

1. In the model drop-down list, ensure that a **gpt-4.1** model has been deployed and selected for your agent.
1. Assign your agent the following **Instructions**:

    ```
   You are an AI agent that uses the Azure AI Speech tool to transcribe and generate speech.
    ```

1. Use the **Save** button to save the changes.
1. Test the agent by entering the following prompt in the **Chat** pane:

    ```
   What can you help me with?
    ```

    The agent should respond with an appropriate answer based on its instructions.

## Create an Azure Speech in Foundry Tools connection

Foundry includes an MCP server for Azure Speech in Foundry Tools, which you can connect to your project and use in your agent.

1. In the navigation pane on the left, select the **Tools** page.
1. On the **Tools** tab, connect a tool; selecting **Azure Speech MCP Server** in the **Catalog** and connecting it to an endpoint, specifying the following configuration:
    - **Name**: *A unique name for your tool.*
    - **Remote MCP Server endpoint**: `https://{foundry-resource-name}.cognitiveservices.azure.com/speech/mcp?api-version=2025-11-15-preview`
    - **Parameters**: foundry-resource-name: *Your foundry resource name*
    - **Authentication**: Key-based
    - **Credential**:
        - `Ocp-Apim-Subscription-Key`: *API Key for your Foundry project*

    - **Add key value pair**
        - `X-Blob-Container-Url`: *The SAS URL for your storage container*

    > **Note**: If key-based authentication is disabled by a policy in your Azure subscription, you can use Entra ID authentication to connect the agent to the Azure Speech service.

1. Wait for the MCP tool connection to be created, and then view its details page.
1. On the details page for the Azure Speech in Foundry Tools connection, select **Use in an agent**, and then select the **speech-agent** agent you created previously.

    The agent should open in the playground, with the Azure Speech in Foundry Tools tool connected.

## Test the Azure Speech tool in the playground

Now let's test the agent's ability to use the tool you connected.

1. In the agent playground for the **speech-agent** agent, enter the following prompt:

    ```
    Generate "To be or not to be, that is the question." as speech
    ```

1. When prompted, approve use of the Azure Speech tool by selecting **Always approve all Azure Speech MCP Server tools**.

    > <font color="red"><b>NOTE</b>:</font> You may encounter the error ***HTTP 404 (not found)***. This issue is currently under investigation. If this occurs, the rest of the lab exercise will not work. We apologize for the inconvenience.

1. Review the response, which should include a link to the generated audio file. Then click the link to hear the synthesized speech.
1. Enter the following prompt:

    ```
    Transcribe the file at https://microsoftlearning.github.io/mslearn-ai-language/Labfiles/05-speech-tool/speech_1.wav
    ```

1. If prompted, approve use of the Azure Speech tool by selecting **Always approve all Azure Speech MCP Server tools**.
1. Review the output, which should be a transcription of the audio file.

## Create a client application

Now that you have a working agent, you can create a client application that uses it.

### Get the application files from GitHub

1. Open Visual Studio Code.
1. Open the command palette (*Ctrl+Shift+P*) and use the `Git:clone` command to clone the `https://github.com/microsoftlearning/mslearn-ai-language` repo to a local folder (it doesn't matter which one). Then open it.

    You may be prompted to confirm you trust the authors.

1. After the repo has been cloned, in the Explorer pane, navigate to the folder containing the application code files at **/Labfiles/05-speech-tool/Python/speech-client**. The application files include:
    - **.env** (the application configuration file)
    - **requirements.txt** (the Python package dependencies that need to be installed)
    - **speech-client.py** (the code file for the application)

### Configure the application

1. In Visual Studio Code, view the **Extensions** pane; and if it is not already installed, install the **Python** extension.
1. In the **Command Palette**, use the command `python:select interpreter`. Then select an existing environment if you have one, or create a new **Venv** environment based on your Python 3.13.xx installation.

    > **Tip**: If you are prompted to install dependencies, you can install the ones in the *requirements.txt* file in the */Labfiles/05-speech-tool/Python/speech-client* folder; but it's OK if you don't - we'll install them later!

    > **Tip**: If you prefer to use the terminal, you can create your **Venv** environment with `python -m venv labenv`, then activate it with `.\labenv\Scripts\activate`.

1. In the **Explorer** pane, right-click the **speech-client** folder containing the application files, and select **Open in integrated terminal** (or open a terminal in the **Terminal** menu and navigate to the */Labfiles/05-speech-tool/Python/speech-client* folder.)

    > **Note**: Opening the terminal in Visual Studio Code will automatically activate the Python environment. You may need to enable running scripts on your system.

1. Ensure that the terminal is open in the **speech-client** folder with the prefix **(.venv)** to indicate that the Python environment you created is active.
1. Install the Foundry SDK package, the Azure Identity package, and other required packages by running the following command:

    ```
    pip install -r requirements.txt
    ```

1. In the **Explorer** pane, in the **speech-client** folder, select the **.env** file to open it. Then update the configuration values to include your project **endpoint** (from the project home page in the Foundry portal) and the name of your agent (which should be **speech-agent** - note that this name is case-sensitive).
1. Save the modified configuration file.

### Implement application code

1. In the **Explorer** pane, in the **speech-client** folder, open the **speech-client.py** file.
1. Review the existing code. You will add code to submit prompts to your agent.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need:

    ```python
   # import namespaces
   from azure.identity import DefaultAzureCredential
   from azure.ai.projects import AIProjectClient
    ```

1. In the **main** function, note that code to load the endpoint from the configuration file has already been provided. Then find the comment **Get project client**, and add the following code to create a client for your Foundry project:

    ```python
   # Get project client
   project_client = AIProjectClient(
        endpoint=foundry_endpoint,
        credential=DefaultAzureCredential(),
   )
    ```

1. Find the comment **Get an OpenAI client**, and add the following code to get an OpenAI client with which to call your agent.

    ```python
   # Get an OpenAI client
   openai_client = project_client.get_openai_client()
    ```

1. Find the comment **Use the agent to get a response**, and add the following code to submit a user prompt to your agent, and display the response.

    ```python
   # Use the agent to get a response
   response = openai_client.responses.create(
        input=[{"role": "user", "content": prompt}],
        extra_body={"agent_reference": {"name": agent_name, "type": "agent_reference"}},
   )

   print(f"{agent_name}: {response.output_text}")
    ```

1. Save the changes you made to the code file.

## Test the client application

Now let's test the application by running it in a Python environment and authenticating the connection to your project.

1. In the terminal pane, use the following command to sign into Azure.

    ```powershell
    az login
    ```

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details.

1. When prompted, follow the instructions to sign into Azure. Then complete the sign in process in the command line, viewing (and confirming if necessary) the details of the subscription containing your Foundry resource.
1. After you have signed in, enter the following command to run the application:

    ```powershell
    python speech-client.py
    ```

1. When prompted, enter the following prompt:

    ```
    Synthesize "Better a witty fool, than a foolish wit!" as speech using the voice "en-GB-SoniaNeural".
    ```

1. Review the response, which should include a clickable link to a generated audio file.
1. After checking out the generated audio file, enter the following prompt:

     ```
     Transcribe https://microsoftlearning.github.io/mslearn-ai-language/Labfiles/05-speech-tool/speech_2.wav
     ```

1. Review the response.
1. To exit the program, enter `quit` (or just press ENTER).

## Clean up resources

If you're finished exploring Azure Speech in Foundry Tools, you can delete the resources you created in this exercise. Here's how:

1. In the Azure portal, browse to the Foundry resource you created in this lab.
1. On the resource page, select **Delete** and follow the instructions to delete the resource.


================================================
FILE: Instructions/Exercises/06-voice-live-agent.md
================================================
---
lab:
    title: 'Develop a Voice Live agent'
    description: 'Use Azure Speech Voice Live in Microsoft Foundry Tools to create a conversational agent.'
    level: 300
    duration: 30
    islab: true
---

# Develop a Voice Live agent

Speech-capable AI agents enable users to interact conversationally - using spoken commands and questions that generate vocal responses.

In this exercise, you'll use the Voice Live capability of Azure Speech in Microsoft Foundry Tools to create a real-time voice-based agent.

This exercise takes approximately **30** minutes.

> **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors.

## Prerequisites

Before starting this exercise, ensure you have:

- An active [Azure subscription](https://azure.microsoft.com/pricing/purchase-options/azure-account)
- [Visual Studio Code](https://code.visualstudio.com/) installed
- [Python version **3.13.xx**](https://www.python.org/downloads/release/python-31312/) installed\*
- [Git](https://git-scm.com/install/) installed and configured
- [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) installed

> \* Python 3.14 is available, but some dependencies are not yet compiled for that release. The lab has been successfully tested with Python 3.13.12.

## Create a Microsoft Foundry project

Microsoft Foundry uses projects to organize models, resources, data, and other assets used to develop an AI solution.

1. In a web browser, open [Microsoft Foundry](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the Foundry logo at the top left to navigate to the home page.

1. If it is not already enabled, in the toolbar at the top of the page, enable the **New Foundry** option. Then, if prompted, create a new project with a unique name; expanding the **Advanced options** area to specify the following settings for your project:
    - **Foundry resource**: *Enter a valid name for your AI Foundry resource.*
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Create or select a resource group*
    - **Region**: Select any available region

1. Select **Create**. Wait for your project to be created. Then view its home page.

## Create an agent

Now let's create an agent.

1. Now you're ready to **Start building**. Select **Create agents** (or on the **Build** page, select the **Agents** tab); and create a new agent named `chat-agent`.

     When ready, your agent opens in the agent playground.

1. In the model drop-down list, ensure that a **gpt-4.1** model has been deployed and selected for your agent.
1. Assign your agent the following **Instructions**:

    ```
   You are an AI assistant that helps people find information about AI and related topics. You answer questions concisely and precisely.
    ```

1. Use the **Save** button to save the changes.
1. Test the agent by entering the following prompt in the **Chat** pane:

    ```
   What can you help me with?
    ```

    The agent should respond with an appropriate answer based on its instructions.

## Configure Azure Speech Voice Live

Enabling speech mode for a Foundry agent integrates Azure Speech Voice Live - adding speech capabilities to the agent.

1. In the pane on the left, under the model selection list, enable **Voice mode**.

    If the **Configuration** pane does not open automatically, use the "cog" icon above the chat interface to open it.

1. In the **Configuration** pane, under **Voice Live**, review the default speech input and output configuration. You can try different voices, previewing them until you decide which one to use.
1. Close the **Configuration** pane and use the **Save** button to save the agent.

## Use speech to interact with the agent

Now you're ready to chat with the agent.

1. In the Chat pane, use the **Start session** button to start a conversation with the agent. If prompted, allow access to the system microphone.

    The agent will start a speech session, and listen for your prompt.

1. When the app status is **Listening…**, say something like "*How does speech recognition work?*" and wait for a response.

1. Verify that the app status changes to **Processing…**. The app will process the spoken input.

    >**Tip**: The processing speed may be so fast that you do not actually see the status before it changes back to *Speaking*.

1. When the status changes to **Speaking…**, the app uses text-to-speech to vocalize the response from the model. To see the original prompt and the response as text, select the **cc** button on the bottom of the chat screen.

    >**Tip**: Follow-on prompts are submitted just by speaking - you can even interrupt the agent to keep the interaction focused on what you need done. You can also use the **Stop generation** button in the chat pane to stop a long-running response; note that this button ends the conversation, so you'll need to start a new session to continue using the agent.

1. To continue the conversation, just ask another question, such as "*How does speech synthesis work?*", and review the response.
1. When you have finished chatting with the agent, use the **X** icon to end the session. A transcript of the conversation will be displayed.

## Create a client application

To use your agent in a custom application, you need to write code that uses the Azure Speech Voice Live SDK to initiate and manage a conversation session.

### Get the application files from GitHub

1. Open Visual Studio Code.
1. Open the command palette (*Ctrl+Shift+P*) and use the `Git:clone` command to clone the `https://github.com/microsoftlearning/mslearn-ai-language` repo to a local folder (it doesn't matter which one). Then open it.

    You may be prompted to confirm you trust the authors.

1. After the repo has been cloned, in the Explorer pane, navigate to the folder containing the application code files at **/Labfiles/06-voice-live/Python/chat-client**. The application files include:
    - **.env** (the application configuration file)
    - **requirements.txt** (the Python package dependencies that need to be installed)
    - **chat-client.py** (the code file for the application)

### Configure the application

1. In Visual Studio Code, view the **Extensions** pane; and if it is not already installed, install the **Python** extension.
1. In the **Command Palette**, use the command `python:select interpreter`. Then select an existing environment if you have one, or create a new **Venv** environment based on your Python 3.13.xx installation.

    > **Tip**: If you are prompted to install dependencies, you can install the ones in the *requirements.txt* file in the */Labfiles/06-voice-live/Python/chat-client* folder; but it's OK if you don't - we'll install them later!

    > **Tip**: If you prefer to use the terminal, you can create your **Venv** environment with `python -m venv labenv`, then activate it with `.\labenv\Scripts\activate`.

1. In the **Explorer** pane, right-click the **chat-client** folder containing the application files, and select **Open in integrated terminal** (or open a terminal in the **Terminal** menu and navigate to the */Labfiles/06-voice-live/Python/chat-client* folder.)

    > **Note**: Opening the terminal in Visual Studio Code will automatically activate the Python environment. You may need to enable running scripts on your system.

1. Ensure that the terminal is open in the **chat-client** folder with the prefix **(.venv)** to indicate that the Python environment you created is active.
1. Install the Foundry SDK package, the Azure Identity package, and other required packages by running the following command:

    ```
    pip install --pre -r requirements.txt azure-identity azure-ai-voicelive==1.2.0b4 azure-ai-projects==2.0.0b4
    ```

1. In the **Explorer** pane, in the **chat-client** folder, select the **.env** file to open it. Then update the configuration values to include your Foundry resource **endpoint** (get the project endpoint from the project home page in the Foundry portal, but use only the base URL up to the *.com* domain), your project name, and the name of your agent (which should be **chat-agent** - note that this name is case-sensitive).

    > **Important**: Modify the pasted endpoint to remove the "/api/projects/{project_name}" suffix - the endpoint should be *https://{your-foundry-resource-name}.services.ai.azure.com*.

1. Save the modified configuration file.
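
    If you want to double-check the trimmed endpoint, a quick illustrative snippet (the resource and project names are placeholders):

    ```python
    # Illustrative: derive the base endpoint from a copied project endpoint
    project_endpoint = "https://my-foundry.services.ai.azure.com/api/projects/my-project"
    foundry_endpoint = project_endpoint.split("/api/projects/")[0]
    print(foundry_endpoint)   # https://my-foundry.services.ai.azure.com
    ```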

### Implement application code

1. In the **Explorer** pane, in the **chat-client** folder, open the **chat-client.py** file.
1. Review the existing code. Most of the application scaffolding has been provided - you must implement the key steps required to use the Voice Live SDK to manage a conversation with your agent.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need:

    ```python
   # import namespaces
   from azure.identity.aio import AzureCliCredential
   from azure.ai.voicelive.aio import connect
   from azure.ai.voicelive.models import (
        InputAudioFormat,
        Modality,
        OutputAudioFormat,
        RequestSession,
        ServerEventType,
        AudioNoiseReduction,
        AudioEchoCancellation,
        AzureSemanticVadMultilingual
   ) 
    ```

1. In the **main** function, note that code to load the endpoint from the configuration file has already been provided, as has code to get an authentication credential and to create and run a **VoiceAssistant** object.

    The **VoiceAssistant** class encapsulates the logic to manage the Voice Live conversation.

1. Under the **main** function, find the **VoiceAssistant** class definition.

    The `__init__` function to initialize an object based on the class has already been implemented.

    You must implement the **start** function, which is the core function to establish the conversation session.

1. Find the comment **STEP 1: Connect Azure VoiceLive to the agent**, and add the following code (being careful to indent it one level in under the **try:** statement):

    ```python
   # STEP 1: Connect Azure VoiceLive to the agent
   async with connect(
        endpoint=self.endpoint,
        credential=self.credential,
        api_version="2026-01-01-preview",
        agent_config=self.agent_config
   ) as connection:
        self.connection = connection
    ```

    This step creates a connection to your agent so the Voice Live SDK can establish a conversation with it.

1. Find the comment **STEP 2: Initialize audio processor**, and add the following code (being careful to indent it *another level in* under the step 1 code you just added):

    ```python
   # STEP 2: Initialize audio processor
   self.audio_processor = AudioProcessor(connection)
    ```

    This code attaches an AudioProcessor object based on the class definition further down in the code file. The AudioProcessor is a utility class to manage audio hardware I/O.

1. Find the comment **STEP 3: Configure the session**, and add the following code (being careful to maintain the same indentation as the step 2 code above):

    ```python
   # STEP 3: Configure the session
   await self.setup_session()
    ```

    This code configures the session with the appropriate audio formats, conversational turn-detection semantics, and options to handle echoes and background noise.
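
    For context, the provided **setup_session** function applies a configuration along these lines. This is only an illustrative sketch based on the models imported earlier - the field names and values shown are assumptions, and the scaffold's actual code may differ:

    ```python
    # Illustrative session configuration (a sketch - not the exact provided code)
    session_config = RequestSession(
        modalities=[Modality.TEXT, Modality.AUDIO],
        input_audio_format=InputAudioFormat.PCM16,
        output_audio_format=OutputAudioFormat.PCM16,
        turn_detection=AzureSemanticVadMultilingual(),   # conversational turn detection
        input_audio_noise_reduction=AudioNoiseReduction(type="azure_deep_noise_suppression"),  # assumed value
        input_audio_echo_cancellation=AudioEchoCancellation(),
    )
    await self.connection.session.update(session=session_config)
    ```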

1. Find the comment **STEP 4: Start audio systems**, and add the following code (being careful to maintain the same indentation as the step 3 code above):

    ```python
   # STEP 4: Start audio systems
   self.audio_processor.start_playback()
            
   print("\n✅ Ready! Start speaking...")
   print("Press Ctrl+C to exit\n")
    ```

    This code starts the audio processor so that it monitors the microphone for audio input and plays back audio output.

1. Find the comment **STEP 5: Process events**, and add the following code (being careful to maintain the same indentation as the step 4 code above):

    ```python
   # STEP 5: Process events
   await self.process_events()
    ```

    This code runs the main loop to process events such as speech input, response output, and interruptions.
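
    For context, an event loop of this general shape handles those events. This is an illustrative sketch - the *queue_audio* and *stop_playback* method names are assumptions about the AudioProcessor utility, and the scaffold's actual code may differ:

    ```python
    # Illustrative event-processing loop (a sketch - not the exact provided code)
    async for event in self.connection:
        if event.type == ServerEventType.RESPONSE_AUDIO_DELTA:
            self.audio_processor.queue_audio(event.delta)     # hypothetical method
        elif event.type == ServerEventType.INPUT_AUDIO_BUFFER_SPEECH_STARTED:
            self.audio_processor.stop_playback()              # hypothetical method
    ```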

1. Save the changes to the code file.

    The completed function should look like this:

    ```python
    async def start(self):
        """Start the voice assistant."""
        print("\n" + "=" * 60)
        print(f"🎙️   {self.agent_config['agent_name']}")
        print("=" * 60)

        # Add your code in this try block!
        try:
            # STEP 1: Connect Azure VoiceLive to the agent
            async with connect(
                endpoint=self.endpoint,
                credential=self.credential,
                api_version="2026-01-01-preview",
                agent_config=self.agent_config
            ) as connection:
                self.connection = connection

                # STEP 2: Initialize audio processor
                self.audio_processor = AudioProcessor(connection)

                # STEP 3: Configure the session
                await self.setup_session()

                # STEP 4: Start audio systems
                self.audio_processor.start_playback()

                print("\n✅ Ready! Start speaking...")
                print("Press Ctrl+C to exit\n")

                # STEP 5: Process events
                await self.process_events()

        finally:
            if hasattr(self, 'audio_processor'):
                self.audio_processor.shutdown()
    ```

## Run the application

Now you're ready to run your application, and have a conversation with your agent.

> **TIP**: The application works best when using a headset. When using speakers, there's a risk that the agent can "hear" its own responses and process them as new user input.

1. In the Visual Studio Code terminal, enter the following command to sign into Azure.

    ```powershell
    az login
    ```

    When prompted, sign into Azure using your credentials.

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See *[Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively)* for details.

1. In the Visual Studio Code terminal, confirm the details of your Azure subscription; and then enter the following command to run the client application:

    ```powershell
    python chat-client.py
    ```

1. When prompted, begin a conversation with the agent by asking a question such as "*How is computer speech used in AI?*".
1. Listen to the response and then continue the conversation - note that you can interrupt the agent to ask new questions.
1. When you're finished, press **CTRL+C** to end the conversation and stop the program.

## Clean up

If you have finished exploring Microsoft Foundry, delete any resources that you no longer need. This avoids accruing any unnecessary costs.

1. Open the **Azure portal** at [https://portal.azure.com](https://portal.azure.com) and select the resource group that contains the resources you created.
1. Select **Delete resource group** and then **enter the resource group name** to confirm. The resource group is then deleted.


================================================
FILE: Instructions/Exercises/07-translation.md
================================================
---
lab:
    title: 'Translate text and speech'
    description: Implement translation with Azure Translator and Azure Speech in Foundry Tools.
    duration: 30
    level: 300
    islab: true
---

# Translate text and speech

**Azure Translator in Foundry Tools** is a service that enables you to translate text between languages. Similarly, **Azure Speech in Foundry Tools** provides translation services for speech. In this exercise, you'll use them to create translation apps that translate input in any supported language to the target language of your choice.

While this exercise is based on Python, you can develop text translation applications using multiple language-specific SDKs; including:

- [Azure Translator client library for Python](https://pypi.org/project/azure-ai-translation-text/)
- [Azure Translator client library for .NET](https://www.nuget.org/packages/Azure.AI.Translation.Text)
- [Azure Translator client library for JavaScript](https://www.npmjs.com/package/@azure-rest/ai-translation-text)
- [Azure AI Speech SDK for Python](https://pypi.org/project/azure-cognitiveservices-speech/)
- [Azure AI Speech SDK for .NET](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech)
- [Azure AI Speech SDK for JavaScript](https://www.npmjs.com/package/microsoft-cognitiveservices-speech-sdk)

This exercise takes approximately **30** minutes.

> **Note**: Some of the technologies used in this exercise are in preview or in active development. You may experience some unexpected behavior, warnings, or errors.

## Prerequisites

Before starting this exercise, ensure you have:

- An active [Azure subscription](https://azure.microsoft.com/pricing/purchase-options/azure-account)
- [Visual Studio Code](https://code.visualstudio.com/) installed
- [Python version **3.13.xx**](https://www.python.org/downloads/release/python-31312/) installed\*
- [Git](https://git-scm.com/install/) installed and configured
- [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) installed

> \* Python 3.14 is available, but some dependencies are not yet compiled for that release. The lab has been successfully tested with Python 3.13.12.

## Create a Microsoft Foundry project

Microsoft Foundry uses projects to organize models, resources, data, and other assets used to develop an AI solution.

1. In a web browser, open the [Microsoft Foundry portal](https://ai.azure.com) at `https://ai.azure.com` and sign in using your Azure credentials. Close any tips or quick start panes that are opened the first time you sign in, and if necessary use the Foundry logo at the top left to navigate to the home page.

1. If it is not already enabled, in the toolbar at the top of the page, enable the **New Foundry** option. Then, if prompted, create a new project with a unique name; expanding the **Advanced options** area to specify the following settings for your project:
    - **Foundry resource**: *Use the default name for your resource (usually {project_name}-resource)*\*
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Create or select a resource group*
    - **Region**: Select any available region

    > **TIP**: \* Remember the Foundry resource name - you'll need it later!

1. Wait for your project to be created. Then view the home page for your project.

## Explore Azure Translator in Foundry Tools in the portal

You can use the Azure Translator playground in the Foundry portal to experiment with the service.

1. Now you're ready to **Start building**. Select **Explore playgrounds** (or on the **Build** page, select the **Models** tab) to view the models in your project.
1. In the **Models** page, select the **AI services** tab to view the list of Azure services in Foundry Tools.
1. In the list of tools, select **Azure Translator - Text translation**.
1. In the Text translator playground, in the **Source text** area, enter the text `Hello world!`. Then, in the **Translation** area, select any language and use the **Translate** button to generate the translation.
1. Try a few more languages.
1. Select the **Code** tab to view sample code for using Azure Translator; and note the **ENDPOINT** variable used in the code for the REST API, which should be similar to `https://{foundry-resource-name}.cognitiveservices.azure.com/`.

    This endpoint uses an older Azure AI Services format, but it's still used to connect to Azure Translator in a Foundry resource. You can also use it to connect to Azure Speech tools.

    > **TIP**: You're going to need the endpoint later!

## Get application files from GitHub

The initial application files you'll need to develop the translation application are provided in a GitHub repo.

1. Open Visual Studio Code.
1. Open the command palette (*Ctrl+Shift+P*) and use the `Git:clone` command to clone the `https://github.com/microsoftlearning/mslearn-ai-language` repo to a local folder (it doesn't matter which one). Then open it.

    You may be prompted to confirm you trust the authors.

1. In Visual Studio Code, view the **Extensions** pane; and if it is not already installed, install the **Python** extension.
1. In the **Command Palette**, use the command `python:select interpreter`. Then select an existing environment if you have one, or create a new **Venv** environment based on your Python 3.13.xx installation.

    > **Tip**: If you are prompted to install dependencies, you can install the ones in the *requirements.txt* file in the */Labfiles/07-translation/Python/translators* folder; but it's OK if you don't - we'll install them later!

    > **Tip**: If you prefer to use the terminal, you can create your **Venv** environment with `python -m venv labenv`, then activate it with `.\labenv\Scripts\activate`.

## Create a text translation application

Now you're ready to use Azure Translator to implement text translation.

1. After the repo has been cloned, in the Explorer pane, navigate to the folder containing the application code files at **/Labfiles/07-translation/Python/translators**. The application files include:
    - **.env** (the application configuration file)
    - **requirements.txt** (the Python package dependencies that need to be installed)
    - **translate-text.py** (the code file for the text translation application)
    - **translate-speech.py** (the code file for the speech translation application)

### Configure your text translation application

1. In the **Explorer** pane, in the **translators** folder, select the **.env** file to open it. Then update the configuration values to reflect the Cognitive Services **endpoint** for your Foundry resource.

    > **Important**: The endpoint should be *https://{YOUR_FOUNDRY_RESOURCE}.cognitiveservices.azure.com/*. The Foundry Resource name usually takes the form *{project_name}-resource*.

    Save the modified configuration file.

1. In the **Explorer** pane, right-click the **translators** folder containing the application files, and select **Open in integrated terminal** (or open a terminal in the **Terminal** menu and navigate to the */Labfiles/07-translation/Python/translators* folder.)

    > **Note**: Opening the terminal in Visual Studio Code will automatically activate the Python environment. You may need to enable running scripts on your system.

1. Ensure that the terminal is open in the **translators** folder with the prefix **(.venv)** to indicate that the Python environment you created is active.
1. Install the Azure Translator SDK, Speech SDK, and other required packages by running the following command:

    ```
    pip install -r requirements.txt
    ```

### Add code to translate text

1. In the **Explorer** pane, in the **translators** folder, open the **translate-text.py** file.

1. Review the existing code. You will add code to work with Azure Translator.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need to use the Translator SDK:

    ```python
   # import namespaces
   from azure.identity import DefaultAzureCredential
   from azure.ai.translation.text import *
   from azure.ai.translation.text.models import InputTextItem
    ```

1. In the **main** function, note that the existing code reads the configuration settings.
1. Find the comment **Create client using endpoint and credential** and add the following code:

    ```python
   # Create client using endpoint and credential
   credential = DefaultAzureCredential()
   client = TextTranslationClient(credential=credential, endpoint=foundry_endpoint)
    ```

1. Find the comment **Choose target language** and add the following code, which uses the Text Translator service to return a list of supported languages for translation, and prompts the user to select a language code for the target language:

    ```python
   # Choose target language
   languagesResponse = client.get_supported_languages(scope="translation")
   print("{} languages supported.".format(len(languagesResponse.translation)))
   print("(See https://learn.microsoft.com/azure/ai-services/translator/language-support#translation)")
   print("Enter a target language code for translation (for example, 'en'):")
   targetLanguage = "xx"
   supportedLanguage = False
   while supportedLanguage == False:
        targetLanguage = input()
        if  targetLanguage in languagesResponse.translation.keys():
            supportedLanguage = True
        else:
            print("{} is not a supported language.".format(targetLanguage))
    ```

1. Find the comment **Translate text** and add the following code, which repeatedly prompts the user for text to be translated, uses the Azure AI Translator service to translate it to the target language (detecting the source language automatically), and displays the results until the user enters *quit*:

    ```python
   # Translate text
   inputText = ""
   while inputText.lower() != "quit":
        inputText = input("Enter text to translate ('quit' to exit):")
        if inputText != "quit":
            input_text_elements = [InputTextItem(text=inputText)]
            translationResponse = client.translate(body=input_text_elements, to_language=[targetLanguage])
            translation = translationResponse[0] if translationResponse else None
            if translation:
                sourceLanguage = translation.detected_language
                for translated_text in translation.translations:
                    print(f"'{inputText}' was translated from {sourceLanguage.language} to {translated_text.to} as '{translated_text.text}'.")
    ```

1. Save the changes to the code file. Then, in the terminal pane, use the following command to sign into Azure.

    ```powershell
    az login
    ```

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details.

1. When prompted, follow the instructions to sign into Azure. Then complete the sign in process in the command line, viewing (and confirming if necessary) the details of the subscription containing your Foundry resource.
1. After you have signed in, enter the following command to run the application:

    ```
   python translate-text.py
    ```

1. When prompted, enter a valid target language from the list in the link displayed.
1. Enter a phrase to be translated (for example `This is a test` or `C'est un test`) and view the results, which should detect the source language and translate the text to the target language.
1. When you're done, enter `quit`. You can run the application again and choose a different target language.

## Create a speech translation application

Now you're ready to use Azure Speech to implement speech translation.

### Configure your speech translation application

1. In the **translators** folder, verify that the .env file contains the **endpoint** for your Foundry resource (Azure Speech can use the same information as Azure Translator to connect to your Foundry resource).
1. Ensure that the terminal is open in the **translators** folder with the prefix **(.venv)** to indicate that the Python environment you created is active.
1. If you did not previously install the required packages, enter the following command to do so now:

    ```
    pip install -r requirements.txt
    ```

### Add code to translate speech

1. In the **Explorer** pane, in the **translators** folder, open the **translate-speech.py** file.

1. Review the existing code. You will add code to work with Azure Speech.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespace you will need to use the Speech SDK:

    ```python
   # Import namespaces
   from azure.identity import DefaultAzureCredential
   import azure.cognitiveservices.speech as speech_sdk
    ```

1. In the **main** function, under the comment **Get configuration settings**, note that the code loads the endpoint you defined in the configuration file.

1. Find the comment **Configure translation**, and add the following code to configure your connection to the Foundry endpoint for Azure Speech, and prepare to translate speech in US English to French, Spanish, and Hindi:

    ```python
   # Configure translation
   credential = DefaultAzureCredential()
   translation_cfg = speech_sdk.translation.SpeechTranslationConfig(
            token_credential=credential,
            endpoint=foundry_endpoint
   )
   translation_cfg.speech_recognition_language = 'en-US'
   translation_cfg.add_target_language('fr')
   translation_cfg.add_target_language('es')
   translation_cfg.add_target_language('hi')
   audio_in_cfg = speech_sdk.AudioConfig(use_default_microphone=True)
   translator = speech_sdk.translation.TranslationRecognizer(
        translation_config=translation_cfg,
        audio_config=audio_in_cfg
   )
   print('Ready to translate from', translation_cfg.speech_recognition_language)
    ```

1. You will use the **SpeechTranslationConfig** to translate speech into text, but you will also use a **SpeechConfig** to synthesize translations into speech. Add the following code under the comment **Configure speech for synthesis of translations**:

    ```python
   # Configure speech for synthesis of translations
   speech_cfg = speech_sdk.SpeechConfig(
        token_credential=credential, endpoint=foundry_endpoint)
   audio_out_cfg = speech_sdk.audio.AudioOutputConfig(use_default_speaker=True)
   voices = {
        "fr": "fr-FR-HenriNeural",
        "es": "es-ES-ElviraNeural",
        "hi": "hi-IN-MadhurNeural"
   }
   print('Ready to use speech service.')
    ```

1. Now it's time to add the code to translate the user's speech from the system microphone. Find the comment **Translate user speech**, and add the following code:

    ```python
   # Translate user speech
   print("Speak now...")
   translation_results = translator.recognize_once_async().get()
   print(f"Translating '{translation_results.text}'")
    ```

1. When the results are returned, the application will iterate through the translations, printing the text and playing the synthesized speech through the default system speaker. Find the comment **Print and speak the translation results** and add the following code:

    ```python
   # Print and speak the translation results
   translations = translation_results.translations
   for translation_language in translations:

        print(f"{translation_language}: '{translations[translation_language]}'")

        speech_cfg.speech_synthesis_voice_name = voices.get(translation_language)
        audio_out_cfg = speech_sdk.audio.AudioOutputConfig(use_default_speaker=True)
        speech_synthesizer = speech_sdk.SpeechSynthesizer(speech_cfg, audio_out_cfg)
        speak = speech_synthesizer.speak_text_async(translations[translation_language]).get()
        
        if speak.reason != speech_sdk.ResultReason.SynthesizingAudioCompleted:
            print(speak.reason)
    ```

1. Save the changes to the code file. Then, in the terminal pane, if you are not already signed into Azure (or if your session has expired), use the following command to sign in.

    ```powershell
    az login
    ```

    > **Note**: In most scenarios, just using *az login* will be sufficient. However, if you have subscriptions in multiple tenants, you may need to specify the tenant by using the *--tenant* parameter. See [Sign into Azure interactively using the Azure CLI](https://learn.microsoft.com/cli/azure/authenticate-azure-cli-interactively) for details.

1. When prompted, follow the instructions to sign into Azure. Then complete the sign in process in the command line, viewing (and confirming if necessary) the details of the subscription containing your Foundry resource.
1. After you have signed in, enter the following command to run the application:

    ```
   python translate-speech.py
    ```

1. When prompted, say something aloud (for example, "*Hello!*").

     The program should translate it to the languages specified in the code (French, Spanish, and Hindi), and print and speak the translations.

    > **NOTE**: The translation to Hindi may not always be displayed correctly in the terminal due to character encoding issues.
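
    If Hindi characters display incorrectly, one option is to make Python emit UTF-8 explicitly (this may still depend on the terminal's font support); an illustrative snippet you could add near the top of the script:

    ```python
    # Optional: force UTF-8 console output (Python 3.7+)
    import sys
    sys.stdout.reconfigure(encoding="utf-8")
    ```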

## Clean up resources

If you have finished exploring Microsoft Foundry, delete any resources that you no longer need. This avoids accruing any unnecessary costs.

1. Open the **Azure portal** at [https://portal.azure.com](https://portal.azure.com) and select the resource group that contains the resources you created.
1. Select **Delete resource group** and then **enter the resource group name** to confirm. The resource group is then deleted.


================================================
FILE: Instructions/Labs/01-analyze-text.md
================================================
---
lab:
    title: 'Analyze text (deprecated)'
    description: "Use Azure AI Language to analyze text, including language detection, sentiment analysis, key phrase extraction, and entity recognition."
    islab: false
---

# Analyze Text (deprecated)

> **Note**: This exercise is deprecated. Consider completing the replacement exercise at <https://go.microsoft.com/fwlink/?linkid=2356343>.

**Azure AI Language** supports analysis of text, including language detection, sentiment analysis, key phrase extraction, and entity recognition.

For example, suppose a travel agency wants to process hotel reviews that have been submitted to the company's web site. By using Azure AI Language, they can determine the language each review is written in, the sentiment (positive, neutral, or negative) of the reviews, key phrases that might indicate the main topics discussed in the review, and named entities, such as places, landmarks, or people mentioned in the reviews. In this exercise, you'll use the Azure AI Language Python SDK for text analytics to implement a simple hotel review application based on this example.

While this exercise is based on Python, you can develop text analytics applications using multiple language-specific SDKs; including:

- [Azure AI Text Analytics client library for Python](https://pypi.org/project/azure-ai-textanalytics/)
- [Azure AI Text Analytics client library for .NET](https://www.nuget.org/packages/Azure.AI.TextAnalytics)
- [Azure AI Text Analytics client library for JavaScript](https://www.npmjs.com/package/@azure/ai-text-analytics)

This exercise takes approximately **30** minutes.

## Provision an *Azure AI Language* resource

If you don't already have one in your subscription, you'll need to provision an **Azure AI Language service** resource in your Azure subscription.

1. Open the Azure portal at `https://portal.azure.com`, and sign in using the Microsoft account associated with your Azure subscription.
1. Select **Create a resource**.
1. In the search field, search for **Language service**. Then, in the results, select **Create** under **Language Service**.
1. Select **Continue to create your resource**.
1. Provision the resource using the following settings:
    - **Subscription**: *Your Azure subscription*.
    - **Resource group**: *Choose or create a resource group*.
    - **Region**: *Choose any available region*
    - **Name**: *Enter a unique name*.
    - **Pricing tier**: Select **F0** (*free*), or **S** (*standard*) if F is not available.
    - **Responsible AI Notice**: Agree.
1. Select **Review + create**, then select **Create** to provision the resource.
1. Wait for deployment to complete, and then go to the deployed resource.
1. View the **Keys and Endpoint** page in the **Resource Management** section. You will need the information on this page later in the exercise.

## Clone the repository for this course

You'll develop your code using Cloud Shell from the Azure Portal. The code files for your app have been provided in a GitHub repo.

1. In the Azure Portal, use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.

    > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***.

1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor).

    **<font color="red">Ensure you've switched to the classic version of the cloud shell before continuing.</font>**

1. In the PowerShell pane, enter the following commands to clone the GitHub repo for this exercise:

    ```
    rm -r mslearn-ai-language -f
    git clone https://github.com/microsoftlearning/mslearn-ai-language
    ```

    > **Tip**: As you enter commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the `cls` command to make it easier to focus on each task.

1. After the repo has been cloned, navigate to the folder containing the application code files:  

    ```
    cd mslearn-ai-language/Labfiles/01-analyze-text/Python/text-analysis
    ```

## Configure your application

1. In the command line pane, run the following command to view the code files in the **text-analysis** folder:

    ```
    ls -a -l
    ```

    The files include a configuration file (**.env**) and a code file (**text-analysis.py**). The text your application will analyze is in the **reviews** subfolder.

1. Create a Python virtual environment and install the Azure AI Language Text Analytics SDK package and other required packages by running the following command:

    ```
    python -m venv labenv;
    ./labenv/bin/Activate.ps1;
    pip install -r requirements.txt azure-ai-textanalytics==5.3.0
    ```

1. Enter the following command to edit the application configuration file:

    ```
    code .env
    ```

    The file is opened in a code editor.

1. Update the configuration values to include the **endpoint** and a **key** from the Azure AI Language resource you created (available on the **Keys and Endpoint** page for your Azure AI Language resource in the Azure portal).
1. After you've replaced the placeholders, within the code editor, use the **CTRL+S** command or **Right-click > Save** to save your changes and then use the **CTRL+Q** command or **Right-click > Quit** to close the code editor while keeping the cloud shell command line open.

## Add code to connect to your Azure AI Language resource

1. Enter the following command to edit the application code file:

    ```
    code text-analysis.py
    ```

1. Review the existing code. You will add code to work with the AI Language Text Analytics SDK.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need to use the Text Analytics SDK:

    ```python
    # import namespaces
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.textanalytics import TextAnalyticsClient
    ```

1. In the **main** function, note that code to load the Azure AI Language service endpoint and key from the configuration file has already been provided. Then find the comment **Create client using endpoint and key**, and add the following code to create a client for the Text Analysis API:

    ```python
    # Create client using endpoint and key
    credential = AzureKeyCredential(ai_key)
    ai_client = TextAnalyticsClient(endpoint=ai_endpoint, credential=credential)
    ```

1. Save your changes (CTRL+S), then enter the following command to run the program (you can maximize the cloud shell pane and resize the panels to see more text in the command line pane):

    ```
    python text-analysis.py
    ```

1. Observe the output. The code should run without errors, displaying the contents of each review text file in the **reviews** folder. The application successfully creates a client for the Text Analytics API, but doesn't yet make use of it. We'll fix that in the next section.

## Add code to detect language

Now that you have created a client for the API, let's use it to detect the language in which each review is written.

1. In the code editor, find the comment **Get language**. Then add the code necessary to detect the language in each review document:

    ```python
    # Get language
    detectedLanguage = ai_client.detect_language(documents=[text])[0]
    print('\nLanguage: {}'.format(detectedLanguage.primary_language.name))
    ```

    > **Note**: *In this example, each review is analyzed individually, resulting in a separate call to the service for each file. An alternative approach is to create a collection of documents and pass them to the service in a single call (see the sketch after these steps). In both approaches, the response from the service consists of a collection of documents, which is why in the Python code above, the index of the first (and only) document in the response ([0]) is specified.*

1. Save your changes. Then re-run the program.
1. Observe the output, noting that this time the language for each review is identified.
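
For reference, here's a minimal sketch of the batched approach described in the note above. The sample review strings are hypothetical; each item in the returned collection exposes an `is_error` flag you can check before reading its properties:

```python
# Pass several documents to the service in a single call
reviews = ["Good rooms and friendly staff.",
           "Un hôtel magnifique au centre de Londres."]

results = ai_client.detect_language(documents=reviews)
for review, result in zip(reviews, results):
    # Skip any document the service couldn't process
    if not result.is_error:
        print('{}: {}'.format(result.primary_language.name, review))
```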

## Add code to evaluate sentiment

*Sentiment analysis* is a technique used to classify text as *positive* or *negative* (or possibly *neutral* or *mixed*). It's commonly used to analyze social media posts, product reviews, and other items where the sentiment of the text may provide useful insights.

1. In the code editor, find the comment **Get sentiment**. Then add the code necessary to detect the sentiment of each review document:

    ```python
    # Get sentiment
    sentimentAnalysis = ai_client.analyze_sentiment(documents=[text])[0]
    print("\nSentiment: {}".format(sentimentAnalysis.sentiment))
    ```

1. Save your changes. Then close the code editor and re-run the program.
1. Observe the output, noting that the sentiment of the reviews is detected.
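
If you want to see *how* positive or negative each review is, the result returned by `analyze_sentiment` also includes per-class confidence scores. A minimal optional extension, reusing the `sentimentAnalysis` variable from the step above (not required by the lab):

```python
# Show the confidence score the service assigned to each sentiment class
scores = sentimentAnalysis.confidence_scores
print('Positive: {}, Neutral: {}, Negative: {}'.format(
    scores.positive, scores.neutral, scores.negative))
```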

## Add code to identify key phrases

It can be useful to identify key phrases in a body of text to help determine the main topics that it discusses.

1. In the code editor, find the comment **Get key phrases**. Then add the code necessary to detect the key phrases in each review document:

    ```python
    # Get key phrases
    phrases = ai_client.extract_key_phrases(documents=[text])[0].key_phrases
    if len(phrases) > 0:
        print("\nKey Phrases:")
        for phrase in phrases:
            print('\t{}'.format(phrase))
    ```

1. Save your changes and re-run the program.
1. Observe the output, noting that each document contains key phrases that give some insights into what the review is about.

## Add code to extract entities

Often, documents or other bodies of text mention people, places, time periods, or other entities. The Text Analytics API can detect multiple categories (and subcategories) of entity in your text.

1. In the code editor, find the comment **Get entities**. Then, add the code necessary to identify entities that are mentioned in each review:

    ```python
    # Get entities
    entities = ai_client.recognize_entities(documents=[text])[0].entities
    if len(entities) > 0:
        print("\nEntities")
        for entity in entities:
            print('\t{} ({})'.format(entity.text, entity.category))
    ```

1. Save your changes and re-run the program.
1. Observe the output, noting the entities that have been detected in the text.
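
As noted earlier, recognized entities can also have subcategories, and each comes with a confidence score. A minimal sketch that extends the loop in the step above to show them (the subcategory may be `None` for some entities):

```python
# Include the optional subcategory and the confidence score for each entity
for entity in entities:
    print('\t{} ({}/{}) - confidence: {}'.format(
        entity.text, entity.category, entity.subcategory, entity.confidence_score))
```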

## Add code to extract linked entities

In addition to categorized entities, the Text Analytics API can detect entities for which there are known links to data sources, such as Wikipedia.

1. In the code editor, find the comment **Get linked entities**. Then, add the code necessary to identify linked entities that are mentioned in each review:

    ```python
    # Get linked entities
    entities = ai_client.recognize_linked_entities(documents=[text])[0].entities
    if len(entities) > 0:
        print("\nLinks")
        for linked_entity in entities:
            print('\t{} ({})'.format(linked_entity.name, linked_entity.url))
    ```

1. Save your changes and re-run the program.
1. Observe the output, noting the linked entities that are identified.
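
Each linked entity also identifies its data source (such as Wikipedia) and the specific text matches that produced the link, each with its own confidence score. A minimal optional sketch extending the loop above:

```python
# Show the data source and the matched text spans for each linked entity
for linked_entity in entities:
    print('\t{} ({})'.format(linked_entity.name, linked_entity.data_source))
    for match in linked_entity.matches:
        print('\t\t"{}" - confidence: {}'.format(match.text, match.confidence_score))
```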

## Clean up resources

If you're finished exploring the Azure AI Language service, you can delete the resources you created in this exercise. Here's how:

1. Close the Azure cloud shell pane.
1. In the Azure portal, browse to the Azure AI Language resource you created in this lab.
1. On the resource page, select **Delete** and follow the instructions to delete the resource.

## More information

For more information about using **Azure AI Language**, see the [documentation](https://learn.microsoft.com/azure/ai-services/language-service/).


================================================
FILE: Instructions/Labs/02-qna.md
================================================
---
lab:
    title: 'Create a Question Answering solution (deprecated)'
    description: "Use Azure AI Language to create a custom question answering solution."
    islab: false
---

# Create a Question Answering Solution (deprecated)

> **Note**: This exercise is deprecated. Consider reviewing the QuickStart tutorial at <https://learn.microsoft.com/azure/ai-services/language-service/question-answering/quickstart>.

One of the most common conversational scenarios is providing support through a knowledge base of frequently asked questions (FAQs). Many organizations publish FAQs as documents or web pages, which works well for a small set of question and answer pairs, but large documents can be difficult and time-consuming to search.

**Azure AI Language** includes a *question answering* capability that enables you to create a knowledge base of question and answer pairs that can be queried using natural language input, and is most commonly used as a resource that a bot can use to look up answers to questions submitted by users. In this exercise, you'll use the Azure AI Language Question Answering SDK for Python to implement a simple question answering application.

While this exercise is based on Python, you can develop question answering applications using multiple language-specific SDKs, including:

- [Azure AI Language Service Question Answering client library for Python](https://pypi.org/project/azure-ai-language-questionanswering/)
- [Azure AI Language Service Question Answering client library for .NET](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering)

This exercise takes approximately **20** minutes.

## Provision an *Azure AI Language* resource

If you don't already have one in your subscription, you'll need to provision an **Azure AI Language service** resource. Additionally, to create and host a knowledge base for question answering, you need to enable the **Question Answering** feature.

1. Open the Azure portal at `https://portal.azure.com`, and sign in using the Microsoft account associated with your Azure subscription.
1. Select **Create a resource**.
1. In the search field, search for **Language service**. Then, in the results, select **Create** under **Language Service**.
1. Select the **Custom question answering** block. Then select **Continue to create your resource**. You will need to enter the following settings:

    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Choose or create a resource group*.
    - **Region**: *Choose any available location*
    - **Name**: *Enter a unique name*
    - **Pricing tier**: Select **F0** (*free*), or **S** (*standard*) if F is not available.
    - **Azure Search region**: *Choose a location in the same global region as your Language resource*
    - **Azure Search pricing tier**: Free (F) (*If this tier is not available, select Basic (B)*)
    - **Responsible AI Notice**: *Agree*

1. Select **Review + create**, then select **Create**.

    > **NOTE**
    > Custom Question Answering uses Azure Search to index and query the knowledge base of questions and answers.

1. Wait for deployment to complete, and then go to the deployed resource.
1. View the **Keys and Endpoint** page in the **Resource Management** section. You will need the information on this page later in the exercise.

## Create a question answering project

To create a knowledge base for question answering in your Azure AI Language resource, you can use the Language Studio portal to create a question answering project. In this case, you'll create a knowledge base containing questions and answers about [Microsoft Learn](https://learn.microsoft.com/training/).

1. In a new browser tab, go to the Language Studio portal at [https://language.cognitive.azure.com/](https://language.cognitive.azure.com/) and sign in using the Microsoft account associated with your Azure subscription.
1. If you're prompted to choose a Language resource, select the following settings:
    - **Azure Directory**: The Azure directory containing your subscription.
    - **Azure subscription**: Your Azure subscription.
    - **Resource type**: Language
    - **Resource name**: The Azure AI Language resource you created previously.

    If you are <u>not</u> prompted to choose a language resource, it may be because you have multiple Language resources in your subscription; in which case:

    1. On the bar at the top of the page, select the **Settings (&#9881;)** button.
    2. On the **Settings** page, view the **Resources** tab.
    3. Select the language resource you just created, and click **Switch resource**.
    4. At the top of the page, click **Language Studio** to return to the Language Studio home page.

1. At the top of the portal, in the **Create new** menu, select **Custom question answering**.
1. In the **Create a project** wizard, on the **Choose language setting** page, select the option to **Select the language for all projects**, and select **English** as the language. Then select **Next**.
1. On the **Enter basic information** page, enter the following details:
    - **Name**: `LearnFAQ`
    - **Description**: `FAQ for Microsoft Learn`
    - **Default answer when no answer is returned**: `Sorry, I don't understand the question`
1. Select **Next**.
1. On the **Review and finish** page, select **Create project**.

## Add sources to the knowledge base

You can create a knowledge base from scratch, but it's common to start by importing questions and answers from an existing FAQ page or document. In this case, you'll import data from an existing FAQ web page for Microsoft Learn, and you'll also import some pre-defined "chit chat" questions and answers to support common conversational exchanges.

1. On the **Manage sources** page for your question answering project, in the **&#9547; Add source** list, select **URLs**. Then in the **Add URLs** dialog box, select **&#9547; Add url** and set the following name and URL before you select **Add all** to add it to the knowledge base:
    - **Name**: `Learn FAQ Page`
    - **URL**: `https://learn.microsoft.com/en-us/training/support/faq?pivots=general`
1. On the **Manage sources** page for your question answering project, in the **&#9547; Add source** list, select **Chitchat**. Then in the **Add chit chat** dialog box, select **Friendly** and select **Add chit chat**.

> **NOTE**  
> If you encounter the error **BadArgument Invalid input**, follow these steps as a workaround:
>
> - Open the FAQ page in a new browser tab:  
>   `https://learn.microsoft.com/en-us/training/support/faq?pivots=general`
> - At the bottom left panel, look for the **Download PDF** button.
> - You’ll be taken to a PDF view of the webpage. Select the print option (or press `Ctrl+P` / `Cmd+P`).
> - In the print dialog, choose **Save as PDF** as the printer and select **Pages 1–4** (these pages cover the FAQ content needed).
> - Save the file locally.
> - Go back to the **Manage sources** page, select **+ Add source**, and choose **Files**.
> - Select **+ Add File**, enter `Learn FAQ Page` as the name, upload the saved PDF, and select **Add all**.

## Edit the knowledge base

Your knowledge base has been populated with question and answer pairs from the Microsoft Learn FAQ, supplemented with a set of conversational *chit-chat* question and answer pairs. You can extend the knowledge base by adding additional question and answer pairs.

1. In your **LearnFAQ** project in Language Studio, select the **Edit knowledge base** page to see the existing question and answer pairs (if some tips are displayed, read them and choose **Got it** to dismiss them, or select **Skip all**).
1. In the knowledge base, on the **Question answer pairs** tab, select **&#65291;**, and create a new question answer pair with the following settings:
    - **Source**: `https://learn.microsoft.com/en-us/training/support/faq?pivots=general`
    - **Question**: `What are the different types of modules on Microsoft Learn?`
    - **Answer**: `Microsoft Learn offers various types of training modules, including role-based learning paths, product-specific modules, and hands-on labs. Each module contains units with lessons and knowledge checks to help you learn at your own pace.`
1. Select **Done**.
1. In the page for the **What are the different types of modules on Microsoft Learn?** question that is created, expand **Alternate questions**. Then add the alternate question `How are training modules organized?`.

    In some cases, it makes sense to enable the user to follow up on an answer by creating a *multi-turn* conversation that enables the user to iteratively refine the question to get to the answer they need.

1. Under the answer you entered for the module types question, expand **Follow-up prompts** and add the following follow-up prompt:
    - **Text displayed in the prompt to the user**: `Learn more about training`.
    - Select the **Create link to new pair** tab, and enter this text: `You can explore modules and learning paths on the [Microsoft Learn training page](https://learn.microsoft.com/training/).`
    - Select **Show in contextual flow only**. This option ensures that the answer is only ever returned in the context of a follow-up question from the original module types question.
1. Select **Add prompt**.

## Train and test the knowledge base

Now that you have a knowledge base, you can test it in Language Studio.

1. Save the changes to your knowledge base by selecting the **Save** button under the **Question answer pairs** tab on the left.
1. After the changes have been saved, select the **Test** button to open the test pane.
1. In the test pane, at the top, deselect **Include short answer response** (if it isn't already deselected). Then at the bottom, enter the message `Hello`. A suitable response should be returned.
1. In the test pane, at the bottom enter the message `What is Microsoft Learn?`. An appropriate response from the FAQ should be returned.
1. Enter the message `Thanks!`. An appropriate chit-chat response should be returned.
1. Enter the message `What are the different types of modules on Microsoft Learn?`. The answer you created should be returned along with a follow-up prompt link.
1. Select the **Learn more about training** follow-up link. The follow-up answer with a link to the training page should be returned.
1. When you're done testing the knowledge base, close the test pane.

## Deploy the knowledge base

The knowledge base provides a back-end service that client applications can use to answer questions. Now you are ready to publish your knowledge base and access its REST interface from a client.

1. In the **LearnFAQ** project in Language Studio, select the **Deploy knowledge base** page from the navigation menu on the left.
1. At the top of the page, select **Deploy**. Then select **Deploy** to confirm you want to deploy the knowledge base.
1. When deployment is complete, select **Get prediction URL** to view the REST endpoint for your knowledge base, and note that the sample request (sketched in Python after these steps) includes parameters for:
    - **projectName**: The name of your project (which should be *LearnFAQ*)
    - **deploymentName**: The name of your deployment (which should be *production*)
1. Close the prediction URL dialog box.
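
For reference, here's a minimal sketch of what calling that prediction URL directly looks like with the Python `requests` package. The endpoint, key, and `api-version` values shown here are placeholders and assumptions; use the values from your own **Get prediction URL** dialog:

```python
import requests

# Placeholders - copy the real values from your resource's Keys and Endpoint page
endpoint = 'https://<your-resource>.cognitiveservices.azure.com'
key = '<your-key>'

url = '{}/language/:query-knowledgebases'.format(endpoint)
params = {
    'projectName': 'LearnFAQ',
    'deploymentName': 'production',
    'api-version': '2021-10-01'  # assumed version; check your sample request
}
headers = {'Ocp-Apim-Subscription-Key': key}
body = {'question': 'What is Microsoft Learn?', 'top': 1}

response = requests.post(url, params=params, headers=headers, json=body)
for answer in response.json()['answers']:
    print(answer['answer'])
```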

## Prepare to develop an app in Cloud Shell

You'll develop your question answering app using Cloud Shell in the Azure portal. The code files for your app have been provided in a GitHub repo.

1. In the Azure portal, use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell, selecting a ***PowerShell*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.

    > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***.

1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor).

    **<font color="red">Ensure you've switched to the classic version of the cloud shell before continuing.</font>**

1. In the PowerShell pane, enter the following commands to clone the GitHub repo for this exercise:

    ```
    rm -r mslearn-ai-language -f
    git clone https://github.com/microsoftlearning/mslearn-ai-language
    ```

    > **Tip**: As you enter commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the `cls` command to make it easier to focus on each task.

1. After the repo has been cloned, navigate to the folder containing the application code files:  

    ```
    cd mslearn-ai-language/Labfiles/02-qna/Python/qna-app
    ```

## Configure your application

1. In the command line pane, run the following command to view the code files in the **qna-app** folder:

    ```
    ls -a -l
    ```

    The files include a configuration file (**.env**) and a code file (**qna-app.py**).

1. Create a Python virtual environment and install the Azure AI Language Question Answering SDK package and other required packages by running the following command:

    ```
    python -m venv labenv;
    ./labenv/bin/Activate.ps1;
    pip install -r requirements.txt azure-ai-language-questionanswering
    ```

1. Enter the following command to edit the configuration file:

    ```
    code .env
    ```

    The file is opened in a code editor.

1. In the configuration file, update the values to reflect the **endpoint** and an authentication **key** for the Azure AI Language resource you created (available on the **Keys and Endpoint** page for your Azure AI Language resource in the Azure portal). Also confirm that the project name and deployment name for your deployed knowledge base are set in this file.
1. After you've replaced the placeholders, within the code editor, use the **CTRL+S** command or **Right-click > Save** to save your changes and then use the **CTRL+Q** command or **Right-click > Quit** to close the code editor while keeping the cloud shell command line open.

## Add code to use your knowledge base

1. Enter the following command to edit the application code file:

    ```
    code qna-app.py
    ```

1. Review the existing code. You will add code to work with your knowledge base.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. In the code file, find the comment **Import namespaces**. Then, under this comment, add the following language-specific code to import the namespaces you will need to use the Question Answering SDK:

    ```python
    # import namespaces
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.language.questionanswering import QuestionAnsweringClient
    ```

1. In the **main** function, note that code to load the Azure AI Language service endpoint and key from the configuration file has already been provided. Then find the comment **Create client using endpoint and key**, and add the following code to create a question answering client:

    ```python
    # Create client using endpoint and key
    credential = AzureKeyCredential(ai_key)
    ai_client = QuestionAnsweringClient(endpoint=ai_endpoint, credential=credential)
    ```

1. In the code file, find the comment **Submit a question and display the answer**, and add the following code to repeatedly read questions from the command line, submit them to the service, and display details of the answers:

    ```python
    # Submit a question and display the answer
    user_question = ''
    while True:
        user_question = input('\nQuestion:\n')
        if user_question.lower() == "quit":
            break
        response = ai_client.get_answers(question=user_question,
                                         project_name=ai_project_name,
                                         deployment_name=ai_deployment_name)
        for candidate in response.answers:
            print(candidate.answer)
            print("Confidence: {}".format(candidate.confidence))
            print("Source: {}".format(candidate.source))
    ```

1. Save your changes (CTRL+S), then enter the following command to run the program (you can maximize the cloud shell pane and resize the panels to see more text in the command line pane):

    ```
    python qna-app.py
    ```

1. When prompted, enter a question to be submitted to your question answering project; for example `What is a learning path?`.
1. Review the answer that is returned.
1. Ask more questions. When you're done, enter `quit`.
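
The `get_answers` call in this app passes only the required parameters. The SDK also accepts optional keyword arguments, such as `top` and `confidence_threshold`, to control how many candidate answers are returned and to filter out low-confidence ones. A minimal sketch (treat the exact argument names as an assumption to verify against the SDK documentation):

```python
# Request up to three candidate answers, ignoring low-confidence ones
response = ai_client.get_answers(question=user_question,
                                 project_name=ai_project_name,
                                 deployment_name=ai_deployment_name,
                                 top=3,
                                 confidence_threshold=0.5)
```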

## Clean up resources

If you're finished exploring the Azure AI Language service, you can delete the resources you created in this exercise. Here's how:

1. Close the Azure cloud shell pane.
1. In the Azure portal, browse to the Azure AI Language resource you created in this lab.
1. On the resource page, select **Delete** and follow the instructions to delete the resource.

## More information

To learn more about question answering in Azure AI Language, see the [Azure AI Language documentation](https://learn.microsoft.com/azure/ai-services/language-service/question-answering/overview).


================================================
FILE: Instructions/Labs/03-language-understanding.md
================================================
---
lab:
    title: 'Create a language understanding model with the Azure AI Language service (deprecated)'
    description: "Create a custom language understanding model to interpret input, predict intent, and identify entities."
    islab: false
---

# Create a language understanding model with the Language service (deprecated)

> **Note**: This exercise is deprecated. Consider reviewing the QuickStart tutorial at <https://learn.microsoft.com/azure/ai-services/language-service/conversational-language-understanding/quickstart>.

The Azure AI Language service enables you to define a *conversational language understanding* model that applications can use to interpret natural language *utterances* from users (text or spoken input), predict the user's *intent* (what they want to achieve), and identify any *entities* to which the intent should be applied.

For example, a conversational language model for a clock application might be expected to process input such as:

*What is the time in London?*

This kind of input is an example of an *utterance* (something a user might say or type), for which the desired *intent* is to get the time in a specific location (an *entity*); in this case, London.

> **NOTE**
> The task of a conversational language model is to predict the user's intent and identify any entities to which the intent applies. It is <u>not</u> the job of a conversational language model to actually perform the actions required to satisfy the intent. For example, a clock application can use a conversational language model to discern that the user wants to know the time in London; but the client application itself must then implement the logic to determine the correct time and present it to the user.

In this exercise, you'll use the Azure AI Language service to create a conversational language understanding model, and use the Python SDK to implement a client app that uses it.

While this exercise is based on Python, you can develop conversational understanding applications using multiple language-specific SDKs, including:

- [Azure AI Conversations client library for Python](https://pypi.org/project/azure-ai-language-conversations/)
- [Azure AI Conversations client library for .NET](https://www.nuget.org/packages/Azure.AI.Language.Conversations)
- [Azure AI Conversations client library for JavaScript](https://www.npmjs.com/package/@azure/ai-language-conversations)

This exercise takes approximately **35** minutes.

## Provision an *Azure AI Language* resource

If you don't already have one in your subscription, you'll need to provision an **Azure AI Language service** resource in your Azure subscription.

1. Open the Azure portal at `https://portal.azure.com`, and sign in using the Microsoft account associated with your Azure subscription.
1. Select **Create a resource**.
1. In the search field, search for **Language service**. Then, in the results, select **Create** under **Language Service**.
1. Provision the resource using the following settings:
    - **Subscription**: *Your Azure subscription*.
    - **Resource group**: *Choose or create a resource group*.
    - **Region**: *Choose one of the following regions*
        - Australia East
        - Central India
        - China East 2
        - East US
        - East US 2
        - North Europe
        - South Central US
        - Switzerland North
        - UK South
        - West Europe
        - West US 2
        - West US 3
    - **Name**: *Enter a unique name*.
    - **Pricing tier**: Select **F0** (*free*), or **S** (*standard*) if F is not available.
    - **Responsible AI Notice**: Agree.
1. Select **Review + create**, then select **Create** to provision the resource.
1. Wait for deployment to complete, and then go to the deployed resource.
1. View the **Keys and Endpoint** page. You will need the information on this page later in the exercise.

## Create a conversational language understanding project

Now that you have created an authoring resource, you can use it to create a conversational language understanding project.

1. In a new browser tab, open the Azure AI Language Studio portal at `https://language.cognitive.azure.com/` and sign in using the Microsoft account associated with your Azure subscription.

1. If prompted to choose a Language resource, select the following settings:

    - **Azure Directory**: The Azure directory containing your subscription.
    - **Azure subscription**: Your Azure subscription.
    - **Resource type**: Language.
    - **Language resource**: The Azure AI Language resource you created previously.

    If you are <u>not</u> prompted to choose a language resource, it may be because you have multiple Language resources in your subscription; in which case:

    1. On the bar at the top of the page, select the **Settings (&#9881;)** button.
    2. On the **Settings** page, view the **Resources** tab.
    3. Select the language resource you just created, and click **Switch resource**.
    4. At the top of the page, click **Language Studio** to return to the Language Studio home page.

1. At the top of the portal, in the **Create new** menu, select **Conversational language understanding**.

1. In the **Create a project** dialog box, on the **Enter basic information** page, enter the following details and then select **Next**:
    - **Name**: `Clock`
    - **Utterances primary language**: English
    - **Enable multiple languages in project?**: *Unselected*
    - **Description**: `Natural language clock`

1. On the **Review and finish** page, select **Create**.

### Create intents

The first thing we'll do in the new project is to define some intents. The model will ultimately predict which of these intents a user is requesting when submitting a natural language utterance.

> **Tip**: When working on your project, if some tips are displayed, read them and select **Got it** to dismiss them, or select **Skip all**.

1. On the **Schema definition** page, on the **Intents** tab, select **&#65291; Add** to add a new intent named `GetTime`.
1. Verify that the **GetTime** intent is listed (along with the default **None** intent). Then add the following additional intents:
    - `GetDay`
    - `GetDate`

### Label each intent with sample utterances

To help the model predict which intent a user is requesting, you must label each intent with some sample utterances.

1. In the pane on the left, select the **Data Labeling** page.

    > **Tip**: You can expand the pane with the **>>** icon to see the page names, and hide it again with the **<<** icon.

1. Select the new **GetTime** intent and enter the utterance `what is the time?`. This adds the utterance as sample input for the intent.
1. Add the following additional utterances for the **GetTime** intent:
    - `what's the time?`
    - `what time is it?`
    - `tell me the time`

    > **NOTE**
    > To add a new utterance, write the utterance in the textbox next to the intent and then press ENTER.

1. Select the **GetDay** intent and add the following utterances as example input for that intent:
    - `what day is it?`
    - `what's the day?`
    - `what is the day today?`
    - `what day of the week is it?`

1. Select the **GetDate** intent and add the following utterances for it:
    - `what date is it?`
    - `what's the date?`
    - `what is the date today?`
    - `what's today's date?`

1. After you've added utterances for each of your intents, select **Save changes**.

### Train and test the model

Now that you've added some intents, let's train the language model and see if it can correctly predict them from user input.

1. In the pane on the left, select **Training jobs**. Then select **+ Start a training job**.

1. On the **Start a training job** dialog, select the option to train a new model and name it `Clock`. Select **Standard training** mode and the default **Data splitting** options.

1. To begin the process of training your model, select **Train**.

1. When training is complete (which may take several minutes), the job **Status** will change to **Training succeeded**.

1. Select the **Model performance** page, and then select the **Clock** model. Review the overall and per-intent evaluation metrics (*precision*, *recall*, and *F1 score*) and the *confusion matrix* generated by the evaluation that was performed when training (note that due to the small number of sample utterances, not all intents may be included in the results).

    > **NOTE**
    > To learn more about the evaluation metrics, refer to the [documentation](https://learn.microsoft.com/azure/ai-services/language-service/conversational-language-understanding/concepts/evaluation-metrics)

1. Go to the **Deploying a model** page, then select **Add deployment**.

1. On the **Add deployment** dialog, select **Create a new deployment name**, and then enter `production`.

1. Select the **Clock** model in the **Model** field, then select **Deploy**. The deployment may take some time.

1. When the model has been deployed, select the **Testing deployments** page, then select the **production** deployment in the **Deployment name** field.

1. Enter the following text in the empty textbox, and then select **Run the test**:

    `what's the time now?`

    Review the result that is returned, noting that it includes the predicted intent (which should be **GetTime**) and a confidence score that indicates the probability the model calculated for the predicted intent. The JSON tab shows the comparative confidence for each potential intent (the one with the highest confidence score is the predicted intent).

1. Clear the text box, and then run another test with the following text:

    `tell me the time`

    Again, review the predicted intent and confidence score.

1. Try the following text:

    `what's the day today?`

    Hopefully the model predicts the **GetDay** intent.

## Add entities

So far you've defined some simple utterances that map to intents. Most real applications include more complex utterances from which specific data entities must be extracted to get more context for the intent.

### Add a learned entity

The most common kind of entity is a *learned* entity, in which the model learns to identify entity values based on examples.

1. In Language Studio, return to the **Schema definition** page and then on the **Entities** tab, select **&#65291; Add** to add a new entity.

1. In the **Add an entity** dialog box, enter the entity name `Location` and ensure that the **Learned** tab is selected. Then select **Add entity**.

1. After the **Location** entity has been created, return to the **Data labeling** page.
1. Select the **GetTime** intent and enter the following new example utterance:

    `what time is it in London?`

1. When the utterance has been added, select the word **London**, and in the drop-down list that appears, select **Location** to indicate that "London" is an example of a location.

1. Add another example utterance for the **GetTime** intent:

    `Tell me the time in Paris?`

1. When the utterance has been added, select the word **Paris**, and map it to the **Location** entity.

1. Add another example utterance for the **GetTime** intent:

    `what's the time in New York?`

1. When the utterance has been added, select the words **New York**, and map them to the **Location** entity.

1. Select **Save changes** to save the new utterances.

### Add a *list* entity

In some cases, valid values for an entity can be restricted to a list of specific terms and synonyms, which can help the app identify instances of the entity in utterances.

1. In Language Studio, return to the **Schema definition** page and then on the **Entities** tab, select **&#65291; Add** to add a new entity.

1. In the **Add an entity** dialog box, enter the entity name `Weekday` and select the **List** entity tab. Then select **Add entity**.

1. On the page for the **Weekday** entity, in the **Learned** section, ensure **Not required** is selected. Then, in the **List** section, select **&#65291; Add new list**. Then enter the following value and synonym and select **Save**:

    | Value | Synonyms |
    |-------|----------|
    | `Sunday` | `Sun` |

    > **NOTE**
    > To enter the fields of the new list, insert the value `Sunday` in the text field, then click on the field where 'Type in value and press enter...' is displayed, enter the synonyms, and press ENTER.

1. Repeat the previous step to add the following list components:

    | Value | Synonyms |
    |-------|----------|
    | `Monday` | `Mon` |
    | `Tuesday` | `Tue, Tues` |
    | `Wednesday` | `Wed, Weds` |
    | `Thursday` | `Thur, Thurs` |
    | `Friday` | `Fri` |
    | `Saturday` | `Sat` |

1. After adding and saving the list values, return to the **Data labeling** page.
1. Select the **GetDate** intent and enter the following new example utterance:

    `what date was it on Saturday?`

1. When the utterance has been added, select the word ***Saturday***, and in the drop-down list that appears, select **Weekday**.

1. Add another example utterance for the **GetDate** intent:

    `what date will it be on Friday?`

1. When the utterance has been added, map **Friday** to the **Weekday** entity.

1. Add another example utterance for the **GetDate** intent:

    `what will the date be on Thurs?`

1. When the utterance has been added, map **Thurs** to the **Weekday** entity.

1. Select **Save changes** to save the new utterances.

### Add a *prebuilt* entity

The Azure AI Language service provides a set of *prebuilt* entities that are commonly used in conversational applications.

1. In Language Studio, return to the **Schema definition** page and then on the **Entities** tab, select **&#65291; Add** to add a new entity.

1. In the **Add an entity** dialog box, enter the entity name `Date` and select the **Prebuilt** entity tab. Then select **Add entity**.

1. On the page for the **Date** entity, in the **Learned** section, ensure **Not required** is selected. Then, in the **Prebuilt** section, select **&#65291; Add new prebuilt**.

1. In the **Select prebuilt** list, select **DateTime** and then select **Save**.
1. After adding the prebuilt entity, return to the **Data labeling** page.
1. Select the **GetDay** intent and enter the following new example utterance:

    `what day was 01/01/1901?`

1. When the utterance has been added, select ***01/01/1901***, and in the drop-down list that appears, select **Date**.

1. Add another example utterance for the **GetDay** intent:

    `what day will it be on Dec 31st 2099?`

1. When the utterance has been added, map **Dec 31st 2099** to the **Date** entity.

1. Select **Save changes** to save the new utterances.

### Retrain the model

Now that you've modified the schema, you need to retrain and retest the model.

1. On the **Training jobs** page, select **Start a training job**.

1. On the **Start a training job** dialog, select **Overwrite an existing model** and specify the **Clock** model. Select **Train** to train the model. If prompted, confirm you want to overwrite the existing model.

1. When training is complete, the job **Status** will update to **Training succeeded**.

1. Select the **Model performance** page and then select the **Clock** model. Review the evaluation metrics (*precision*, *recall*, and *F1 score*) and the *confusion matrix* generated by the evaluation that was performed when training (note that due to the small number of sample utterances, not all intents may be included in the results).

1. On the **Deploying a model** page, select **Add deployment**.

1. On the **Add deployment** dialog, select **Override an existing deployment name**, and then select **production**.

1. Select the **Clock** model in the **Model** field and then select **Deploy** to deploy it. This may take some time.

1. When the model is deployed, on the **Testing deployments** page, select the **production** deployment under the **Deployment name** field, and then test it with the following text:

    `what's the time in Edinburgh?`

1. Review the result that is returned, which should hopefully predict the **GetTime** intent and a **Location** entity with the text value "Edinburgh".

1. Try testing the following utterances:

    `what time is it in Tokyo?`

    `what date is it on Friday?`

    `what's the date on Weds?`

    `what day was 01/01/2020?`

    `what day will Mar 7th 2030 be?`

## Use the model from a client app

In a real project, you'd iteratively refine intents and entities, retrain, and retest until you're satisfied with the model's predictive performance. Once you are, you can use the model in a client app by calling its REST interface or a runtime-specific SDK.
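
For reference, here's a minimal sketch of calling the prediction REST endpoint directly with the Python `requests` package. The endpoint, key, and `api-version` values are placeholders and assumptions (check the service documentation for the current version); the JSON body mirrors the task payload used with the SDK later in this exercise:

```python
import requests

# Placeholders - copy the real values from your resource's Keys and Endpoint page
endpoint = 'https://<your-resource>.cognitiveservices.azure.com'
key = '<your-key>'

url = '{}/language/:analyze-conversations'.format(endpoint)
params = {'api-version': '2023-04-01'}  # assumed version
headers = {'Ocp-Apim-Subscription-Key': key}
body = {
    'kind': 'Conversation',
    'analysisInput': {
        'conversationItem': {
            'participantId': '1',
            'id': '1',
            'modality': 'text',
            'language': 'en',
            'text': "what's the time in Edinburgh?"
        }
    },
    'parameters': {
        'projectName': 'Clock',
        'deploymentName': 'production',
        'verbose': True
    }
}

result = requests.post(url, params=params, headers=headers, json=body).json()
print(result['result']['prediction']['topIntent'])
```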

### Prepare to develop an app in Cloud Shell

You'll develop your language understanding app using Cloud Shell in the Azure portal. The code files for your app have been provided in a GitHub repo.

1. In the Azure portal, use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell, selecting a ***PowerShell*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.

    > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***.

1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor).

    **<font color="red">Ensure you've switched to the classic version of the cloud shell before continuing.</font>**

1. In the PowerShell pane, enter the following commands to clone the GitHub repo for this exercise:

    ```
    rm -r mslearn-ai-language -f
    git clone https://github.com/microsoftlearning/mslearn-ai-language
    ```

    > **Tip**: As you paste commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the `cls` command to make it easier to focus on each task.

1. After the repo has been cloned, navigate to the folder containing the application code files:  

    ```
    cd mslearn-ai-language/Labfiles/03-language/Python/clock-client
    ```

### Configure your application

1. In the command line pane, run the following command to view the code files in the **clock-client** folder:

    ```
    ls -a -l
    ```

    The files include a configuration file (**.env**) and a code file (**clock-client.py**).

1. Create a Python virtual environment and install the Azure AI Language Conversations SDK package and other required packages by running the following command:

    ```
    python -m venv labenv;
    ./labenv/bin/Activate.ps1;
    pip install -r requirements.txt azure-ai-language-conversations==1.1.0
    ```

1. Enter the following command to edit the configuration file:

    ```
    code .env
    ```

    The file is opened in a code editor.

1. Update the configuration values to include the **endpoint** and a **key** from the Azure AI Language resource you created (available on the **Keys and Endpoint** page for your Azure AI Language resource in the Azure portal).
1. After you've replaced the placeholders, within the code editor, use the **CTRL+S** command or **Right-click > Save** to save your changes and then use the **CTRL+Q** command or **Right-click > Quit** to close the code editor while keeping the cloud shell command line open.

### Add code to the application

1. Enter the following command to edit the application code file:

    ```
    code clock-client.py
    ```

1. Review the existing code. You will add code to work with the AI Language Conversations SDK.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need to use the AI Language Conversations SDK:

    ```python
    # Import namespaces
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.language.conversations import ConversationAnalysisClient
    ```

1. In the **main** function, note that code to load the prediction endpoint and key from the configuration file has already been provided. Then find the comment **Create a client for the Language service model** and add the following code to create a conversation analysis client for your AI Language service:

    ```python
    # Create a client for the Language service model
    client = ConversationAnalysisClient(
        ls_prediction_endpoint, AzureKeyCredential(ls_prediction_key))
    ```

1. Note that the code in the **main** function prompts for user input until the user enters "quit". Within this loop, find the comment **Call the Language service model to get intent and entities** and add the following code:

    ```python
    # Call the Language service model to get intent and entities
    cls_project = 'Clock'
    deployment_slot = 'production'

    with client:
        query = userText
        result = client.analyze_conversation(
            task={
                "kind": "Conversation",
                "analysisInput": {
                    "conversationItem": {
                        "participantId": "1",
                        "id": "1",
                        "modality": "text",
                        "language": "en",
                        "text": query
                    },
                    "isLoggingEnabled": False
                },
                "parameters": {
                    "projectName": cls_project,
                    "deploymentName": deployment_slot,
                    "verbose": True
                }
            }
        )

    top_intent = result["result"]["prediction"]["topIntent"]
    entities = result["result"]["prediction"]["entities"]

    print("view top intent:")
    print("\ttop intent: {}".format(result["result"]["prediction"]["topIntent"]))
    print("\tcategory: {}".format(result["result"]["prediction"]["intents"][0]["category"]))
    print("\tconfidence score: {}\n".format(result["result"]["prediction"]["intents"][0]["confidenceScore"]))

    print("view entities:")
    for entity in entities:
        print("\tcategory: {}".format(entity["category"]))
        print("\ttext: {}".format(entity["text"]))
        print("\tconfidence score: {}".format(entity["confidenceScore"]))

    print("query: {}".format(result["result"]["query"]))
    ```

    The call to the conversational understanding model returns a prediction/result, which includes the top (most likely) intent as well as any entities that were detected in the input utterance. Your client application must now use that prediction to determine and perform the appropriate action.

1. Find the comment **Apply the appropriate action**, and add the following code, which checks for intents supported by the application (**GetTime**, **GetDate**, and **GetDay**) and determines if any relevant entities have been detected, before calling an existing function to produce an appropriate response.

    ```python
    # Apply the appropriate action
    if top_intent == 'GetTime':
        location = 'local'
        # Check for entities
        if len(entities) > 0:
            # Check for a location entity
            for entity in entities:
                if 'Location' == entity["category"]:
                    # ML entities are strings, get the first one
                    location = entity["text"]
        # Get the time for the specified location
        print(GetTime(location))

    elif top_intent == 'GetDay':
        date_string = date.today().strftime("%m/%d/%Y")
        # Check for entities
        if len(entities) > 0:
            # Check for a Date entity
            for entity in entities:
                if 'Date' == entity["category"]:
                    # Regex entities are strings, get the first one
                    date_string = entity["text"]
        # Get the day for the specified date
        print(GetDay(date_string))

    elif top_intent == 'GetDate':
        day = 'today'
        # Check for entities
        if len(entities) > 0:
            # Check for a Weekday entity
            for entity in entities:
                if 'Weekday' == entity["category"]:
                    # List entities are lists
                    day = entity["text"]
        # Get the date for the specified day
        print(GetDate(day))

    else:
        # Some other intent (for example, "None") was predicted
        print('Try asking me for the time, the day, or the date.')
    ```

1. Save your changes (CTRL+S), then enter the following command to run the program (you can maximize the cloud shell pane and resize the panels to see more text in the command line pane):

    ```
    python clock-client.py
    ```

1. When prompted, enter utterances to test the application. For example, try:

    *Hello*

    *What time is it?*

    *What's the time in London?*

    *What's the date?*

    *What date is Sunday?*

    *What day is it?*

    *What day is 01/01/2025?*

    > **Note**: The logic in the application is deliberately simple, and has a number of limitations. For example, when getting the time, only a restricted set of cities is supported and daylight savings time is ignored. The goal is to see an example of a typical pattern for using Language Service in which your application must:
    > 1. Connect to a prediction endpoint.
    > 2. Submit an utterance to get a prediction.
    > 3. Implement logic to respond appropriately to the predicted intent and entities.

1. When you have finished testing, enter *quit*.

## Clean up resources

If you're finished exploring the Azure AI Language service, you can delete the resources you created in this exercise. Here's how:

1. Close the Azure cloud shell pane.
1. In the Azure portal, browse to the Azure AI Language resource you created in this lab.
1. On the resource page, select **Delete** and follow the instructions to delete the resource.

## More information

To learn more about conversational language understanding in Azure AI Language, see the [Azure AI Language documentation](https://learn.microsoft.com/azure/ai-services/language-service/conversational-language-understanding/overview).


================================================
FILE: Instructions/Labs/04-text-classification.md
================================================
---
lab:
    title: 'Custom text classification (deprecated)'
    description: "Apply custom classifications to text input using Azure AI Language."
    islab: false
---

# Custom text classification (deprecated)

> **Note**: This exercise is deprecated. Consider reviewing the QuickStart tutorial at <https://learn.microsoft.com/azure/ai-services/language-service/custom-text-classification/quickstart>.

Azure AI Language provides several NLP capabilities, including key phrase identification, text summarization, and sentiment analysis. The Language service also provides custom features like custom question answering and custom text classification.

To test the custom text classification capability of the Azure AI Language service, you'll configure a model using Language Studio, then use a Python application to test it.

While this exercise is based on Python, you can develop text classification applications using multiple language-specific SDKs, including:

- [Azure AI Text Analytics client library for Python](https://pypi.org/project/azure-ai-textanalytics/)
- [Azure AI Text Analytics client library for .NET](https://www.nuget.org/packages/Azure.AI.TextAnalytics)
- [Azure AI Text Analytics client library for JavaScript](https://www.npmjs.com/package/@azure/ai-text-analytics)

This exercise takes approximately **35** minutes.

## Provision an *Azure AI Language* resource

If you don't already have one in your subscription, you'll need to provision an **Azure AI Language service** resource. Additionally, to use custom text classification, you need to enable the **Custom text classification & extraction** feature.

1. Open the Azure portal at `https://portal.azure.com`, and sign in using the Microsoft account associated with your Azure subscription.
1. Select **Create a resource**.
1. In the search field, search for **Language service**. Then, in the results, select **Create** under **Language Service**.
1. Select the box that includes **Custom text classification**. Then select **Continue to create your resource**.
1. Create a resource with the following settings:
    - **Subscription**: *Your Azure subscription*.
    - **Resource group**: *Select or create a resource group*.
    - **Region**: *Choose one of the following regions*
        - Australia East
        - Central India
        - East US
        - East US 2
        - North Europe
        - South Central US
        - Switzerland North
        - UK South
        - West Europe
        - West US 2
        - West US 3
    - **Name**: *Enter a unique name*.
    - **Pricing tier**: Select **F0** (*free*), or **S** (*standard*) if F is not available.
    - **Storage account**: New storage account
      - **Storage account name**: *Enter a unique name*.
      - **Storage account type**: Standard LRS
    - **Responsible AI notice**: Selected.

1. Select **Review + create**, then select **Create** to provision the resource.
1. Wait for deployment to complete, and then go to the resource group.
1. Find the storage account you created, select it, and verify the *Account kind* is **StorageV2**. If it's v1, upgrade your storage account kind on that resource page.

## Configure role-based access for your user

> **NOTE**: If you skip this step, you'll get a 403 error when trying to connect to your custom project. It's important that your current user has this role to access storage account blob data, even if you're the owner of the storage account.

1. Go to your storage account page in the Azure portal.
2. Select **Access Control (IAM)** in the left navigation menu.
3. Select **Add** to add a role assignment, and choose the **Storage Blob Data Owner** role on the storage account.
4. Within **Assign access to**, select **User, group, or service principal**.
5. Select **Select members**.
6. Select your User. You can search for user names in the **Select** field.

## Upload sample articles

Once you've created the Azure AI Language service and storage account, you'll need to upload example articles to train your model later.

1. In a new browser tab, download sample articles from `https://aka.ms/classification-articles` and extract the files to a folder of your choice.

1. In the Azure portal, navigate to the storage account you created, and select it.

1. In your storage account, select **Configuration**, located below **Settings**. In the Configuration screen, enable the option to **Allow Blob anonymous access**, then select **Save**.

1. Select **Containers** in the left menu, located below **Data storage**. On the screen that appears, select **+ Container**. Give the container the name `articles`, and set **Anonymous access level** to **Container (anonymous read access for containers and blobs)**.

    > **NOTE**: When you configure a storage account for a real solution, be careful to assign the appropriate access level. To learn more about each access level, see the [Azure Storage documentation](https://learn.microsoft.com/azure/storage/blobs/anonymous-read-access-configure).

    > **ADDITIONAL NOTE**: If the Anonymous access level option appears unavailable or cannot be changed, refresh the page and check again. Sometimes the portal needs to reload after recent security or configuration updates before the option becomes available.

1. After you've created the container, select it then select the **Upload** button. Select **Browse for files** to browse for the sample articles you downloaded. Then select **Upload**.

## Create a custom text classification project

After configuration is complete, create a custom text classification project. This project provides a working place to build, train, and deploy your model.

> **NOTE**: This lab utilizes **Language Studio**, but you can also create, build, train, and deploy your model through the REST API.

1. In a new browser tab, open the Azure AI Language Studio portal at `https://language.cognitive.azure.com/` and sign in using the Microsoft account associated with your Azure subscription.
1. If prompted to choose a Language resource, select the following settings:

    - **Azure Directory**: The Azure directory containing your subscription.
    - **Azure subscription**: Your Azure subscription.
    - **Resource type**: Language.
    - **Language resource**: The Azure AI Language resource you created previously.

    If you are <u>not</u> prompted to choose a language resource, it may be because you have multiple Language resources in your subscription; in which case:

1. On the bar at the top of the page, select the **Settings (&#9881;)** button.
    2. On the **Settings** page, view the **Resources** tab.
    3. Select the language resource you just created, and click **Switch resource**.
4. At the top of the page, click **Language Studio** to return to the Language Studio home page.

1. At the top of the portal, in the **Create new** menu, select **Custom text classification**.
1. The **Connect storage** page appears. All values should already be filled in, so select **Next**.
1. On the **Select project type** page, select **Single label classification**. Then select **Next**.
1. On the **Enter basic information** pane, set the following:
    - **Name**: `ClassifyLab`  
    - **Text primary language**: English (US)
    - **Description**: `Custom text lab`

1. Select **Next**.
1. On the **Choose container** page, set the **Blob store container** dropdown to your *articles* container.
1. Select the **No, I need to label my files as part of this project** option. Then select **Next**.
1. Select **Create project**.

> **Tip**: If you get an error about not being authorized to perform this operation, you'll need to add a role assignment. To fix this, add the "Storage Blob Data Contributor" role on the storage account for the user running the lab. More details can be found [on the documentation page](https://learn.microsoft.com/azure/ai-services/language-service/custom-named-entity-recognition/how-to/create-project?tabs=portal%2Clanguage-studio#enable-identity-management-for-your-resource).

## Label your data

Now that your project is created, you need to label, or tag, your data to train your model how to classify text.

1. On the left, select **Data labeling**, if not already selected. You'll see a list of the files you uploaded to your storage account.
1. On the right side, in the **Activity** pane, select **+ Add class**. The articles in this lab fall into four classes you'll need to create: `Classifieds`, `Sports`, `News`, and `Entertainment`.

    ![Screenshot showing the tag data page and the add class button.](../media/tag-data-add-class-new.png#lightbox)

1. After you've created your four classes, select **Article 1** to start. Here you can read the article, decide which class it belongs to, and choose which dataset (training or testing) to assign it to.
1. Assign each article the appropriate class and dataset (training or testing) using the **Activity** pane on the right. Select a label from the list of labels, set each article to **training** or **testing** using the options at the bottom of the pane, and select **Next document** to move to the next document. For the purposes of this lab, the following table defines which articles are used for training the model and which for testing it:

    | Article  | Class  | Dataset  |
    |---------|---------|---------|
    | Article 1 | Sports | Training |
    | Article 10 | News | Training |
    | Article 11 | Entertainment | Testing |
    | Article 12 | News | Testing |
    | Article 13 | Sports | Testing |
    | Article 2 | Sports | Training |
    | Article 3 | Classifieds | Training |
    | Article 4 | Classifieds | Training |
    | Article 5 | Entertainment | Training |
    | Article 6 | Entertainment | Training |
    | Article 7 | News | Training |
    | Article 8 | News | Training |
    | Article 9 | Entertainment | Training |

    > **NOTE**
    > Files in Language Studio are listed alphabetically, which is why the above list is not in sequential order. Make sure you visit both pages of documents when labeling your articles.

1. Select **Save labels** to save your labels.

## Train your model

After you've labeled your data, you need to train your model.

1. Select **Training jobs** on the left side menu.
1. Select **Start a training job**.
1. Train a new model named `ClassifyArticles`.
1. Select **Use a manual split of training and testing data**.

    > **TIP**
> In your own classification projects, the Azure AI Language service will automatically split the testing set by percentage, which is useful with a large dataset. With smaller datasets, it's important to train with the right class distribution.

1. Select **Train**.

> **IMPORTANT**
> Training your model can sometimes take several minutes. You'll get a notification when it's complete.

## Evaluate your model

In real-world applications of text classification, it's important to evaluate and improve your model to verify that it's performing as you expect.

1. Select **Model performance**, and select your **ClassifyArticles** model. There you can see the scoring of your model, its performance metrics, and when it was trained. If the score isn't 100%, at least one of the documents used for testing wasn't classified as it was labeled. These failures can help you understand where to improve.
1. Select the **Test set details** tab. If there were any errors, this tab shows the articles you designated for testing, what the model predicted each one as, and whether that prediction conflicts with the test label. The tab defaults to showing incorrect predictions only. You can toggle the **Show mismatches only** option to see all the articles you designated for testing and how each was classified, as illustrated in the sketch below.
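
    As an illustration of how such scores relate to test-set predictions, here's a minimal sketch (not the service's exact implementation; the labels and predictions are made up):

    ```python
    # Illustrative only: accuracy and per-class precision derived from
    # test-set labels vs. model predictions (values are made up)
    labeled   = ["Entertainment", "News", "Sports"]   # classes you assigned
    predicted = ["Entertainment", "News", "News"]     # classes the model returned

    correct = sum(1 for l, p in zip(labeled, predicted) if l == p)
    print(f"Accuracy: {correct / len(labeled):.2f}")  # 0.67 in this example

    # Precision per class: of the documents predicted as a class,
    # how many were actually labeled with it?
    for cls in set(predicted):
        tp = sum(1 for l, p in zip(labeled, predicted) if p == cls and l == cls)
        print(f"Precision for {cls}: {tp / predicted.count(cls):.2f}")
    ```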

## Deploy your model

When you're satisfied with the training of your model, it's time to deploy it, which allows you to start classifying text through the API.

1. On the left panel, select **Deploying model**.
1. Select **Add deployment**, then enter `articles` in the **Create a new deployment name** field, and select **ClassifyArticles** in the **Model** field.
1. Select **Deploy** to deploy your model.
1. Once your model is deployed, leave that page open. You'll need your project and deployment name in the next step.

## Prepare to develop an app in Cloud Shell

To test the custom text classification capabilities of the Azure AI Language service, you'll develop a simple console application in the Azure Cloud Shell.

1. In the Azure Portal, use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.

    > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***.

1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor).

    **<font color="red">Ensure you've switched to the classic version of the cloud shell before continuing.</font>**

1. In the PowerShell pane, enter the following commands to clone the GitHub repo for this exercise:

    ```
   rm -r mslearn-ai-language -f
   git clone https://github.com/microsoftlearning/mslearn-ai-language
    ```

    > **Tip**: As you paste commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the `cls` command to make it easier to focus on each task.

1. After the repo has been cloned, navigate to the folder containing the application code files:  

    ```
   cd mslearn-ai-language/Labfiles/04-text-classification/Python/classify-text
    ```

## Configure your application

1. In the command line pane, run the following command to view the code files in the **classify-text** folder:

    ```
   ls -a -l
    ```

    The files include a configuration file (**.env**) and a code file (**classify-text.py**). The text your application will analyze is in the **articles** subfolder.

1. Create a Python virtual environment and install the Azure AI Language Text Analytics SDK package and other required packages by running the following command:

    ```
   python -m venv labenv;
    ./labenv/bin/Activate.ps1;
    pip install -r requirements.txt azure-ai-textanalytics==5.3.0
    ```

1. Enter the following command to edit the application configuration file:

    ```
   code .env
    ```

    The file is opened in a code editor.

1. Update the configuration values to include the **endpoint** and a **key** from the Azure AI Language resource you created (available on the **Keys and Endpoint** page for your Azure AI Language resource in the Azure portal). The file should already contain the project and deployment names for your text classification model; the sketch below shows the general shape of the file.
1. After you've replaced the placeholders, within the code editor, use the **CTRL+S** command or **Right-click > Save** to save your changes and then use the **CTRL+Q** command or **Right-click > Quit** to close the code editor while keeping the cloud shell command line open.
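
    For reference, the completed **.env** file has this general shape; the variable names shown here are illustrative, so keep the names already present in the file:

    ```
    AI_SERVICE_ENDPOINT=https://<your-resource>.cognitiveservices.azure.com/
    AI_SERVICE_KEY=<your-key>
    PROJECT=ClassifyLab
    DEPLOYMENT=articles
    ```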

## Add code to classify documents

1. Enter the following command to edit the application code file:

    ```
    code classify-text.py
    ```

1. Review the existing code. You will add code to work with the AI Language Text Analytics SDK.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need to use the Text Analytics SDK:

    ```python
   # import namespaces
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient
    ```

1. In the **main** function, note that code to load the Azure AI Language service endpoint and key and the project and deployment names from the configuration file has already been provided. Then find the comment **Create client using endpoint and key**, and add the following code to create a text analysis client:

    ```Python
   # Create client using endpoint and key
   credential = AzureKeyCredential(ai_key)
   ai_client = TextAnalyticsClient(endpoint=ai_endpoint, credential=credential)
    ```

1. Note that the existing code reads all of the files in the **articles** folder and creates a list containing their contents. Then find the comment **Get Classifications** and add the following code:

     ```Python
   # Get Classifications
   operation = ai_client.begin_single_label_classify(
        batchedDocuments,
        project_name=project_name,
        deployment_name=deployment_name
   )

   document_results = operation.result()

   for doc, classification_result in zip(files, document_results):
        if classification_result.kind == "CustomDocumentClassification":
            classification = classification_result.classifications[0]
            print("{} was classified as '{}' with confidence score {}.".format(
                doc, classification.category, classification.confidence_score)
            )
        elif classification_result.is_error is True:
            print("{} has an error with code '{}' and message '{}'".format(
                doc, classification_result.error.code, classification_result.error.message)
            )
    ```
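
    The `begin_single_label_classify` call submits the documents as a long-running job and returns a poller; calling `result()` blocks until the job finishes. If you want to inspect the job before blocking, here's a minimal sketch (the exact status strings can vary):

    ```python
    # The begin_* call returns an azure-core poller for the long-running job
    print(operation.status())   # reports the job state, for example "running"
    operation.wait()            # optionally block until the job completes
    # operation.result() then returns the per-document results immediately
    ```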

1. Save your changes (CTRL+S), then enter the following command to run the program (you can maximize the cloud shell pane and resize the panels to see more text in the command line pane):

    ```
   python classify-text.py
    ```

1. Observe the output. The application should list a classification and confidence score for each text file.
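
    The output for each file should look similar to this (the categories and score values here are illustrative):

    ```
    test1.txt was classified as 'Entertainment' with confidence score 0.97.
    test2.txt was classified as 'Sports' with confidence score 0.95.
    ```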

## Clean up

When you don't need your project any more, you can delete it from your **Projects** page in Language Studio. You can also remove the Azure AI Language service and associated storage account in the [Azure portal](https://portal.azure.com).


================================================
FILE: Instructions/Labs/05-extract-custom-entities.md
================================================
---
lab:
    title: 'Extract custom entities  (deprecated)'
    description: "Train a model to extract customized entities from text input using Azure AI Language."
    islab: false
---

# Extract custom entities (deprecated)

> **Note**: This exercise is deprecated. Consider reviewing the QuickStart tutorial at <https://learn.microsoft.com/azure/ai-services/language-service/custom-named-entity-recognition/quickstart>.

In addition to other natural language processing capabilities, Azure AI Language Service enables you to define custom entities, and extract instances of them from text.

To test the custom entity extraction, we'll create a model and train it through Azure AI Language Studio, then use a Python application to test it.

While this exercise is based on Python, you can develop custom entity extraction applications using multiple language-specific SDKs; including:

- [Azure AI Text Analytics client library for Python](https://pypi.org/project/azure-ai-textanalytics/)
- [Azure AI Text Analytics client library for .NET](https://www.nuget.org/packages/Azure.AI.TextAnalytics)
- [Azure AI Text Analytics client library for JavaScript](https://www.npmjs.com/package/@azure/ai-text-analytics)

This exercise takes approximately **35** minutes.

## Provision an *Azure AI Language* resource

If you don't already have one in your subscription, you'll need to provision an **Azure AI Language service** resource. Additionally, to use custom named entity recognition, you need to enable the **Custom text classification & extraction** feature.

1. In a browser, open the Azure portal at `https://portal.azure.com`, and sign in with your Microsoft account.
1. Select the **Create a resource** button, search for *Language*, and create a **Language Service** resource. On the *Select additional features* page, select the custom feature that includes **Custom named entity recognition extraction**. Create the resource with the following settings:
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Select or create a resource group*
    - **Region**: *Choose from one of the following regions*\*
        - Australia East
        - Central India
        - East US
        - East US 2
        - North Europe
        - South Central US
        - Switzerland North
        - UK South
        - West Europe
        - West US 2
        - West US 3
    - **Name**: *Enter a unique name*
    - **Pricing tier**: Select **F0** (*free*), or **S** (*standard*) if F is not available.
    - **Storage account**: New storage account:
      - **Storage account name**: *Enter a unique name*.
      - **Storage account type**: Standard LRS
    - **Responsible AI notice**: Selected.

1. Select **Review + create**, then select **Create** to provision the resource.
1. Wait for deployment to complete, and then go to the deployed resource.
1. View the **Keys and Endpoint** page. You will need the information on this page later in the exercise.

## Configure role-based access for your user

> **NOTE**: If you skip this step, you'll get a 403 error when trying to connect to your custom project. It's important that your current user has this role to access storage account blob data, even if you're the owner of the storage account.

1. Go to your storage account page in the Azure portal.
2. Select **Access Control (IAM)** in the left navigation menu.
3. Select **Add** to add a role assignment, and choose the **Storage Blob Data Contributor** role on the storage account.
4. Within **Assign access to**, select **User, group, or service principal**.
5. Select **Select members**.
6. Select your User. You can search for user names in the **Select** field.

## Upload sample ads

After you've created the Azure AI Language Service and storage account, you'll need to upload example ads to train your model later.

1. In a new browser tab, download sample classified ads from `https://aka.ms/entity-extraction-ads` and extract the files to a folder of your choice.

2. In the Azure portal, navigate to the storage account you created, and select it.

3. In your storage account select **Configuration**, located below **Settings**. In the Configuration screen, enable the option to **Allow Blob anonymous access**, then select **Save**.

4. Select **Containers** from the left menu, located below **Data storage**. On the screen that appears, select **+ Container**. Give the container the name `classifieds`, and set **Anonymous access level** to **Container (anonymous read access for containers and blobs)**.

    > **NOTE**: When you configure a storage account for a real solution, be careful to assign the appropriate access level. To learn more about each access level, see the [Azure Storage documentation](https://learn.microsoft.com/azure/storage/blobs/anonymous-read-access-configure).

5. After creating the container, select it, then select the **Upload** button and upload the sample ads you downloaded.

## Create a custom named entity recognition project

Now you're ready to create a custom named entity recognition project. This project provides a working place to build, train, and deploy your model.

> **NOTE**: You can also create, build, train, and deploy your model through the REST API.

1. In a new browser tab, open the Azure AI Language Studio portal at `https://language.cognitive.azure.com/` and sign in using the Microsoft account associated with your Azure subscription.
1. If prompted to choose a Language resource, select the following settings:

    - **Azure Directory**: The Azure directory containing your subscription.
    - **Azure subscription**: Your Azure subscription.
    - **Resource type**: Language.
    - **Language resource**: The Azure AI Language resource you created previously.

    If you are <u>not</u> prompted to choose a language resource, it may be because you have multiple Language resources in your subscription; in which case:

    1. On the bar at the top of the page, select the **Settings (&#9881;)** button.
    2. On the **Settings** page, view the **Resources** tab.
    3. Select the language resource you just created, and click **Switch resource**.
    4. At the top of the page, click **Language Studio** to return to the Language Studio home page.

1. At the top of the portal, in the **Create new** menu, select **Custom named entity recognition**.

1. Create a new project with the following settings:
    - **Connect storage**: *This value is likely already filled. Change it to your storage account if it isn't already*
    - **Basic information**:
        - **Name**: `CustomEntityLab`
        - **Text primary language**: English (US)
        - **Does your dataset include documents that are not in the same language?**: *No*
        - **Description**: `Custom entities in classified ads`
    - **Container**:
        - **Blob store container**: classifieds
        - **Are your files labeled with classes?**: No, I need to label my files as part of this project

> **Tip**: If you get an error about not being authorized to perform this operation, you'll need to add a role assignment. To fix this, add the "Storage Blob Data Contributor" role on the storage account for the user running the lab. More details can be found [on the documentation page](https://learn.microsoft.com/azure/ai-services/language-service/custom-named-entity-recognition/how-to/create-project?tabs=portal%2Clanguage-studio#enable-identity-management-for-your-resource).

## Label your data

Now that your project is created, you need to label your data to train your model how to identify entities.

1. If the **Data labeling** page is not already open, in the pane on the left, select **Data labeling**. You'll see a list of the files you uploaded to your storage account.
1. On the right side, in the **Activity** pane, select **Add entity** and add a new entity named `ItemForSale`.
1. Repeat the previous step to create the following entities:
    - `Price`
    - `Location`
1. After you've created your three entities, select **Ad 1.txt** so you can read it.
1. In *Ad 1.txt*:
    1. Highlight the text *face cord of firewood* and select the **ItemForSale** entity.
    1. Highlight the text *Denver, CO* and select the **Location** entity.
    1. Highlight the text *$90* and select the **Price** entity.
1. In the **Activity** pane, note that this document will be added to the dataset for training the model.
1. Use the **Next document** button to move to the next document, and continue assigning text to appropriate entities for the entire set of documents, adding them all to the training dataset.
1. When you have labeled the last document (*Ad 9.txt*), save the labels.

## Train your model

After you've labeled your data, you need to train your model.

1. Select **Training jobs** in the pane on the left.
2. Select **Start a training job**.
3. Train a new model named `ExtractAds`.
4. Choose **Automatically split the testing set from training data**.

    > **TIP**: In your own extraction projects, use the testing split that best suits your data. For more consistent data and larger datasets, the Azure AI Language Service will automatically split the testing set by percentage. With smaller datasets, it's important to train with the right variety of possible input documents.

5. Click **Train**.

    > **IMPORTANT**: Training your model can sometimes take several minutes. You'll get a notification when it's complete.

## Evaluate your model

In real-world applications, it's important to evaluate and improve your model to verify that it's performing as you expect. Two pages on the left show the details of your trained model and any tests that failed.

Select **Model performance** on the left side menu, and select your `ExtractAds` model. There you can see the scoring of your model, performance metrics, and when it was trained. You'll be able to see if any testing documents failed, and these failures help you understand where to improve.
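
Under the hood, extraction quality is typically measured at the span level: a predicted entity counts as correct only when both its text span and its category match a label. A minimal illustrative sketch (not the service's exact implementation; the spans are made up):

```python
# Illustrative span-level scoring: entities as (start, end, category) tuples
labeled   = {(0, 21, "ItemForSale"), (30, 33, "Price"), (40, 50, "Location")}
predicted = {(0, 21, "ItemForSale"), (30, 33, "Price")}

tp = len(labeled & predicted)   # exact span-and-category matches
precision = tp / len(predicted)
recall = tp / len(labeled)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```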

## Deploy your model

When you're satisfied with the training of your model, it's time to deploy it, which allows you to start extracting entities through the API.

1. In the left pane, select **Deploying a model**.
2. Select **Add deployment**, then enter the name `AdEntities` and select the **ExtractAds** model.
3. Click **Deploy** to deploy your model.

## Prepare to develop an app in Cloud Shell

To test the custom entity extraction capabilities of the Azure AI Language service, you'll develop a simple console application in the Azure Cloud Shell.

1. In the Azure Portal, use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.

    > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***.

1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor).

    **<font color="red">Ensure you've switched to the classic version of the cloud shell before continuing.</font>**

1. In the PowerShell pane, enter the following commands to clone the GitHub repo for this exercise:

    ```
   rm -r mslearn-ai-language -f
   git clone https://github.com/microsoftlearning/mslearn-ai-language
    ```

    > **Tip**: As you paste commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the `cls` command to make it easier to focus on each task.

1. After the repo has been cloned, navigate to the folder containing the application code files:  

    ```
    cd mslearn-ai-language/Labfiles/05-custom-entity-recognition/Python/custom-entities
    ```

## Configure your application

1. In the command line pane, run the following command to view the code files in the **custom-entities** folder:

    ```
   ls -a -l
    ```

    The files include a configuration file (**.env**) and a code file (**custom-entities.py**). The text your application will analyze is in the **ads** subfolder.

1. Create a Python virtual environment and install the Azure AI Language Text Analytics SDK package and other required packages by running the following command:

    ```
   python -m venv labenv;
    ./labenv/bin/Activate.ps1;
    pip install -r requirements.txt azure-ai-textanalytics==5.3.0
    ```

1. Enter the following command to edit the application configuration file:

    ```
   code .env
    ```

    The file is opened in a code editor.

1. Update the configuration values to include the **endpoint** and a **key** from the Azure AI Language resource you created (available on the **Keys and Endpoint** page for your Azure AI Language resource in the Azure portal). The file should already contain the project and deployment names for your custom entity extraction model.
1. After you've replaced the placeholders, within the code editor, use the **CTRL+S** command or **Right-click > Save** to save your changes and then use the **CTRL+Q** command or **Right-click > Quit** to close the code editor while keeping the cloud shell command line open.

## Add code to extract entities

1. Enter the following command to edit the application code file:

    ```
    code custom-entities.py
    ```

1. Review the existing code. You will add code to work with the AI Language Text Analytics SDK.

    > **Tip**: As you add code to the code file, be sure to maintain the correct indentation.

1. At the top of the code file, under the existing namespace references, find the comment **Import namespaces** and add the following code to import the namespaces you will need to use the Text Analytics SDK:

    ```python
   # import namespaces
   from azure.core.credentials import AzureKeyCredential
   from azure.ai.textanalytics import TextAnalyticsClient
    ```

1. In the **main** function, note that code to load the Azure AI Language service endpoint and key and the project and deployment names from the configuration file has already been provided. Then find the comment **Create client using endpoint and key**, and add the following code to create a text analytics client:

    ```Python
   # Create client using endpoint and key
   credential = AzureKeyCredential(ai_key)
   ai_client = TextAnalyticsClient(endpoint=ai_endpoint, credential=credential)
    ```

1. Note that the existing code reads all of the files in the **ads** folder and creates a list containing their contents. Then find the comment **Extract entities** and add the following code:

    ```Python
   # Extract entities
   operation = ai_client.begin_recognize_custom_entities(
        batchedDocuments,
        project_name=project_name,
        deployment_name=deployment_name
   )

   document_results = operation.result()

   for doc, custom_entities_result in zip(files, document_results):
        print(doc)
        if custom_entities_result.kind == "CustomEntityRecognition":
            for entity in custom_entities_result.entities:
                print(
                    "\tEntity '{}' has category '{}' with confidence score of '{}'".format(
                        entity.text, entity.category, entity.confidence_score
                    )
                )
        elif custom_entities_result.is_error is True:
            print("\tError with code '{}' and message '{}'".format(
                custom_entities_result.error.code, custom_entities_result.error.message
                )
            )
    ```

1. Save your changes (CTRL+S), then enter the following command to run the program (you can maximize the cloud shell pane and resize the panels to see more text in the command line pane):

    ```
   python custom-entities.py
    ```

1. Observe the output. The application should list details of the entities found in each text file.
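
    For each ad, the output should include lines similar to this (the confidence scores here are illustrative):

    ```
    Ad 1.txt
        Entity 'face cord of firewood' has category 'ItemForSale' with confidence score of '0.99'
        Entity 'Denver, CO' has category 'Location' with confidence score of '0.98'
        Entity '$90' has category 'Price' with confidence score of '0.99'
    ```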

## Clean up

When you don't need your project anymore, you can delete it from your **Projects** page in Language Studio. You can also remove the Azure AI Language service and associated storage account in the [Azure portal](https://portal.azure.com).


================================================
FILE: Instructions/Labs/06-translate-text.md
================================================
---
lab:
    title: 'Translate Text  (deprecated)'
    description: "Translate provided text between any supported languages with Azure AI Translator."
    islab: false
---

# Translate Text (deprecated)

> **Note**: This exercise is deprecated. Consider completing the replacement exercise at <https://go.microsoft.com/fwlink/?linkid=2356176>.

**Azure AI Translator** is a service that enables you to translate text between languages. In this exercise, you'll use it to create a simple app that translates input in any supported language to the target language of your choice.

While this exercise is based on Python, you can develop text translation applications using multiple language-specific SDKs; including:

- [Azure AI Translation client library for Python](https://pypi.org/project/azure-ai-translation-text/)
- [Azure AI Translation client library for .NET](https://www.nuget.org/packages/Azure.AI.Translation.Text)
- [Azure AI Translation client library for JavaScript](https://www.npmjs.com/package/@azure-rest/ai-translation-text)

This exercise takes approximately **30** minutes.

## Provision an *Azure AI Translator* resource

If you don't already have one in your subscription, you'll need to provision an **Azure AI Translator** resource.

1. Open the Azure portal at `https://portal.azure.com`, and sign in using the Microsoft account associated with your Azure subscription.
1. In the search field at the top, search for **Translators**, then select **Translators** in the results.
1. Create a resource with the following settings:
    - **Subscription**: *Your Azure subscription*
    - **Resource group**: *Choose or create a resource group*
    - **Region**: *Choose any available region*
    - **Name**: *Enter a unique name*
    - **Pricing tier**: Select **F0** (*free*), or **S** (*standard*) if F is not available.
1. Select **Review + create**, then select **Create** to provision the resource.
1. Wait for deployment to complete, and then go to the deployed resource.
1. View the **Keys and Endpoint** page. You will need the information on this page later in the exercise.

## Prepare to develop an app in Cloud Shell

To test the text translation capabilities of Azure AI Translator, you'll develop a simple console application in the Azure Cloud Shell.

1. In the Azure Portal, use the **[\>_]** button to the right of the search bar at the top of the page to create a new Cloud Shell in the Azure portal, selecting a ***PowerShell*** environment. The cloud shell provides a command line interface in a pane at the bottom of the Azure portal.

    > **Note**: If you have previously created a cloud shell that uses a *Bash* environment, switch it to ***PowerShell***.

1. In the cloud shell toolbar, in the **Settings** menu, select **Go to Classic version** (this is required to use the code editor).

    **<font color="red">Ensure you've switched to the classic version of the cloud shell before continuing.</font>**

1. In the PowerShell pane, enter the following commands to clone the GitHub repo for this exercise:

    ```
   rm -r mslearn-ai-language -f
   git clone https://github.com/microsoftlearning/mslearn-ai-language
    ```

    > **Tip**: As you enter commands into the cloud shell, the output may take up a large amount of the screen buffer. You can clear the screen by entering the `cls` command to make it easier to focus on each task.

1. After the repo has been cloned, navigate to the folder containing the application code files:  

    ```
   cd mslearn-ai-language/Labfiles/06-translator-sdk/Python/translate-text
    ```

## Configure your application

1. In the command line pane, run the following command to view the code files in the **translate-text** folder:

    ```
   ls -a -l
    ```

    The files include a configuration file (**.env**) and a code file (**translate.py**).

1. Create a Python virtual environment and install the Azure AI Translation SDK package and other required packages by running the following command:

    ```
   python -m venv labenv;
   ./labenv/bin/Activate.ps1;
   pip install -r requirements.txt azure-ai-translation-text==1.0.1
    ```

1. Enter the following command to edit the application configuration file:

    ```
   code .env
    ```

    The file is opened in a code editor.

1. Update the configuration values to include the **region** and a **key** from the Azure AI Translator resource you created (available on the **Keys and Endpoint** page for your Azure AI Translator resource in the Azure portal).

> **NOTE**: Be sure to add the *region* for your resource, <u>not</u> the endpoint!
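
To see why the region matters, here's a minimal sketch of the underlying Translator REST call (the exercise uses the SDK instead; the endpoint shown is the standard global Translator endpoint, and the key and region values are placeholders):

```python
# Minimal sketch of the Translator v3 REST API; the region header is required
# when calling the global endpoint with a regional resource key.
import json, urllib.request

key = "<YOUR-KEY>"        # from the Keys and Endpoint page
region = "<YOUR-REGION>"  # the resource's region, not its endpoint

url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=fr"
body = json.dumps([{"Text": "Hello, how are you?"}]).encode("utf-8")
request = urllib.request.Request(url, data=body, headers={
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
})
with urllib.request.urlopen(request) as response:
    print(json.load(response))  # a list with 'translations' for each input item
```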
  },
  {
    "path": "Labfiles/06-voice-live/Python/chat-client/chat-client.py",
    "chars": 9555,
    "preview": "import os\nimport asyncio\nimport base64\nimport queue\nfrom dotenv import load_dotenv\nimport pyaudio\n\n# import namespaces\n\n"
  },
  {
    "path": "Labfiles/06-voice-live/Python/chat-client/requirements.txt",
    "chars": 22,
    "preview": "dotenv\naiohttp\npyaudio"
  },
  {
    "path": "Labfiles/07-speech/Python/readme.txt",
    "chars": 32,
    "preview": "This folder contains Python code"
  },
  {
    "path": "Labfiles/07-speech/Python/speaking-clock/requirements.txt",
    "chars": 24,
    "preview": "python-dotenv\nazure.core"
  },
  {
    "path": "Labfiles/07-speech/Python/speaking-clock/speaking-clock.py",
    "chars": 1019,
    "preview": "from dotenv import load_dotenv\nfrom datetime import datetime\nimport os\n\n# Import namespaces\n\n\ndef main():\n\n    # Clear t"
  },
  {
    "path": "Labfiles/07-translation/Python/readme.txt",
    "chars": 32,
    "preview": "This folder contains Python code"
  },
  {
    "path": "Labfiles/07-translation/Python/translators/requirements.txt",
    "chars": 100,
    "preview": "python-dotenv\nazure-identity\nazure-ai-translation-text==1.0.1\nazure-cognitiveservices-speech==1.48.2"
  },
  {
    "path": "Labfiles/07-translation/Python/translators/translate-speech.py",
    "chars": 593,
    "preview": "import os\nfrom dotenv import load_dotenv\n\n\n# Import namespaces\n\n\n\ndef main():\n    try:\n\n        # Clear the console \n   "
  },
  {
    "path": "Labfiles/07-translation/Python/translators/translate-text.py",
    "chars": 505,
    "preview": "from dotenv import load_dotenv\nimport os\n\n# import namespaces\n\n\n\ndef main():\n    try:\n        # Clear the console \n     "
  },
  {
    "path": "Labfiles/08-speech-translation/Python/readme.txt",
    "chars": 32,
    "preview": "This folder contains Python code"
  },
  {
    "path": "Labfiles/08-speech-translation/Python/translator/requirements.txt",
    "chars": 24,
    "preview": "python-dotenv\nazure.core"
  },
  {
    "path": "Labfiles/08-speech-translation/Python/translator/translator.py",
    "chars": 1006,
    "preview": "from dotenv import load_dotenv\nfrom datetime import datetime\nimport os\n\n# Import namespaces\n\n\ndef main():\n    try:\n     "
  },
  {
    "path": "Labfiles/09-audio-chat/Python/audio-chat.py",
    "chars": 1165,
    "preview": "import os\nimport requests\nimport base64\nfrom dotenv import load_dotenv\n\n# Add references\n\n\ndef main(): \n\n    # Clear the"
  },
  {
    "path": "Labfiles/09-audio-chat/Python/requirements.txt",
    "chars": 13,
    "preview": "python-dotenv"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/.dockerignore",
    "chars": 599,
    "preview": "# Python\n__pycache__/\n*.py[cod]\n*$py.class\n*.so\n.Python\n\n# Virtual environments\n.venv/\nvenv/\nENV/\nenv/\n\n# UV cache\n.uv/\n"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/.gitignore",
    "chars": 152,
    "preview": "# Python-generated files\n__pycache__/\n*.py[oc]\nbuild/\ndist/\nwheels/\n*.egg-info\n.env\n\n# Virtual environments\n.venv\n#azdep"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/.python-version",
    "chars": 5,
    "preview": "3.10\n"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/Dockerfile",
    "chars": 1052,
    "preview": "FROM python:3.11-slim\n\n# Keep Python output unbuffered and avoid writing .pyc files\nENV PYTHONDONTWRITEBYTECODE=1\nENV PY"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/README.md",
    "chars": 3991,
    "preview": "# Requirements\n\n## Run in Cloud Shell\n\n* Azure subscription with OpenAI access\n* If running in the Azure Cloud Shell, ch"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/azdeploy.sh",
    "chars": 13847,
    "preview": "#!/usr/bin/env bash\n\n# Script to deploy the Flask app to Azure App Service using a container from ACR\n# and provision AI"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/azure.yaml",
    "chars": 1027,
    "preview": "# Student template: GPT Realtime model resources for AI Foundry\n# This template ONLY provisions AI resources - it does N"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/infra/ai-foundry.bicep",
    "chars": 2161,
    "preview": "@description('Primary location for all resources')\nparam location string\n\n@description('Name of the environment used to "
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/infra/main.bicep",
    "chars": 1722,
    "preview": "targetScope = 'subscription'\n\n@minLength(1)\n@maxLength(64)\n@description('Name of the environment used to derive resource"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/infra/main.parameters.json",
    "chars": 501,
    "preview": "{\n  \"$schema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#\",\n  \"contentVersion\": "
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/pyproject.toml",
    "chars": 800,
    "preview": "[project]\nname = \"real-time-voice\"\nversion = \"0.1.0\"\ndescription = \"Add your description here\"\nreadme = \"README.md\"\nauth"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/requirements.txt",
    "chars": 2231,
    "preview": "# This file was autogenerated by uv via the following command:\n#    uv pip compile pyproject.toml -o requirements.txt\nai"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/src/__init__.py",
    "chars": 404,
    "preview": "\"\"\"Real-Time Voice package root.\n\nReintroduced to allow hatchling to detect the package under src/ for building wheels/e"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/src/flask_app.py",
    "chars": 22579,
    "preview": "from __future__ import annotations\n\nfrom pathlib import Path\nimport threading\nimport asyncio\nimport time\nimport logging\n"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/src/static/app.js",
    "chars": 13218,
    "preview": "// =============================\n// UI ELEMENTS\n// =============================\nconst startBtn = document.getElementByI"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/src/static/style.css",
    "chars": 2103,
    "preview": ":root { color-scheme: light dark; }\n/* Color tokens for adaptive theming */\n:root {\n  --c-bg: #ffffff;\n  --c-fg: #1a1a1a"
  },
  {
    "path": "Labfiles/11-voice-live-agent/python/src/templates/index.html",
    "chars": 2166,
    "preview": "<!doctype html>\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\" />\n  <title>Real Time Voice</title>\n  <meta name=\"viewpo"
  },
  {
    "path": "README.md",
    "chars": 79,
    "preview": "# Microsoft Learning Azure AI Language\nLab files for Azure AI Language modules\n"
  },
  {
    "path": "_build.yml",
    "chars": 1258,
    "preview": "name: '$(Date:yyyyMMdd)$(Rev:.rr)'\njobs:\n  - job: build_markdown_content\n    displayName: 'Build Markdown Content'\n    w"
  },
  {
    "path": "_config.yml",
    "chars": 413,
    "preview": "remote_theme: MicrosoftLearning/Jekyll-Theme\nexclude:\n  - readme.md\n  - .github/\nheader_pages:\n  - index.html\nauthor: Mi"
  },
  {
    "path": "downloads/python/readme.md",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "index.md",
    "chars": 1059,
    "preview": "---\ntitle: Develop AI Language and Speech solutions on Azure\npermalink: index.html\nlayout: home\n---\n\nThis page lists exe"
  }
]

About this extraction

This page contains the full source code of the MicrosoftLearning/mslearn-ai-language GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 103 files (382.2 KB, approximately 91.3k tokens) and includes a symbol index of 87 extracted functions, classes, methods, constants, and types. Each entry in the manifest above records a file's path, its size in characters, and a short preview of its contents. The output can be used with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.
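If you want to work with the manifest programmatically rather than pasting the whole dump into a chat window, the JSON array above is easy to post-process. The following is a minimal sketch, assuming the array has been saved to a local file named manifest.json (a hypothetical filename; GitExtract itself ships a single .txt dump, so you may need to copy the array out first). It loads the entries, totals their sizes, lists the largest files, and filters one lab's files by path prefix.

import json

# Load the manifest array shown above (assumes it was saved as manifest.json;
# the filename is hypothetical -- GitExtract's download is a single .txt file).
with open("manifest.json", encoding="utf-8") as f:
    entries = json.load(f)

# Each entry has three fields: "path", "chars", and "preview".
total_chars = sum(e["chars"] for e in entries)
print(f"{len(entries)} files, {total_chars} characters in total")

# List the ten largest files -- useful when deciding what to trim
# before pasting into a context-limited model.
for e in sorted(entries, key=lambda e: e["chars"], reverse=True)[:10]:
    print(f'{e["chars"]:>6}  {e["path"]}')

# Filter to a single lab's files by path prefix.
lab11 = [e["path"] for e in entries
         if e["path"].startswith("Labfiles/11-voice-live-agent/")]
print("\n".join(lab11))

Sorting by the chars field is a reasonable proxy for token count here, since the manifest does not record per-file token totals.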

Extracted by GitExtract, a free GitHub repo-to-text converter for AI, built by Nikandr Surkov.