Designing and Implementing an Azure AI Solution

Please follow the steps to complete your e-attendance

  • Please Open this link 
  • Copy and Paste the Room Name: AI100
  • Click to Join
  • Enter your Name
  • Course Pre-requisite
    • It helps if you have good knowledge of Azure services (AZ-900 level)
  • Get your e-books
    • Go to Skillpipe
    • Create an account 
    • Redeem your License Code (Click for Code) and get your e-copy of the AI-100 Student Book
  • Sign up with the Azure Portal (do not sign up with your organization email)
  • Sign up with LabonDemand
    • Go to
    • First time users:

        1. Click the Register with Training Key button
        2. Input your training key in the Register with a Training Key field
        3. Click Register
        4. This opens a registration page to create a user account. Input your registration information, and then click Save
        5. Saving your user registration opens your enrollment
    • Please launch a test lab to check for connectivity issues, and read through the instructions in the lab interface to become familiar with using it. Once class starts, Launch buttons will appear beside the lab modules.


This section gives a brief overview of most of Microsoft’s AI and ML service offerings. The provided resources should help you take your first steps in each area and can serve as an initial learning path. 

Cognitive Services

What is it?

  • Pre-built machine learning models, published as APIs. Infuse your apps, websites, and bots with intelligent algorithms to see, hear, speak, understand, and interpret your users’ needs through natural methods of communication. Cognitive Services provide the capabilities described in the sections below.


Who is it for?

  • Developers, Data Scientists, Machine Learning experts

Learning Resources

Vision Services

Vision makes it possible for apps and services to accurately identify and analyze content within images and videos.

Learning Resources

Speech Services

Discover how Speech enables the integration of speech processing capabilities into any app or service. Convert spoken language into text or produce natural sounding speech from text using standard (or customizable) voice fonts.

Learning Resources

Language Understanding Services

Ensure apps and services can understand the meaning of unstructured text or recognize the intent behind a speaker’s utterances. Build Language-enabled apps and services.

Learning Resources

Knowledge Services

Leverage or create rich knowledge resources that can be integrated into apps and services with Knowledge services. (This is QnA Maker, see below)

Learning Resources

Search Services

Enable apps and services to harness the power of a web-scale, ad-free search engine with Search. Use search services to find exactly what you’re looking for across billions of web pages, images, videos, and news search results.

Learning Resources

Conversational AI

A set of services for building text or voice-enabled chatbots on Azure.

Bot Service & Bot Framework

What is it?

  • Build, connect, deploy, and manage intelligent bots to interact naturally with your users on websites, apps, Cortana, Microsoft Teams, Skype, Slack, Facebook Messenger, and more. Go beyond a great conversationalist to a bot that can recognize a user in photos, moderate content, make smart recommendations, translate language and more. Cognitive Services enable your bot to see, hear, and interpret in more human ways.

Who is it for?

  • Developers

Learning Resources

QnA Maker

What is it?

  • QnA Maker is a cloud-based API service that creates a conversational question-and-answer layer over your data. QnA Maker enables you to create a knowledge base from your semi-structured content such as Frequently Asked Question (FAQ) URLs, product manuals, support documents, and custom questions and answers. The QnA Maker service answers your users’ natural language questions by matching them with the best possible answer from the QnAs in your knowledge base.

Who is it for?

  • Developers

Learning Resources

Knowledge Mining

A set of services for indexing structured and unstructured documents, including retrieving latent information from documents.

Azure Search / Cognitive Search

What is it?

  • AI-powered cloud search service for web and mobile app development. Easily add sophisticated cloud search capabilities to your website or application using the same integrated Microsoft natural language stack that’s used in Bing and Office and that’s been improved over 16 years. Quickly tune search results and construct rich, fine-tuned ranking models to tie search results to business goals. Reliable throughput and storage give you fast search indexing and querying to support time-sensitive search scenarios. With the new Cognitive Search feature, you can use artificial intelligence to extract insights and structured information from your documents. Create pipelines that use cognitive skills to enrich and bring structure to your data before it gets indexed. You can select from a variety of pre-built cognitive skills and also extend their power by creating your own custom skills.

Who is it for?

  • Developers, Data Scientists

Learning Resources

Azure Machine Learning

A set of services for training, testing and deploying your own Machine Learning models.

Machine Learning Services

What is it?

  • Simplify and accelerate the building, training, and deployment of your machine learning models. Use automated machine learning to identify suitable algorithms and tune hyperparameters faster. Improve productivity and reduce costs with autoscaling compute and DevOps for machine learning. Seamlessly deploy to the cloud and the edge with one click. Access all these capabilities from your favorite Python environment using the latest open-source frameworks, such as PyTorch, TensorFlow, and scikit-learn.

Who is it for?

  • Data Scientists, Machine Learning experts (code-first, Python-focused)

Learning Resources

Machine Learning Studio

What is it?

  • A fully-managed cloud service that enables you to easily build, deploy, and share predictive analytics solutions. Machine Learning Studio is a powerfully simple browser-based, visual drag-and-drop authoring environment where no coding is necessary. Go from idea to deployment in a matter of clicks.

Who is it for?

  • Data Scientists, Machine Learning experts, Developers (Low/No-Code)

Learning Resources

Azure Databricks

What is it?

  • Accelerate big data analytics and artificial intelligence (AI) solutions with Azure Databricks, a fast, easy and collaborative Apache Spark–based analytics service.
  • Set up your Spark environment in minutes and autoscale quickly and easily. Data scientists, data engineers, and business analysts can collaborate on shared projects in an interactive workspace. Apply your existing skills with support for Python, Scala, R, and SQL, as well as deep learning frameworks and libraries like TensorFlow, PyTorch, and scikit-learn. Native integration with Azure Active Directory (Azure AD) and other Azure services enables you to build your modern data warehouse and machine learning and real-time analytics solutions.

Who is it for?

  • Apache Spark users, Data Scientists, Machine Learning experts

Learning Resources

You can find all Lab Files and Instructions here

Download as ZIP.

Synopsis of Lab:

Lab 1: Meeting the Technical Requirements

In this lab, we will introduce our workshop case study and set up tools on your local workstation and in your Azure instance to enable you to build solutions with the Microsoft Cognitive Services suite.

Lab 2: Implement Computer Vision Capabilities for a Bot

This hands-on lab guides you through creating an intelligent console application from end-to-end using Cognitive Services (specifically the Computer Vision API). We use the ImageProcessing portable class library (PCL), discussing its contents and how to use it in your own applications.

Lab 3: Basic Filtering Bot

In this lab, we will be setting up an intelligent bot from end-to-end that can respond to a user’s chat window text prompt. We will be building on what we have already learned about building bots within Azure, but adding in a layer of custom logic to give our bot more bespoke functionality.

This bot will be built with the Microsoft Bot Framework. We will cover the architecture that enables the bot interface to receive and respond with textual messages, and we will build logic that enables our bot to respond to inquiries containing specific text.

We will also be testing our bot in the Bot Emulator, and addressing the middleware that enables us to perform specialized tasks on the message that the bot receives from the user.

We will touch on some concepts pertaining to Azure Cognitive Search and Microsoft’s Language Understanding Intelligent Service (LUIS), but will not implement them in this lab.

Lab 4: Log Bot Chat

In the previous lab, we started with an echo bot project and modified the code to suit our needs. Now, we wish to log chats with our bots to enable our customer service team to follow up on inquiries, determine if the bot is performing in the expected manner, and analyze customer data.

This hands-on lab guides you through enabling various logging scenarios for your bot solutions.

In the advanced analytics space, there are plenty of uses for storing log conversations. Having a corpus of chat conversations can allow developers to:

  1. Build question and answer engines specific to a domain.
  2. Determine if a bot is responding in the expected manner.
  3. Perform analysis on specific topics or products to identify trends.

In the course of the following labs, we’ll walk through how we can enable chat logging and intercept messages. We will touch on some of the various ways we might also store the data, although data solutions are not within the scope of this workshop.
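The intercept-and-store flow described above can be sketched in plain Python. This is illustrative only: the lab uses the Bot Framework’s middleware and transcript-logging classes, and every name below (ChatLogStore, LoggingMiddleware) is hypothetical.

```python
class ChatLogStore:
    """In-memory transcript store; the lab would swap in durable storage."""
    def __init__(self):
        self.messages = []

    def log(self, user, text):
        self.messages.append({"user": user, "text": text})

    def by_user(self, user):
        return [m["text"] for m in self.messages if m["user"] == user]


class LoggingMiddleware:
    """Intercepts each incoming message before the bot logic sees it."""
    def __init__(self, store):
        self.store = store

    def on_turn(self, user, text, bot_logic):
        self.store.log(user, text)    # record the user's inquiry
        reply = bot_logic(text)       # hand off to the bot's own logic
        self.store.log("bot", reply)  # record the bot's response too
        return reply


store = ChatLogStore()
mw = LoggingMiddleware(store)
reply = mw.on_turn("alice", "Where is my order?", lambda t: "Let me check on that.")
```

With the full transcript captured, the customer service team can query `store.by_user(...)` per customer, and analysts have a corpus for the three uses listed above.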

Lab 5: QnA Maker

In this lab you will use the Microsoft QnA Maker application to create a knowledge base, publish it, and then consume it in your bot.

Lab 6: Implement the LUIS model

We’re going to build an end-to-end scenario that allows you to pull in your own pictures, use Cognitive Services to find objects and people in the images, and obtain a description and tags. We’ll later build a Bot Framework bot using LUIS to allow easy, targeted querying.

Lab 7: Integrate LUIS into a bot with Dialogues

Now that our bot is capable of taking in a user’s input and responding based on the user’s input, we will give our bot the ability to understand natural language with the LUIS model we built in Lab 6.

Lab 8: Detect User Language

In this lab we will add the ability for your bot to detect languages from user input.

If you have trained your bot or integrated it with QnA Maker using only one particular language, then it makes sense to inform users of that fact.
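The "inform users of that fact" step can be sketched as a small helper. The detection itself would come from the Text Analytics Detect Language endpoint; here the language code is assumed to be already in hand, and the helper, the `SUPPORTED` set, and the English-only assumption are all hypothetical.

```python
# Assumption for this sketch: the bot's QnA content was authored in English only.
SUPPORTED = {"en"}

def language_notice(detected_code):
    """Return a notice string if the detected language is unsupported, else None."""
    if detected_code in SUPPORTED:
        return None  # the bot can proceed normally
    return ("Sorry, I currently understand English only. "
            f"I detected your message language as '{detected_code}'.")
```

The bot would call this once per incoming message, sending the notice (when present) before or instead of its normal reply.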

Lab 9: Test Bots in DirectLine

This hands-on lab guides you through some of the basics of testing bots. This workshop demonstrates how you can perform functional testing (using Direct Line).

To do the Labs in Python

Follow [this] link

Walkthrough – Creating a Speech to Text Service

  1. Let’s create a speech translation subscription using the Azure portal. Sign into the Azure portal.
  2. Click or Select + Create a resource, type in “Speech” (without quotation marks) in the “Search the Marketplace” entry and press Enter.
  3. Once search results are returned, select Speech from the “Results” panel, then, in the subsequent panel, Click or Select Create.
  4. Enter a unique name for your service, such as “SpeechDemo1” or other relevant name.
  5. Select your subscription.
  6. Choose a location that is closest to you.
  7. Select a Pricing tier (you can use F0 for this option, or the lowest cost available in your region).
  8. Create a new Resource Group named mslearn-speechapi to hold your resources.
  9. Click or Select Create to create the service.

After a short delay, your new service will be provisioned and available, and new API keys will be generated for programmatic use.
TIP: If you miss the notification that your resource is published, you can simply Click or Select the notification icon in the top bar of the portal and select Go To Resource.

With a Speech Translation subscription created, you’re now able to access your API endpoint and subscription keys. To access your Speech Translation subscription, you’ll need to get two pieces of information from the Azure portal:

1. A Subscription Key that is passed with every request to authenticate the call.

2. The Endpoint that exposes your service on the network.

View the Subscription Keys

  1. Click or Select Resource groups in the left sidebar of the portal, and then Click or Select the resource group created for this service.
  2. Select your service that you just created.
  3. Select Keys under the “Resource Management” group to view your new API keys.
  4. Copy the value of KEY 1 or KEY 2 to the clipboard for use in an application.

View the endpoint

  1. Select Overview from the menu group, locate the “Endpoint” label, and make note of the Endpoint
    value. This value will be the URL used when generating temporary tokens.

Note: KEY 1 and the Endpoint are also available on the Quick Start page under the Resource Management group.
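As a sketch of how these two values are used from code, the snippet below builds (but does not send) the POST request that exchanges a subscription key for a short-lived access token. The region `westus`, the key placeholder, and the helper name are assumptions for illustration; the `/sts/v1.0/issueToken` path is the commonly documented token-issuance path, so verify it against your own resource’s Keys page.

```python
def build_token_request(region, subscription_key):
    """Assemble the token-issuance request for a Speech resource (not sent)."""
    url = f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,  # KEY 1 or KEY 2
        "Content-Length": "0",                          # POST with empty body
    }
    return {"method": "POST", "url": url, "headers": headers}

# Placeholders only -- substitute your own region and KEY 1 value.
req = build_token_request("westus", "<your-key-1>")
```

Sending this request with any HTTP client returns a token that can then be used (instead of the raw key) when calling the speech endpoints.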

Walkthrough – Call the Text Analytics API from the Online Testing Console

Create an access key

Every call to the Text Analytics API requires a subscription key. Often called an access key, it is used to
validate that you have access to make the call. We’ll use the Azure portal to grab a key.

  1. Sign into the Azure portal.
  2. Click or Select Create a resource.
  3. In the Search the Marketplace search box, type in text analytics and hit return.
  4. Select Text Analytics in the search results and then select the Create button.
  5. In the Create page that opens, enter the following values into each field:
    • Name: MyTextAnalyticsAPIAccount – The name of the Cognitive Services account. We recommend using a descriptive name. Valid characters are a-z, 0-9, and -.
    • Subscription: Choose your subscription – The subscription under which this new Cognitive Services API account with the Text Analytics API is created.
    • Location: Choose a region from the dropdown.
    • Pricing tier: F0 – The cost of your Cognitive Services account depends on actual usage and the options you choose. We recommend selecting the F0 tier for our purposes here.
    • Resource group: Select Use existing and choose an existing resource group, or create a new one if necessary.
  6. Select Create at the bottom of the page to start the account creation process.
  7. Watch for a notification that the deployment is in progress. You’ll then get a notification that the
    account has been deployed successfully to your resource group.

Get the access key

Now that we have our Cognitive Services account, let’s find the access key so we can start calling the API.


  1. Click or Select on the Go to resource button on the Deployment succeeded notification. This action
    opens the account Quickstart.
  2. Select the Keys menu item from the menu on the left, or in the Grab your keys section of the quickstart. This action opens the Manage keys page.
  3. Copy one of the keys using the copy button.
    Important: Always keep your access keys safe and never share them.
  4. Store this key for the rest of this walkthrough. We’ll use it shortly to make API calls from the testing
    console and throughout the rest of the module.

Call the API from the testing console

Now that we have our key, we can head over to the testing console and take the API for a spin.

  1. Navigate to the following URL in your favorite browser. Replace [location] with the location you
    selected when creating the Text Analytics cognitive services account earlier in this walkthrough. For
    example, if you created the account in eastus, you’d replace [location] with eastus in the URL.


    The landing page displays a menu on the left and content to the right. The menu lists the POST methods you can call on the Text Analytics API. These endpoints are Detect Language, Entities, Key Phrases, and Sentiment. To call one of these operations, we need to do a few things:
    • Select the method we want to call.
    • Add the access key that we saved earlier in the lesson to each call.


  2. From the left menu, select Sentiment. This selection opens the Sentiment documentation to the right.
    As the documentation shows, we’ll be making a REST call in the following format.
    [location] is replaced with the location that you selected when you created the Text Analytics account.
    We’ll pass in our subscription key, or access key, in the Ocp-Apim-Subscription-Key header.

Make some API calls

  1. Select the appropriate location button (it should be the same location where you created the service) to open the live, interactive API console.
  2. Paste the access key you saved earlier into the field labeled Ocp-Apim-Subscription-Key. Notice, in the
    HTTP request panel, that the key will be written automatically into the HTTP request window as a
    header value, represented by dots rather than displaying the key’s value.
  3. Scroll to the bottom of the page and Click or Select Send.
    Let’s examine the sections of this result panel in more detail.
    In the Headers section of the user interface, we set the access, or subscription, key in the header of
    our request.

    Next, we have the request body section, which holds a documents array in JSON format. Each document in the array has three properties. The properties are “language”, “id”, and “text”. The “id” is a
    number in this example, but it can be anything you want as long as it’s unique in the documents array.
    In this example, we’re also passing in documents written in three different languages. Over 15 languages are supported in the Sentiment feature of the Text Analytics API. For more information, check
    out Supported languages in the Text Analytics API. The maximum size of a single document is 5,000 characters, and one request can have up to 1,000 documents.

    The complete request, including the headers and the request URL, is displayed in the next section.

    The last portion of the page shows the information about the response. The response holds the
    insight the Text Analytics API had about our documents. An array of documents is returned to us,
    without the original text. We get back an “id” and “score” for each document. The API returns a
    numeric score between 0 and 1. Scores close to 1 indicate positive sentiment, while scores close to 0
    indicate negative sentiment. A score of 0.5 indicates the lack of sentiment, a neutral statement. In this
    example, we have two pretty positive documents and one negative document.
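The request and response shapes described above can be mirrored in a short offline sketch: build the documents array with `language`, `id`, and `text` properties, and interpret a returned score using the thresholds just given. No call is actually made here, and the helper names are assumptions for illustration.

```python
def make_documents(texts_by_lang):
    """Build the request body: each document needs a unique id, language, text."""
    docs = []
    for i, (lang, text) in enumerate(texts_by_lang, start=1):
        docs.append({"id": str(i), "language": lang, "text": text})
    return {"documents": docs}

def interpret(score):
    """Scores near 1 are positive, near 0 negative, 0.5 neutral."""
    if score > 0.5:
        return "positive"
    if score < 0.5:
        return "negative"
    return "neutral"

# Three documents in three different languages, as in the console example.
body = make_documents([
    ("en", "Great service!"),
    ("es", "Muy mal."),
    ("fr", "Formidable !"),
])
```

Posting `body` to the Sentiment endpoint (with the Ocp-Apim-Subscription-Key header set) would return a documents array of `id`/`score` pairs that `interpret` can then label.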

Bot Framework V4

The Microsoft Bot Framework (V4) is an open-source SDK available in Node.js, C#, Python, and Java.

You can use the Microsoft Bot Framework to create a single code base to deploy with Azure Bot Service, which allows you to surface your bot on many channels (Skype, Cortana, Facebook, etc.). The key concepts to know for these labs include:

  • Adapter: The bot orchestrator, which routes incoming and outgoing communication and handles authentication. For any interaction, it creates a TurnContext object and passes it to the bot application logic.

  • Middleware: The pipeline between the adapter and the bot code. It can be used to manage bot state.

  • Turn: The unit of processing triggered when a TurnContext is received by the bot. Normally there will be some processing, and the bot will answer back to the user.

  • Dialogs: The way conversation flows through the bot. They are a central concept in the SDK, and provide a useful way to manage a conversation with the user. Dialogs are structures in your bot that act like functions in your bot’s program; each dialog is designed to perform a specific task, in a specific order. You can specify the order of individual dialogs to guide the conversation, and invoke them in different ways – sometimes in response to a user, sometimes in response to some outside stimuli, or from other dialogs. Dialogs receive input from state or the OnTurn function. Dialog types:

    • Prompt: Provides an easy way to ask the user for information and evaluate their response. For example, for a number prompt, you specify the question or information you are asking for, and the prompt automatically checks to see if it received a valid number response.
    • Waterfall: A specific implementation of a dialog that is commonly used to collect information from the user or guide the user through a series of tasks. Each step of the conversation is implemented as an asynchronous function that takes a waterfall step context (the step parameter). At each step, the bot prompts the user for input (or can begin a child dialog, though it is often a prompt), waits for a response, and then passes the result to the next step. The result of the first function is passed as an argument into the next function, and so on.
    • Component: Provides a strategy for creating independent dialogs to handle specific scenarios, breaking a large dialog set into more manageable pieces. Each of these pieces has its own dialog set, and avoids any name collisions with the dialog set that contains it.
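The waterfall’s "result of one step is passed to the next" behavior can be imitated in plain Python. This is a caricature, not the real SDK: the actual WaterfallDialog uses asynchronous step functions and a step context, and the reservation flow below is entirely hypothetical.

```python
def waterfall(steps, first_input):
    """Run each step in order, feeding each step the previous step's result."""
    result = first_input
    for step in steps:
        result = step(result)
    return result

# Hypothetical reservation flow: collect a name, then a party size, then confirm.
steps = [
    lambda name: {"name": name},                 # stand-in for a text prompt
    lambda state: {**state, "party": 4},         # stand-in for a number prompt
    lambda state: f"Booked for {state['name']}, party of {state['party']}.",
]
confirmation = waterfall(steps, "Alice")
```

In the real SDK each lambda would be an async step that prompts the user and awaits a response, but the data flow between steps is the same.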

Dialogs hierarchy

  • State: Stores data relating to either the conversation or the user. State is a middleware component. Available storage layers are Memory (data is cleared each time the bot is restarted), Blob Storage and CosmosDB. State management automates the reading and writing of your bot’s state to the underlying storage layer.
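Putting the adapter, middleware, turn, and state concepts together, the snippet below is a plain-Python caricature of the pipeline, not the real SDK classes: the adapter wraps each incoming activity in a TurnContext, runs it through middleware, and hands it to the bot logic, while a memory-backed state dict persists across turns (and would be cleared on restart).

```python
class TurnContext:
    def __init__(self, activity):
        self.activity = activity   # the incoming message
        self.responses = []        # what the bot sends back this turn

class Adapter:
    """Creates a TurnContext per activity and runs middleware, then the bot."""
    def __init__(self, middleware, bot):
        self.middleware = middleware
        self.bot = bot

    def process(self, activity):
        ctx = TurnContext(activity)
        for mw in self.middleware:  # middleware runs before the bot logic
            mw(ctx)
        self.bot(ctx)               # the bot handles the turn
        return ctx.responses

state = {"turn_count": 0}           # memory storage: lost when the bot restarts

def counting_middleware(ctx):
    state["turn_count"] += 1        # state managed from the middleware layer

def echo_bot(ctx):
    ctx.responses.append(f"{state['turn_count']}: {ctx.activity}")

adapter = Adapter([counting_middleware], echo_bot)
first = adapter.process("hello")
second = adapter.process("bye")
```

Swapping the `state` dict for Blob Storage or Cosmos DB is exactly the substitution the real state-management middleware performs.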

Using the image below, see how all of these concepts are integrated in the internal architecture of a bot application.

Bots Concepts

Enterprise Grade Bots

Each bot is different, but there are some common patterns, workflows, and technologies to be aware of. Especially for a bot to serve enterprise workloads, there are many design considerations beyond just the core functionality.

This reference architecture describes how to build an enterprise-grade conversational bot (chatbot) using the Azure Bot Framework.

Responsible Bots

In order for people and society to realize the full potential of bots, bots need to be designed in such a way that they earn the trust of their users. These guidelines are aimed at helping you design a bot that builds trust in the company and service that the bot represents.


For more demo questions, you can visit here.

There are 4 sets of practice questions. Please click here to download.

Last Day Feedback: