Global AI Workshops

Last updated 3 years ago

The Azure Function that can see

Who could have imagined a few years ago that you would be able to create an Azure Function that can tell you what is in your images? Good news: this is now possible with only a few lines of code and some sample images.

In this hands-on lab, you are going to train an ONNX classification model with the Custom Vision service and run it in an Azure Function. The language of choice is Python; you will use Visual Studio Code as the editor and the Azure CLI to create and manage your Azure resources.

At the end of the lab, you will have a serverless API that can classify images, and you will have learned how to use the Azure Custom Vision Python SDK to train an ONNX model.
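To give a feel for what the serverless API will do, here is a rough sketch of the post-processing step an image-classification Function typically performs: mapping the model's raw scores to tag names and returning the top matches. The tag names, scores, and helper name below are illustrative assumptions, not the lab's actual code.

```python
# Hypothetical sketch: turn raw classification scores into a
# JSON-friendly response, as a Function returning predictions might.
import json


def format_predictions(scores, tags, top_k=3):
    """Pair each score with its tag and return the top_k, highest first."""
    ranked = sorted(zip(tags, scores), key=lambda pair: pair[1], reverse=True)
    return [{"tag": tag, "probability": round(score, 4)} for tag, score in ranked[:top_k]]


if __name__ == "__main__":
    tags = ["cat", "dog", "bird"]    # hypothetical Custom Vision tags
    scores = [0.07, 0.91, 0.02]      # hypothetical model output
    print(json.dumps(format_predictions(scores, tags), indent=2))
```

In the lab itself, the scores would come from running the exported ONNX model inside the Function; the shape of the response is what matters here.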

Duration: 2 hours

Format: Hands-on lab

Level: 100-200 (some programming and Azure experience will be helpful)

Prerequisites: Azure Access, Dev Environment