Learn About Azure Cognitive Services

Introduction

Microsoft Cognitive Services (earlier known as Project Oxford) gives us the ability to build intelligent applications by writing just a few lines of code. These applications or services can be deployed on all major platforms, such as Windows, iOS, and Android. All of the APIs are based on machine learning and enable developers to easily add intelligent features – such as emotion and video detection; facial, speech, and vision recognition; and speech and language understanding – to their applications.

Refer to the URLs below for more information about Microsoft Cognitive Services.

  • https://azure.microsoft.com/en-in/services/cognitive-services/
  • https://docs.microsoft.com/en-us/azure/cognitive-services/welcome

NOTE

Microsoft announced the preview version of Microsoft Cognitive Services on March 30, 2016.

In this article, I will walk you through the steps of exploring the FACE and EMOTION APIs of Microsoft Cognitive Services. The article covers the following three parts, and I suggest you go through them in the given order:

  1. Create Azure Account
  2. FACE API
  3. EMOTION API

Below are the prerequisites to work with the Microsoft Cognitive Services APIs:

  1. Visual Studio 2015 (Community, Enterprise, or Professional edition)
  2. Microsoft Azure Account 

Now, let’s get started.

Part 1 – Create Azure Account

  • Sign in to the Microsoft Azure Portal.
  • You will be asked to log in with a Microsoft account. You can take a free one-month trial or choose among the different plans available on the Azure portal, as per your requirements or business needs. (In my case, I took the free one-month trial.)

    • You will be asked for your phone number and credit card details.

  • On successful creation of your account, you will be given some credits based on your selected country and zone.
  • You will then see the Azure dashboard, as shown below.

[Screenshot: Azure portal dashboard]

Part 2 – FACE API

Face API, provided by Microsoft Cognitive Services, helps to detect, analyze, and organize faces in a given image. We can also tag faces in any given photo.

Face API provides advanced face algorithms and has two main functions:

  1. Face Detection with Attributes
      • Face API detects up to 64 human faces in an image, with high-precision face locations.
      • The image can be specified as a file in bytes or by a valid URL.

  2. Face Recognition
      • Face Verification
          • Verifies whether two detected faces belong to the same person, or whether a detected face belongs to a given person object (see the sketch after this list).
      • Finding Similar Face
          • Takes a target (query) face as input and finds a small set of faces that look most similar to it. It has two modes – matchFace and matchPerson.
              • matchFace returns similar faces, as it ignores the same-person threshold.
              • matchPerson returns similar faces after applying a same-person threshold.
      • Face Grouping
          • Takes a set of unknown faces as input and divides them into several groups based on similarity.
      • Personal (Face) Identification
          • Identifies people based on a detected face and a people database. This database needs to be created in advance and can be edited later.
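To make these functions concrete, below is a minimal sketch of what the verification and similar-face calls look like with the Microsoft.ProjectOxford.Face client library used later in this article. The GUIDs (faceId1, faceId2, queryFaceId, candidateFaceIds) are hypothetical values you would obtain from earlier DetectAsync calls, and faceServiceClient is assumed to be an initialized FaceServiceClient.

// Face Verification: do these two detected faces belong to the same person?
// (faceId1 and faceId2 are hypothetical GUIDs from earlier DetectAsync calls.)
VerifyResult verification = await faceServiceClient.VerifyAsync(faceId1, faceId2);
Console.WriteLine("Same person: " + verification.IsIdentical +
                  " (confidence " + verification.Confidence + ")");

// Finding Similar Face: the 5 faces most similar to the query face
// among the candidate faces.
SimilarFace[] similarFaces = await faceServiceClient.FindSimilarAsync(
    queryFaceId, candidateFaceIds, 5);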

Assuming you have an Azure portal account, follow the steps below to implement the FACE API.

Step 1

Click the “+” or “New” link on the left-hand side of the Azure portal.

[Screenshot: the “New” link in the Azure portal]

Step 2

Click on “AI + Cognitive Services”, and you will see the list of APIs available in Cognitive Services.

[Screenshots: list of APIs under AI + Cognitive Services]

Step 3

Choose Face API to subscribe to the Microsoft Cognitive Services Face API and proceed with the subscription steps. After clicking on Face API, the legal terms page is shown. Read it carefully and then click on “Create”.

[Screenshot: Face API creation blade with legal terms]

Step 4

After clicking on Create, there are two possibilities (both happened in my case).

• You will be given a form to fill in, as shown below, with the following details (this option might become visible only after a few hours):
  • Name
  • Subscription (in my case, I chose the free trial; you can choose the F0 or S0 pricing tier)
  • Resource Group

[Screenshot: Face API creation form]

• You will see the image below, asking you for a new subscription; in this case, you can generate the subscription keys and endpoint URL from a different URL (https://azure.microsoft.com/en-us/try/cognitive-services/).

[Screenshots: trying Cognitive Services and creating a trial subscription]

Below are the generated keys and endpoint URL.

[Screenshot: generated subscription keys and endpoint URL]

Step 5

Create a WPF application in Visual Studio 2015 (Visual C# > Windows Desktop > WPF Application). I have named the application “Face_Tutorial”.

[Screenshots: creating the WPF project in Visual Studio]

Step 6

Add a button named “Browse” to MainWindow.xaml, using either the designer or code. Here, I prefer adding the button in code.

<Window x:Class="Face_Tutorial.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="700" Width="960">
    <Grid x:Name="BackPanel">
        <Image x:Name="FacePhoto" Stretch="Uniform" Margin="0,0,0,50" MouseMove="FacePhoto_MouseMove" />
        <DockPanel DockPanel.Dock="Bottom">
            <Button x:Name="BrowseButton" Width="72" Height="20" VerticalAlignment="Bottom" HorizontalAlignment="Left"
                    Content="Browse..."
                    Click="BrowseButton_Click" />
            <StatusBar VerticalAlignment="Bottom">
                <StatusBarItem>
                    <TextBlock Name="faceDescriptionStatusBar" />
                </StatusBarItem>
            </StatusBar>
        </DockPanel>
    </Grid>
</Window>

Step 7

Now, go to MainWindow.xaml.cs. The following using directives are required to access the Face API (plus System.IO, used later for reading the image file).

using System.IO;                         // File.OpenRead in UploadAndDetectFaces
using Microsoft.ProjectOxford.Face;
using Microsoft.ProjectOxford.Face.Contract;

To add the above references, search NuGet for the following two packages and click “Install”.

1. Newtonsoft.Json
2. Microsoft.ProjectOxford.Face

[Screenshots: installing the NuGet packages]

Once you add these two packages, they will appear in the solution’s references.

[Screenshot: installed packages in the solution]
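Note that the click handler in the next step uses a few class-level members that must be declared in MainWindow.xaml.cs. Below is a minimal sketch of these declarations, assuming the key and endpoint generated in Step 4 (depending on the library version, the FaceServiceClient constructor may accept only the key, without the endpoint root).

// Class-level members used by the code in Step 8 (replace the placeholders
// with the key and endpoint generated in Step 4).
private readonly IFaceServiceClient faceServiceClient =
    new FaceServiceClient("<Enter Your Key Value>",
        "https://westus.api.cognitive.microsoft.com/face/v1.0");

private Face[] faces;               // Faces detected in the current image
private string[] faceDescriptions;  // Description text for each detected face
private double resizeFactor;        // Scale factor used when drawing rectangles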

Step 8

Add the code given below to the Click event handler of the Browse button.

private async void BrowseButton_Click(object sender, RoutedEventArgs e)
{
    // Get the image file to scan from the user.
    var openDlg = new Microsoft.Win32.OpenFileDialog();

    openDlg.Filter = "JPEG Image(*.jpg)|*.jpg";
    bool? result = openDlg.ShowDialog(this);

    // Return if canceled.
    if (!(bool)result)
    {
        return;
    }

    // Display the image file.
    string filePath = openDlg.FileName;

    Uri fileUri = new Uri(filePath);
    BitmapImage bitmapSource = new BitmapImage();

    bitmapSource.BeginInit();
    bitmapSource.CacheOption = BitmapCacheOption.None;
    bitmapSource.UriSource = fileUri;
    bitmapSource.EndInit();

    FacePhoto.Source = bitmapSource;

    // Detect any faces in the image.
    Title = "Detecting...";
    faces = await UploadAndDetectFaces(filePath);
    Title = String.Format("Detection Finished. {0} face(s) detected", faces.Length);

    if (faces.Length > 0)
    {
        // Prepare to draw rectangles around the faces.
        DrawingVisual visual = new DrawingVisual();
        DrawingContext drawingContext = visual.RenderOpen();
        drawingContext.DrawImage(bitmapSource, new Rect(0, 0, bitmapSource.Width, bitmapSource.Height));
        double dpi = bitmapSource.DpiX;
        resizeFactor = 96 / dpi;
        faceDescriptions = new String[faces.Length];

        for (int i = 0; i < faces.Length; ++i)
        {
            Face face = faces[i];

            // Draw a rectangle on the face.
            drawingContext.DrawRectangle(
                Brushes.Transparent,
                new Pen(Brushes.Red, 2),
                new Rect(
                    face.FaceRectangle.Left * resizeFactor,
                    face.FaceRectangle.Top * resizeFactor,
                    face.FaceRectangle.Width * resizeFactor,
                    face.FaceRectangle.Height * resizeFactor
                    )
            );

            // Store the face description.
            faceDescriptions[i] = FaceDescription(face);
        }

        drawingContext.Close();

        // Display the image with the rectangle around the face.
        RenderTargetBitmap faceWithRectBitmap = new RenderTargetBitmap(
            (int)(bitmapSource.PixelWidth * resizeFactor),
            (int)(bitmapSource.PixelHeight * resizeFactor),
            96,
            96,
            PixelFormats.Pbgra32);

        faceWithRectBitmap.Render(visual);
        FacePhoto.Source = faceWithRectBitmap;

        // Set the status bar text.
        faceDescriptionStatusBar.Text = "Place the mouse pointer over a face to see the face description.";
    }
}

Next, add an async method named UploadAndDetectFaces(), which accepts the image file path as a string parameter.

// Uploads the image file and calls Detect Faces.
private async Task<Face[]> UploadAndDetectFaces(string imageFilePath)
{
    // The list of Face attributes to return.
    IEnumerable<FaceAttributeType> faceAttributes =
        new FaceAttributeType[] { FaceAttributeType.Gender, FaceAttributeType.Age, FaceAttributeType.Smile, FaceAttributeType.Emotion, FaceAttributeType.Glasses, FaceAttributeType.Hair };

    // Call the Face API.
    try
    {
        using (Stream imageFileStream = File.OpenRead(imageFilePath))
        {
            Face[] faces = await faceServiceClient.DetectAsync(imageFileStream, returnFaceId: true, returnFaceLandmarks: false, returnFaceAttributes: faceAttributes);
            return faces;
        }
    }
    // Catch and display Face API errors.
    catch (FaceAPIException f)
    {
        MessageBox.Show(f.ErrorMessage, f.ErrorCode);
        return new Face[0];
    }
    // Catch and display all other errors.
    catch (Exception e)
    {
        MessageBox.Show(e.Message, "Error");
        return new Face[0];
    }
}

Below is the FaceDescription() method, which builds the description string displayed in the status bar.

// Returns a string that describes the given face.
private string FaceDescription(Face face)
{
    StringBuilder sb = new StringBuilder();

    sb.Append("Face: ");

    // Add the gender, age, and smile.
    sb.Append(face.FaceAttributes.Gender);
    sb.Append(", ");
    sb.Append(face.FaceAttributes.Age);
    sb.Append(", ");
    sb.Append(String.Format("smile {0:F1}%, ", face.FaceAttributes.Smile * 100));

    // Add the emotions. Display all emotions over 10%.
    sb.Append("Emotion: ");
    EmotionScores emotionScores = face.FaceAttributes.Emotion;
    if (emotionScores.Anger >= 0.1f) sb.Append(String.Format("anger {0:F1}%, ", emotionScores.Anger * 100));
    if (emotionScores.Contempt >= 0.1f) sb.Append(String.Format("contempt {0:F1}%, ", emotionScores.Contempt * 100));
    if (emotionScores.Disgust >= 0.1f) sb.Append(String.Format("disgust {0:F1}%, ", emotionScores.Disgust * 100));
    if (emotionScores.Fear >= 0.1f) sb.Append(String.Format("fear {0:F1}%, ", emotionScores.Fear * 100));
    if (emotionScores.Happiness >= 0.1f) sb.Append(String.Format("happiness {0:F1}%, ", emotionScores.Happiness * 100));
    if (emotionScores.Neutral >= 0.1f) sb.Append(String.Format("neutral {0:F1}%, ", emotionScores.Neutral * 100));
    if (emotionScores.Sadness >= 0.1f) sb.Append(String.Format("sadness {0:F1}%, ", emotionScores.Sadness * 100));
    if (emotionScores.Surprise >= 0.1f) sb.Append(String.Format("surprise {0:F1}%, ", emotionScores.Surprise * 100));

    // Add glasses.
    sb.Append(face.FaceAttributes.Glasses);
    sb.Append(", ");

    // Add hair.
    sb.Append("Hair: ");

    // Display baldness confidence if over 1%.
    if (face.FaceAttributes.Hair.Bald >= 0.01f)
        sb.Append(String.Format("bald {0:F1}% ", face.FaceAttributes.Hair.Bald * 100));

    // Display all hair color attributes over 10%.
    HairColor[] hairColors = face.FaceAttributes.Hair.HairColor;
    foreach (HairColor hairColor in hairColors)
    {
        if (hairColor.Confidence >= 0.1f)
        {
            sb.Append(hairColor.Color.ToString());
            sb.Append(String.Format(" {0:F1}% ", hairColor.Confidence * 100));
        }
    }

    // Return the built string.
    return sb.ToString();
}

Finally, add the handler that displays the description string when the mouse hovers over a face in the image.

private void FacePhoto_MouseMove(object sender, MouseEventArgs e)
{
    // If the REST call has not completed, return from this method.
    if (faces == null)
        return;

    // Find the mouse position relative to the image.
    Point mouseXY = e.GetPosition(FacePhoto);

    ImageSource imageSource = FacePhoto.Source;
    BitmapSource bitmapSource = (BitmapSource)imageSource;

    // Scale adjustment between the actual size and displayed size.
    var scale = FacePhoto.ActualWidth / (bitmapSource.PixelWidth / resizeFactor);

    // Check if this mouse position is over a face rectangle.
    bool mouseOverFace = false;

    for (int i = 0; i < faces.Length; ++i)
    {
        FaceRectangle fr = faces[i].FaceRectangle;
        double left = fr.Left * scale;
        double top = fr.Top * scale;
        double width = fr.Width * scale;
        double height = fr.Height * scale;

        // Display the face description for this face if the mouse is over this face rectangle.
        if (mouseXY.X >= left && mouseXY.X <= left + width && mouseXY.Y >= top && mouseXY.Y <= top + height)
        {
            faceDescriptionStatusBar.Text = faceDescriptions[i];
            mouseOverFace = true;
            break;
        }
    }

    // If the mouse is not over a face rectangle.
    if (!mouseOverFace)
        faceDescriptionStatusBar.Text = "Place the mouse pointer over a face to see the face description.";
}

Step 9

Build the application and run it. You will see output like the one below.

[Screenshot: detected faces outlined in the test image]

Summary

In this part, we saw that by writing very little code, we can use the Microsoft Cognitive Services Face API.

Part 3 – EMOTION API

Emotion API provides advanced emotion algorithms and has two main functions:

1. Emotion Recognition

  1. It takes an image as input and returns the confidence for each emotion, for every face in the image.
  2. The emotions detected are happiness, sadness, surprise, anger, fear, contempt, disgust, and neutral.
  3. Emotion scores are normalized to sum to one.

2. Emotion in Video

  1. It takes a video as input and returns the confidence across a set of emotions for the group of faces in the video over time.
  2. The emotions detected are happiness, sadness, surprise, anger, fear, contempt, disgust, and neutral.
  3. It returns two types of aggregates:

    • windowMeanScores gives a mean score for all of the faces detected in a frame, for each emotion.
    • windowFaceDistribution gives the distribution of faces with each emotion as the dominant emotion for that face.

Now, let’s create a C# console application to test the Microsoft Cognitive Services Emotion API.

Step 1

We have already created the endpoint URL and subscription keys for the Emotion API in Azure. Now, create a console application in Visual Studio 2015.

Step 2

Replace the contents of Program.cs with the following code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;

namespace Emotion_API_Tutorial
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Enter the path to a JPEG image file:");
            string imageFilePath = Console.ReadLine();

            MakeRequest(imageFilePath);

            Console.WriteLine("\n\n\nWait for the result below, then hit ENTER to exit...\n\n\n");
            Console.ReadLine();
        }

        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
            BinaryReader binaryReader = new BinaryReader(fileStream);
            return binaryReader.ReadBytes((int)fileStream.Length);
        }

        static async void MakeRequest(string imageFilePath)
        {
            try
            {
                var client = new HttpClient();

                // Request headers: pass the subscription key generated in the Azure portal.
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<Enter Your Key Value>");

                // Endpoint URL (use the region of your subscription).
                string uri = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize?";

                HttpResponseMessage response;
                string responseContent;

                // Request body.
                byte[] byteData = GetImageAsByteArray(imageFilePath);

                using (var content = new ByteArrayContent(byteData))
                {
                    // This example uses content type "application/octet-stream".
                    // The other supported content types are "application/json" and "multipart/form-data".
                    content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                    response = await client.PostAsync(uri, content);
                    responseContent = response.Content.ReadAsStringAsync().Result;
                }

                // A peek at the JSON response.
                Console.WriteLine(responseContent);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error occurred: " + ex.Message + ex.StackTrace);
                Console.ReadLine();
            }
        }
    }
}

Step 3

Build and run the application. A successful call returns an array of face entries and their emotion scores. An empty response indicates that no faces were detected. An emotion entry contains the following fields:

• faceRectangle - Rectangle location of the face in the image.
• scores - Emotion scores for each face in the image.

Provide the path of an image from your local folder and hit Enter. Below is the output of my trial run.

[Screenshot: console application output]

[
  {
    "faceRectangle": {
      "height": 44,
      "left": 62,
      "top": 36,
      "width": 44
    },
    "scores": {
      "anger": 0.000009850864,
      "contempt": 1.073325e-8,
      "disgust": 0.00000230705427,
      "fear": 1.63113334e-9,
      "happiness": 0.9999875,
      "neutral": 1.00619431e-7,
      "sadness": 1.13927945e-9,
      "surprise": 2.365794e-7
    }
  }
]
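If you want to work with this response in code rather than just printing it, below is a small sketch that deserializes it and reports the dominant emotion per face. It assumes you add the Newtonsoft.Json NuGet package to the console project; the EmotionResult and FaceRect class names are my own, not part of the API.

using Newtonsoft.Json;

// Hypothetical classes modelling the JSON response shown above.
public class EmotionResult
{
    public FaceRect FaceRectangle { get; set; }
    public Dictionary<string, double> Scores { get; set; }
}

public class FaceRect
{
    public int Height { get; set; }
    public int Left { get; set; }
    public int Top { get; set; }
    public int Width { get; set; }
}

// After receiving responseContent in MakeRequest():
var results = JsonConvert.DeserializeObject<List<EmotionResult>>(responseContent);
foreach (var face in results)
{
    // Pick the emotion with the highest confidence for this face.
    var dominant = face.Scores.OrderByDescending(s => s.Value).First();
    Console.WriteLine("Face at (" + face.FaceRectangle.Left + "," +
                      face.FaceRectangle.Top + "): " + dominant.Key);
}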

Summary

In this part, we saw that by writing very little code, we can use the Microsoft Cognitive Services Emotion API.
