In this article, I will be working with Azure's Face API and showing you how to use this powerful API to add face recognition capabilities to your applications by detecting human faces in images.
This guide assumes no prerequisite skills; the only requirements are access to the Azure portal and basic knowledge of web frameworks or JS syntax, since we will be building our demo project using Angular or Python.
What is Azure?
Microsoft Azure is a scalable cloud computing platform launched by Microsoft in February 2010. Thanks to its simplicity and the variety of services it provides, developers choose it to deploy their projects and use its APIs to make their work easier. Backed by a large set of articles and first-party documentation, Azure has become one of the leaders in cloud computing, in second place just behind AWS (Amazon Web Services).
What is Azure Face API?
Azure's Face API is an Application Programming Interface provided within the Azure Portal. It is used to detect human faces in images, find similar faces, and identify people so that they can be classified into groups.
Getting our API keys
In order to access any of these APIs, you are required to provide an access token/key with every request to the endpoint. Getting your Face API key requires access to an Azure account that is permitted to use the service.
After logging in, visit the dashboard.
Then, click on the big green "+" sign in the top-left corner and search for Face API.
After that, click on 'Create'.
And, complete the form.
After that, you will automatically be redirected to the dashboard, and the creation should succeed. Visit the service for the first time, and something like this should appear.
Click on 'Keys', and copy them to a safe place.
P.S.
In case you accidentally expose them to an untrusted party, you can safely regenerate them; regeneration invalidates the previous keys. Next, visit the overview page and copy the endpoint URL. Note that it varies between regions.
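At this point you have everything a request needs: an endpoint URL and a key. For Azure Cognitive Services, the key is sent in the Ocp-Apim-Subscription-Key HTTP header of every call. Below is a minimal sketch of a detect request using Python's requests library; the key, endpoint, and image URL are placeholders you must replace with your own values.

import requests

FACE_KEY = "<your-face-api-key>"                                      # from the Keys page
FACE_ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"   # from the overview page

# The key travels in the Ocp-Apim-Subscription-Key header of every request.
response = requests.post(
    FACE_ENDPOINT + "/face/v1.0/detect",
    headers={
        "Ocp-Apim-Subscription-Key": FACE_KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/some-photo.jpg"},  # any publicly reachable image URL
)
response.raise_for_status()
print(response.json())  # one entry per detected face, each with a faceRectangle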
First usage
Now, you should be ready to use the sample code at:
https://github.com/hbibz-deploy/AzureCognitive-py/
Copy/clone the script called 'FaceAPI.py' and save it.
Before running your code, ensure that you have both Python 3.x and pip installed; then run the app.
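If the script relies on the widely used requests library (an assumption; check the repository's imports or requirements file to confirm), install it with pip first:

pip install requests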
You can use the sample:
Command
python FaceAPI.py {key} {Image URL}
Example
The hidden text is my access key.
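The actual FaceAPI.py in the repository may differ, so treat the following as a minimal sketch of a script with the same command-line shape, built on the detect request shown earlier. The endpoint placeholder is an assumption; replace it with the URL from your overview page.

import sys

import requests

# Assumed placeholder; replace with the endpoint from your overview page.
FACE_ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"

def detect_faces(key, image_url):
    # Call the Face API detect operation on a remote image URL.
    response = requests.post(
        FACE_ENDPOINT + "/face/v1.0/detect",
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    api_key, url = sys.argv[1], sys.argv[2]   # python FaceAPI.py {key} {Image URL}
    for face in detect_faces(api_key, url):
        print("Detected face at", face["faceRectangle"])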
When it finishes, something like this should appear, i.e., each detected face is enclosed in a blue rectangle.
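If you want to reproduce that visualization yourself, one way is to draw the rectangles with Pillow (pip install pillow). The sketch below assumes faces is the list returned by the detect call shown earlier; the function name and output path are only illustrative.

from io import BytesIO

import requests
from PIL import Image, ImageDraw

def draw_faces(image_url, faces, out_path="faces.png"):
    # Download the original image and draw a blue box around each detected face.
    img = Image.open(BytesIO(requests.get(image_url).content))
    draw = ImageDraw.Draw(img)
    for face in faces:
        r = face["faceRectangle"]  # left, top, width, height as returned by the API
        box = (r["left"], r["top"], r["left"] + r["width"], r["top"] + r["height"])
        draw.rectangle(box, outline="blue", width=3)
    img.save(out_path)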