Face Detection & Recognition with Azure Face API

Face detection and recognition have never been as easy as they are now with the Azure Face API. The API is part of Azure Cognitive Services, which offers a number of interesting intelligent APIs such as emotion and sentiment detection, vision and speech recognition, language understanding, knowledge, and search.
All you need is Visual Studio and a Face API subscription (free trial or full) to get your API endpoint URL and subscription key (https://azure.microsoft.com/en-us/try/cognitive-services/).

For the purpose of this example we create a new WPF application named MSFaceAPITester. Of course, you can use any other project type.
Via the NuGet Package Manager, search for and install the following two packages:

  • Newtonsoft.Json
  • Microsoft.ProjectOxford.Face

Firstly, we import some additional namespaces:

using System.IO;
using Microsoft.ProjectOxford.Face;
using Microsoft.ProjectOxford.Face.Contract;
using System.Globalization;
using Microsoft.Win32;

Then we create a global read-only field that represents our connection to the Face API (replace the subscription key and Face API endpoint URL with your own):

private readonly IFaceServiceClient faceApiConnection = new FaceServiceClient("{subscription key}", "{Face API endpoint URL}");

On our WPF grid we create one image control and one button control with a click event handler:

<Grid x:Name="BackPanel">
 <Image x:Name="imgPicture" Stretch="Uniform" Margin="0,0,0,30"/>
 <Button x:Name="btnBrowse" Margin="20,5" Height="20" 
 VerticalAlignment="Bottom" Content="Browse..." 
 Click="btnBrowse_Click"/>
</Grid>

In our code-behind (.cs) file we create the async btnBrowse_Click method that handles the button click event:

private async void btnBrowse_Click(object sender, RoutedEventArgs e)
{
    OpenFileDialog openDlg = new OpenFileDialog();
    openDlg.Filter = "JPEG Image(*.jpg)|*.jpg";
    bool? result = openDlg.ShowDialog(this);
    // ShowDialog can return null, so compare against true explicitly
    if (result != true)
        return;

    string potDoSlike = openDlg.FileName;
    Uri potDoSlikeURI = new Uri(potDoSlike);

    BitmapImage bmpImage = new BitmapImage();
    bmpImage.BeginInit();
    bmpImage.CacheOption = BitmapCacheOption.None;
    bmpImage.UriSource = potDoSlikeURI;
    bmpImage.EndInit();

    imgPicture.Source = bmpImage;

    Title = "Searching ...";
    var faces = await FaceDetectionProgress(potDoSlike);
    Title = String.Format("Completed. {0} faces found", faces.Item1.Length);

    if (faces.Item1.Length > 0)
    {
        DrawingVisual visual = new DrawingVisual();
        DrawingContext drawingContext = visual.RenderOpen();
        drawingContext.DrawImage(bmpImage,
            new Rect(0, 0, bmpImage.Width, bmpImage.Height));
        double dpi = bmpImage.DpiX;
        double resizeFactor = 96 / dpi;

        int indx = -1;
        foreach (var faceFrame in faces.Item1)
        {
            indx++;

            drawingContext.DrawRectangle(
                Brushes.Transparent,
                new Pen(Brushes.Red, 2),
                new Rect(
                    faceFrame.Left * resizeFactor,
                    faceFrame.Top * resizeFactor,
                    faceFrame.Width * resizeFactor,
                    faceFrame.Height * resizeFactor
                    )
            );

            var faceAttrs = faces.Item2[indx];
            drawingContext.DrawText(new FormattedText("Age: " + faceAttrs.Age + "\nGender: " + faceAttrs.Gender, CultureInfo.InvariantCulture, FlowDirection.LeftToRight, new Typeface("Times New Roman"), 14, Brushes.Red), new Point(faceFrame.Left, faceFrame.Top + faceFrame.Height));
        }

        drawingContext.Close();
        RenderTargetBitmap frame = new RenderTargetBitmap(
            (int)(bmpImage.PixelWidth * resizeFactor),
            (int)(bmpImage.PixelHeight * resizeFactor),
            96,
            96,
            PixelFormats.Pbgra32);

        frame.Render(visual);
        imgPicture.Source = frame;
    }
}

This method creates an OpenFileDialog where you can select a test image. We then call our custom FaceDetectionProgress method with the selected image; we will write this method a little later. It returns a Tuple with a FaceRectangle array and a FaceAttributes array. The FaceRectangle array gives us the top, left, width, and height parameters of the rectangle around each detected face. The FaceAttributes array gives us additional attributes such as age and gender.

private async Task<Tuple<FaceRectangle[], FaceAttributes[]>> FaceDetectionProgress(string pathToImage)
{
    try
    {
        using (Stream imgFS = File.OpenRead(pathToImage))
        {
            var additionalAttrs = new FaceAttributeType[] {
                FaceAttributeType.Age,
                FaceAttributeType.Gender
            };

            var faces = await faceApiConnection.DetectAsync(imgFS, returnFaceAttributes: additionalAttrs);

            var facesOkvirjis = faces.Select(face => face.FaceRectangle);
            var facesAttrs = faces.Select(face => face.FaceAttributes);

            return new Tuple<FaceRectangle[], FaceAttributes[]>(facesOkvirjis.ToArray(), facesAttrs.ToArray());
        }
    }
    catch (Exception)
    {
        return new Tuple<FaceRectangle[], FaceAttributes[]>(new FaceRectangle[0], new FaceAttributes[0]);
    }
}

Now we can start our program and test face detection on some pictures of different people.

[Image: 2017-07-17_1509]

We can see that all three people in the picture above are detected, with their ages and genders estimated (everyone is detected as a bit older than they are, maybe because of the late night party ;)).

So, detection is complete. We can now extend our application with the recognition part.
I'm going to add all the code for recognition inside an async Window_Loaded method.
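The recognition snippets that follow all live inside this handler. A minimal skeleton (assuming the XAML Window element has its Loaded event wired up, e.g. Loaded="Window_Loaded") might look like this:

```csharp
// Sketch only: the recognition setup snippets below go inside this handler.
// Assumes <Window ... Loaded="Window_Loaded"> in the XAML.
private async void Window_Loaded(object sender, RoutedEventArgs e)
{
    // 1. Create the person group
    // 2. Add persons and their training faces
    // 3. Train the group and wait for training to finish
}
```

Note that async void is acceptable here only because this is a top-level event handler; any exceptions thrown inside it cannot be awaited by the caller.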

First we create two additional global fields (the first for our user group ID, the second for the list of users we want to recognize):

private string userGroupId = "friends";
private List<CreatePersonResult> users = new List<CreatePersonResult>();

Then we create a person group named “My friends”, using the user group ID defined above:

await faceApiConnection.CreatePersonGroupAsync(userGroupId, "My friends");
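One caveat: CreatePersonGroupAsync throws if a group with that ID already exists, so re-running the application fails at this call. A minimal guard, placed just before the create call, is to delete any leftover group from a previous run (a sketch, assuming the FaceAPIException type from this SDK version):

```csharp
// Sketch: remove a leftover person group from a previous run.
// DeletePersonGroupAsync throws if the group does not exist, hence the try/catch.
try
{
    await faceApiConnection.DeletePersonGroupAsync(userGroupId);
}
catch (FaceAPIException)
{
    // the group did not exist yet – nothing to delete
}
```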

We add two persons to the newly created user group:

users.Add(await faceApiConnection.CreatePersonAsync(userGroupId, "Gašper Kamenšek"));
users.Add(await faceApiConnection.CreatePersonAsync(userGroupId, "Boštjan Ohnjec"));

In C:\Temp\GK we have training images for the first person, and in C:\Temp\BO for the second.

[Image: 2017-07-18_1859]

We need to associate these training images with the corresponding persons:

foreach (string imagePath in Directory.GetFiles(@"C:\Temp\GK", "*.jpg"))
{
    using (Stream s = File.OpenRead(imagePath))
    {
        await faceApiConnection.AddPersonFaceAsync(userGroupId, users[0].PersonId, s);
    }
}

foreach (string imagePath in Directory.GetFiles(@"C:\Temp\BO", "*.jpg"))
{
    using (Stream s = File.OpenRead(imagePath))
    {
        await faceApiConnection.AddPersonFaceAsync(userGroupId, users[1].PersonId, s);
    }
}

And finally we start training our person group:

await faceApiConnection.TrainPersonGroupAsync(userGroupId);
TrainingStatus trainingStatus = null;
while (true)
{
    trainingStatus = await faceApiConnection.GetPersonGroupTrainingStatusAsync(userGroupId);
    if (trainingStatus.Status != Status.Running)
        break;

    await Task.Delay(1000);
}
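The loop above exits as soon as training is no longer running, which also covers the failure case. A stricter check after the loop (a sketch, assuming the Status enum's Failed value and the TrainingStatus.Message property from this SDK version) could be:

```csharp
// Sketch: surface a training failure instead of silently continuing.
if (trainingStatus.Status == Status.Failed)
{
    // Message carries the reason reported by the service (assumed property name)
    MessageBox.Show("Training failed: " + trainingStatus.Message);
    return;
}
```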

Then we update our FaceDetectionProgress method so that it also identifies each face and returns the recognized names:

private async Task<Tuple<FaceRectangle[], FaceAttributes[], string[]>> FaceDetectionProgress(string pathToImage)
{
    try
    {
        using (Stream imgFS = File.OpenRead(pathToImage))
        {
            var additionalAttrs = new FaceAttributeType[] {
                FaceAttributeType.Age,
                FaceAttributeType.Gender
            };

            var faces = await faceApiConnection.DetectAsync(imgFS, returnFaceAttributes: additionalAttrs);
            var facesIds = faces.Select(face => face.FaceId).ToArray();
            var facesOkvirjis = faces.Select(face => face.FaceRectangle);
            var facesAttrs = faces.Select(face => face.FaceAttributes);

            var results = await faceApiConnection.IdentifyAsync(userGroupId, facesIds);
            List<string> names = new List<string>();

            foreach (var identifyResult in results)
            {
                if (identifyResult.Candidates.Length != 0)
                {
                    var kandidatId = identifyResult.Candidates[0].PersonId;
                    var person = await faceApiConnection.GetPersonAsync(userGroupId, kandidatId);
                    names.Add(person.Name);
                }
                else
                    names.Add("unknown");
            }

            return new Tuple<FaceRectangle[], FaceAttributes[], string[]>(facesOkvirjis.ToArray(), facesAttrs.ToArray(), names.ToArray());
        }
    }
    catch (Exception)
    {
        return new Tuple<FaceRectangle[], FaceAttributes[], string[]>(new FaceRectangle[0], new FaceAttributes[0], new string[0]);
    }
}

The last thing we need to do is to update our DrawText call inside the btnBrowse_Click method:

drawingContext.DrawText(new FormattedText("Name: " + faces.Item3[indx] + "\nAge: " + faceAttrs.Age + "\nGender: " + faceAttrs.Gender, CultureInfo.InvariantCulture, FlowDirection.LeftToRight, new Typeface("Times New Roman"), 30, Brushes.Red), new Point(faceFrame.Left, faceFrame.Top + faceFrame.Height));

That is all we need for the recognition part of our program. You can now run the program again and see the result below.

[Image: 2017-07-18_1916]

[ Complete code on GitHub ]

Cheers!
Gašper Rupnik

{End.}

