Nov 5, 2025

Building a Face Detection API with Azure Cognitive Services and .NET Core

Face detection is a powerful capability that can enhance applications in security, accessibility, and user engagement. In this article, we’ll walk through how to build a face detection API using Azure Cognitive Services and .NET Core. We’ll cover:
1. What needs to be configured on the Azure side
2. How the .NET Core wrapper API works
3. How to test the API using Swagger or Postman

Step 1: Azure Setup

Before diving into code, you need to provision the necessary Azure resource: a Computer Vision (Image Analysis) resource.

Setup Steps

1. Create a Computer Vision resource

  • Go to Azure Portal → Create Resource → Search for “Computer Vision”

  • Choose a pricing tier and region

2. Get the API key and endpoint

  • Navigate to the resource → Keys and Endpoint

  • Copy the Key and Endpoint values
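If you prefer the command line, the same resource can be provisioned with the Azure CLI. This is a sketch, assuming you are already logged in and the resource group myResourceGroup exists; the resource name, region, and SKU are placeholders to replace with your own values:

```shell
# Create a Computer Vision resource (name/region/SKU are placeholders)
az cognitiveservices account create \
  --name my-vision-resource \
  --resource-group myResourceGroup \
  --kind ComputerVision \
  --sku S1 \
  --location eastus \
  --yes

# Retrieve the endpoint and keys
az cognitiveservices account show \
  --name my-vision-resource \
  --resource-group myResourceGroup \
  --query properties.endpoint
az cognitiveservices account keys list \
  --name my-vision-resource \
  --resource-group myResourceGroup
```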

Step 2: Understanding the .NET Core Wrapper API

The provided code defines a RESTful API using ASP.NET Core that detects people in an image using Azure’s Image Analysis SDK.

Key Components

  1. Controller Declaration

[ApiController]
[Route("api/[controller]")]
public class FaceDetectionController : ControllerBase

This sets up a RESTful controller with the route prefix api/FaceDetection (the [controller] token resolves to the controller class name minus the Controller suffix).

2. Image Analysis Logic

var client = new ImageAnalysisClient(new Uri(endpoint), new AzureKeyCredential(key));
ImageAnalysisResult result = await client.AnalyzeAsync(imageURL, VisualFeatures.People, options);

This uses the newer Azure.AI.Vision.ImageAnalysis SDK to detect people in the image. Passing VisualFeatures.People restricts the analysis to people detection only.

3. Response Construction

var details = result.People.Values
    .Select(obj => obj.Confidence)
    .Distinct()
    .ToList();

return Ok(new
{
    items = details,
    NoOfPeople = result.People.Values.Count
});

This extracts the distinct confidence scores and returns them along with the number of people detected. Note that Distinct() collapses equal scores, so items may contain fewer entries than NoOfPeople.
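For an image with two detected people, the JSON response might look like the following (the confidence values here are illustrative; note that ASP.NET Core's default System.Text.Json serializer camel-cases property names):

```json
{
  "items": [0.948, 0.862],
  "noOfPeople": 2
}
```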

Below is the full code snippet:

//using Azure.AI.Vision.Face; // reserved for future Face SDK features
using Azure;
using Azure.AI.Vision.ImageAnalysis;
using Microsoft.AspNetCore.Mvc;
using PictureDictionary.API.Helper;

[ApiController]
[Route("api/[controller]")]
public class FaceDetectionController : ControllerBase
{
    private readonly string _endpoint;
    private readonly string _key;

    public FaceDetectionController(IConfiguration config)
    {
        // Endpoint and key come from the AzureAIConstants helper;
        // they could equally be read from IConfiguration.
        _endpoint = AzureAIConstants.ComputerVisionNewEndpoint;
        _key = AzureAIConstants.ComputerVisionNewKey;
    }

    [HttpGet("detectFaces")]
    public async Task<IActionResult> DetectFaces(string imageUrl)
    {
        // Use an image from a URL, e.g. https://i.imgur.com/TTHS1VR.jpegs
        if (!Uri.TryCreate(imageUrl, UriKind.Absolute, out var imageURL))
            return BadRequest("A valid absolute image URL is required.");

        var client = new ImageAnalysisClient(new Uri(_endpoint), new AzureKeyCredential(_key));

        var options = new ImageAnalysisOptions
        {
            Language = "en",
            GenderNeutralCaption = true // only affects caption features
        };

        // Analyze the image for people
        ImageAnalysisResult result = await client.AnalyzeAsync(imageURL, VisualFeatures.People, options);

        var details = result.People.Values
            .Select(person => person.Confidence)
            .Distinct()
            .ToList();

        return Ok(new
        {
            items = details,
            NoOfPeople = result.People.Values.Count
        });
    }
}
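The AzureAIConstants helper referenced above is not shown in the snippet. A minimal version, with the member names inferred from usage and placeholder values you would replace with your own endpoint and key, might look like this (in production, prefer IConfiguration, user secrets, or environment variables over hard-coded constants):

```csharp
namespace PictureDictionary.API.Helper;

// Placeholder values -- replace with your own resource's endpoint and key,
// or better, load them from configuration rather than hard-coding.
public static class AzureAIConstants
{
    public const string ComputerVisionNewEndpoint = "https://<your-resource>.cognitiveservices.azure.com/";
    public const string ComputerVisionNewKey = "<your-key>";

    // Reserved for future Face SDK features
    public const string FaceEndpoint = "https://<your-face-resource>.cognitiveservices.azure.com/";
    public const string FaceApiKey = "<your-face-key>";
}
```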


Step 3: Testing the API with Swagger or Postman
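Once the app is running, you can exercise the endpoint from the Swagger UI (if enabled via AddSwaggerGen/UseSwaggerUI), from Postman, or from plain curl. This example assumes the app listens on https://localhost:5001; your port and image URL will differ:

```shell
# Call the detectFaces endpoint with a publicly reachable image URL
curl "https://localhost:5001/api/FaceDetection/detectFaces?imageUrl=https://example.com/photo.jpg"
```

In Postman, the same request is a GET to /api/FaceDetection/detectFaces with imageUrl supplied as a query parameter.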

This API provides a clean and scalable way to integrate Azure’s image analysis capabilities into your applications. It’s modular, future-proof (with the Face SDK ready to slot in for richer face features), and easy to test.