Sep 17, 2025

One-Shot Threat Detection in Texts with Azure AI and .NET Core

In today’s digital landscape, ensuring that user-facing content is safe, inclusive, and compliant is more than a best practice — it’s a necessity. In this article, we’ll walk through a powerful .NET Core API that analyzes uploaded documents for unsafe or offensive content using Azure AI Document Intelligence and Azure AI Content Safety.

Whether you’re building a moderation pipeline for user-generated content or auditing sensitive documents, this solution offers a scalable and secure way to flag threats in one go.

Step 1: Azure Setup

Before diving into code, you’ll need to provision two Azure AI services:

1. Document Intelligence (formerly Form Recognizer)

Used to extract structured text from uploaded documents.
• Go to Azure Portal → Create a resource → Search “Document Intelligence”
• Choose the prebuilt-layout model for paragraph-level extraction
• Note the endpoint and API key

Refer to the article “Build an Invoice Analyzer with .NET Core and Azure Document Intelligence” to learn how to create a Document Intelligence resource in Azure:

https://www.devtechie.com/blog/906d7379-1a39-490f-80e3-f8fec4d53bb5

2. Azure AI Content Safety

Used to analyze text for harmful categories like hate speech, violence, sexual content, and self-harm.
• Go to Azure Portal → Create a resource → Search “Content Safety”
• Note the endpoint and API key

Refer to the article “Analyzing Images for Safety with Azure Content Safety” to learn how to create a Content Safety resource in Azure:

https://www.devtechie.com/blog/8950f16e-2153-441e-aeb5-78253b0c380e
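The controller in the next section reads both endpoints and keys from a static constants class. Below is a minimal sketch of what that class might look like, assuming the member names used in the snippet; in a real application you would load these values from configuration or Azure Key Vault rather than hard-coding them.

```csharp
// Hypothetical holder for the credentials referenced by the controller snippet.
// Member names match the snippet; replace the placeholder values with the
// endpoint and key from each resource's "Keys and Endpoint" page in the portal.
public static class AzureAIConstants
{
    public const string DocumentAnalysisEndpoint = "https://<your-doc-intel-resource>.cognitiveservices.azure.com/";
    public const string DocumentAnalysisApiKey = "<your-document-intelligence-key>";

    public const string ContentSafteyEndpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com/";
    public const string ContentSafteyApiKey = "<your-content-safety-key>";
}
```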

Step 2: .NET Core API Explained

Here’s what the API does:

Step 1: Text Extraction

var docClient = new DocumentIntelligenceClient(...);
var options = new AnalyzeDocumentOptions("prebuilt-layout", BinaryData.FromStream(stream));
var docResult = await docClient.AnalyzeDocumentAsync(WaitUntil.Completed, options);
  • Uses the prebuilt-layout model to extract paragraphs from the document.
  • Filters out empty or whitespace-only content.

Step 2: Threat Analysis

var safetyClient = new ContentSafetyClient(...);
foreach (var para in paragraphs)
{
    var result = await safetyClient.AnalyzeTextAsync(new AnalyzeTextOptions(para));
    ...
}
  • Each paragraph is sent to Azure AI Content Safety.
  • The service returns a severity score for each category:
    • Category (e.g., Hate, Violence)
    • Severity (0, 2, 4, or 6 with the default four-level output; higher means more severe)
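Note that, as the sample response later in the article shows, the service returns a score for every category, including those with severity 0. If you want to surface only genuinely risky paragraphs, one option (an illustrative sketch, not part of the original snippet) is to keep only the non-zero categories before flagging:

```csharp
// Illustrative variant: flag a paragraph only if at least one category
// scored above zero. "result", "para", and "flagged" are the variables
// from the loop in the full snippet below.
var risky = result.Value.CategoriesAnalysis
    .Where(c => c.Severity > 0)   // drop severity-0 categories
    .Select(c => new { Category = c.Category.ToString(), c.Severity })
    .ToList();

if (risky.Count > 0)
{
    flagged.Add(new { Text = para, Categories = risky });
}
```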

Below is the full code snippet:

[HttpPost("analyze-threats-in-one-go")]
public async Task<IActionResult> AnalyzeThreatsInOneGo([FromForm] IFormFile file)
{
    if (file == null || file.Length == 0)
        return BadRequest("Please upload a valid document.");

    try
    {
        // Step 1: Extract text using Document Intelligence Layout model
        var docClient = new DocumentIntelligenceClient(
            new Uri(AzureAIConstants.DocumentAnalysisEndpoint),
            new AzureKeyCredential(AzureAIConstants.DocumentAnalysisApiKey));
        using var stream = file.OpenReadStream();

        var options = new AnalyzeDocumentOptions("prebuilt-layout", BinaryData.FromStream(stream));

        var docResult = await docClient.AnalyzeDocumentAsync(WaitUntil.Completed, options);

        var paragraphs = docResult?.Value?.Paragraphs?
            .Select(p => p.Content)
            .Where(p => !string.IsNullOrWhiteSpace(p))
            .ToList();
        if (paragraphs == null || paragraphs.Count == 0)
            return Ok(new { message = "No text found in document." });

        // Step 2: Analyze each paragraph with Content Safety
        var safetyClient = new ContentSafetyClient(
            new Uri(AzureAIConstants.ContentSafteyEndpoint),
            new AzureKeyCredential(AzureAIConstants.ContentSafteyApiKey));
        var flagged = new List<object>();

        foreach (var para in paragraphs)
        {
            var result = await safetyClient.AnalyzeTextAsync(new AnalyzeTextOptions(para));
            if (result?.Value?.CategoriesAnalysis?.Any() == true)
            {
                flagged.Add(new
                {
                    Text = para,
                    Categories = result.Value.CategoriesAnalysis.Select(c => new
                    {
                        Category = c.Category.ToString(),
                        Severity = c.Severity
                    })
                });
            }
        }

        return Ok(new
        {
            message = "Threat detection completed",
            flaggedParagraphs = flagged
        });
    }
    catch (Exception ex)
    {
        return StatusCode(500, $"Error during analysis: {ex.Message}");
    }
}

Final Response
Returns a JSON object with all flagged paragraphs and their threat metadata.

Below is a sample response:

{
    "message": "Threat detection completed",
    "flaggedParagraphs": [
        {
            "text": "People like you don’t belong here. You’re the reason everything is falling apart,” the message began, escalating quickly into more aggressive language. “If you show up again, I swear I’ll make you regret it.” The thread continued with explicit descriptions that made several users uncomfortable, and one post even hinted at despair: “Sometimes I feel like giving up and disappearing forever.” Moderators flagged the conversation for review due to its hateful tone, violent threats, sexual undertones, and signs of emotional distress",
            "categories": [
                {
                    "category": "Hate",
                    "severity": 2
                },
                {
                    "category": "SelfHarm",
                    "severity": 0
                },
                {
                    "category": "Sexual",
                    "severity": 0
                },
                {
                    "category": "Violence",
                    "severity": 4
                }
            ]
        }
    ]
}

Step 3: Testing with Swagger or Postman

Postman
1. Create a new POST request
2. URL: your API’s base URL followed by the controller route shown above
3. Under Body, choose form-data
 • Key: file (the name of the [FromForm] parameter)
 • Type: File
 • Value: Upload your test document
4. Send the request and inspect the JSON response
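If you prefer the command line, the same multipart request can be issued with curl. This is a sketch; set the URL variable to your own host, port, and the controller route, and point the file flag at a local test document.

```shell
# Sketch: multipart upload to the threat-detection endpoint.
# Set API_URL to your host, port, and the route from the controller above, e.g.
#   API_URL="https://localhost:5001/api/<your-controller>/<route>"
# -F sends the file as form-data under the key "file" (matching the parameter name);
# -k skips TLS verification for the local dev certificate.
curl -k -X POST "$API_URL" \
  -F "file=@sample-document.pdf"
```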


Why This Matters
This API is a great example of how to:
• Automate document ingestion and moderation
• Use Azure AI services in tandem
• Build secure, scalable pipelines for compliance and accessibility
You can extend this by:
• Auto-redacting flagged content
• Logging threat metadata for audits
• Integrating with Immersive Reader for safe display
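As an example of the first extension point, auto-redaction can be sketched as a simple pass that masks flagged paragraphs before the document is displayed. The helper below is hypothetical and not part of the original API; it assumes you collect the flagged paragraph texts into a set after the analysis loop.

```csharp
using System.Collections.Generic;

// Hypothetical redaction helper: masks any paragraph that was flagged by
// the analysis, leaving safe paragraphs untouched.
public static class Redactor
{
    public static List<string> ReplaceFlagged(
        List<string> paragraphs,
        HashSet<string> flaggedTexts,
        string mask = "[REDACTED]")
    {
        var output = new List<string>();
        foreach (var p in paragraphs)
            output.Add(flaggedTexts.Contains(p) ? mask : p);
        return output;
    }
}
```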