
Mastering Modern .NET Logging - Structured Logging and Advanced Concepts
Author - Abdul Rahman (Bhai)
What we gonna do?
📚 Prerequisites: This is an advanced guide. If you're new to .NET logging, start with Introduction to Logging in .NET - From Basics to Best Practices to learn the fundamentals first.
You've learned the basics of logging in .NET, but there's a whole world of power hidden beneath the surface. The modern .NET logger isn't just about writing messages to a console - it's a sophisticated system designed for structured logging, flexible filtering, and enterprise-grade observability.
In this deep-dive article, we'll explore the inner workings of ILogger and discover why the logging approach you've been using might be fundamentally wrong. We'll master structured logging principles, understand the distinction between message templates and parameters, and learn how to properly configure log levels, categories, and event IDs for production-ready .NET applications.
Whether you're building console apps, APIs, Blazor applications, or microservices, this guide will transform how you think about logging - moving from simple text output to rich, queryable, structured data that becomes invaluable when troubleshooting production issues.
Why we gonna do?
The Critical Flaw in Traditional Logging
Many developers write logs like this, thinking they're doing the right thing:
var name = "John";
var age = 30;
// ❌ WRONG - String interpolation
logger.LogInformation($"User {name} is {age} years old");
// ❌ WRONG - String concatenation
logger.LogInformation("User " + name + " is " + age + " years old");
This looks harmless - the message appears correctly in the console. But here's the problem: when you use string interpolation or concatenation, you're creating a finalized string before it reaches the logger. The logger receives this:
info: Program[0]
User John is 30 years old
Why This Breaks Structured Logging
Now let's see what happens in production with a JSON logging provider (which you'd use with modern observability platforms like Application Insights, Seq, Elasticsearch, or Datadog):
// Configure JSON console logging
services.AddLogging(builder =>
{
builder.AddJsonConsole();
});
The output shows the fundamental problem:
{
"Timestamp": "2026-05-24T10:30:45.123Z",
"Level": "Information",
"Category": "Program",
"Message": "User John is 30 years old",
"State": {
"Message": "User John is 30 years old",
"{OriginalFormat}": "User John is 30 years old"
}
}
The critical data you need - name and age - are baked into the message string. They're not separate, queryable fields. This creates serious problems:
- No filtering capability: You can't search for "all logs where age > 25" or "all logs for user John"
- No structured queries: Observability platforms can't index the values as distinct fields
- Performance overhead: String operations execute even when the log level is disabled
- Analysis nightmare: Parsing thousands of log messages with regex to extract values is brittle and slow
- Lost context: You've converted typed data (int age = 30) into untyped strings
Imagine searching through 100,000 log entries trying to find all instances where a specific user encountered an error. With string interpolation, you're forced to do full-text searches on message strings - slow, imprecise, and expensive in cloud logging services where you pay per query.
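The performance cost mentioned above is easy to see in code. Here's a minimal sketch of the difference (the SerializeCart helper is hypothetical, just standing in for any expensive operation):

```csharp
// Assume Debug is below the configured minimum level.

// ❌ Interpolation: the finalized string - and any expensive call
// inside it - is built BEFORE the logger can check whether
// Debug logging is even enabled.
logger.LogDebug($"Cart snapshot: {SerializeCart(cart)}");

// ✅ Template: the logger checks the level before formatting.
// Note the arguments are still evaluated at the call site, so
// guard truly expensive work with an explicit IsEnabled check:
if (logger.IsEnabled(LogLevel.Debug))
{
    logger.LogDebug("Cart snapshot: {Cart}", SerializeCart(cart));
}
```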
The Power of Structured Logging
Structured logging (also called semantic logging) solves this by treating log messages as templates with parameters rather than finalized strings. When done correctly, your logging provider can:
- Store parameter values as separate, indexed fields in the log entry
- Preserve data types (age stays an integer, not a string)
- Enable powerful queries like "show all logs where userId = '12345' and responseTime > 1000"
- Create automatic dashboards and alerts based on parameter values
- Reduce storage costs by efficiently indexing repeated templates
This isn't just a "nice to have" feature - it's the foundation of modern application observability. When production goes down at 3 AM, structured logging is the difference between finding the issue in 5 minutes vs. 5 hours.
Why Logs Must Be Consistent
Another critical principle: log message templates should never vary. Modern logging systems group and aggregate logs by their template pattern. The built-in .NET analyzers enforce this with rule CA2254, which warns:
"The logging message template should not vary between calls"
This means the template structure should be consistent across all invocations, with only the parameter values changing - not the template itself. This enables log aggregation, pattern detection, and anomaly identification.
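To make that concrete, both calls below share one template, so aggregation platforms group them under the same pattern while the parameter values differ:

```csharp
// One template, many values - every call aggregates under
// the same "User {UserId} logged in" pattern.
logger.LogInformation("User {UserId} logged in", 12345);
logger.LogInformation("User {UserId} logged in", 67890);
```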
How we gonna do?
The Modern .NET Logger Across Application Types
One of the beautiful aspects of the .NET logging abstraction is its universal availability. The ILogger<T> interface works identically across all .NET application types. Let's see how it appears in different contexts.
Console Application
In a console app, you configure logging explicitly using dependency injection:
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
var services = new ServiceCollection();
services.AddLogging(builder =>
{
builder.AddConsole();
builder.SetMinimumLevel(LogLevel.Debug);
});
var serviceProvider = services.BuildServiceProvider();
var logger = serviceProvider.GetRequiredService<ILogger<Program>>();
logger.LogInformation("Console application started");
logger.LogDebug("Debug information available");
// Output:
// info: Program[0]
// Console application started
// dbug: Program[0]
// Debug information available
The category name (the "Program" in "Program[0]" - the "[0]" is the event ID) comes from the generic type parameter ILogger&lt;Program&gt;. This helps identify which part of your application generated each log entry.
Minimal API
In ASP.NET Core Minimal APIs, the logger is automatically configured and available for injection:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/weather", (ILogger<Program> logger) =>
{
logger.LogInformation("Weather endpoint accessed");
var forecast = new[] { "Sunny", "Rainy", "Cloudy" };
var weather = forecast[Random.Shared.Next(forecast.Length)];
logger.LogInformation("Returning weather: {Weather}", weather);
return new { weather };
});
app.Run();
Notice how you simply add ILogger<Program> as a parameter to your endpoint lambda - the framework automatically injects the logger instance.
Web API Controller
In traditional controller-based APIs, logging is typically injected via constructor:
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
private readonly ILogger<WeatherForecastController> _logger;
public WeatherForecastController(
ILogger<WeatherForecastController> logger)
{
_logger = logger;
}
[HttpGet(Name = "GetWeatherForecast")]
public IEnumerable<WeatherForecast> Get()
{
_logger.LogInformation("Retrieving weather forecast");
// Implementation
var forecasts = GenerateForecasts();
_logger.LogInformation(
"Generated {Count} weather forecasts",
forecasts.Count());
return forecasts;
}
}
The logger is injected automatically by ASP.NET Core's dependency injection container. Notice the category name now reflects the controller type: WeatherForecastController.
Blazor Component
In Blazor applications, you can inject the logger directly into components:
@page "/counter"
@inject ILogger<Counter> Logger
<h3>Counter: @currentCount</h3>
<button @onclick="IncrementCount">Increment</button>
@code {
private int currentCount = 0;
private void IncrementCount()
{
currentCount++;
Logger.LogInformation(
"Counter incremented to {Count}",
currentCount);
}
}
The pattern is consistent: @inject ILogger<ComponentType> and then use it anywhere in your component logic.
Important: ILogger vs ILogger<T>
The non-generic ILogger interface is not registered in the dependency injection container by default - you can only inject ILogger&lt;T&gt; with a specific type parameter.
If you need the non-generic version, you must register it explicitly:
services.AddSingleton(sp =>
sp.GetRequiredService<ILoggerFactory>()
.CreateLogger("CustomCategory"));
However, the recommended pattern is to always use ILogger<T> where T is the class or component using the logger. This automatically sets the category name to the fully qualified type name, making it easier to filter and organize logs.
Understanding the Log Methods
When you write logger.LogInformation("message"), you're actually calling an extension method. If you examine the ILogger interface definition, you'll find it has surprisingly few members:
public interface ILogger
{
void Log<TState>(
LogLevel logLevel,
EventId eventId,
TState state,
Exception? exception,
Func<TState, Exception?, string> formatter);
bool IsEnabled(LogLevel logLevel);
IDisposable? BeginScope<TState>(TState state)
where TState : notnull;
}
The Log method is the core logging method - all convenience methods like LogInformation, LogError, and LogDebug eventually call this.
The Core Log Method
Let's deconstruct the Log method to understand what's happening under the hood:
// Using LogInformation (extension method)
logger.LogInformation("Application started");
// Is actually equivalent to:
logger.Log(
logLevel: LogLevel.Information,
eventId: new EventId(0),
state: "Application started",
exception: null,
formatter: (state, ex) => state.ToString());
You can use the Log method directly, but it's verbose and unnecessary in most cases. The extension methods provide a much cleaner API while giving you the same power.
Log Levels Explained
The LogLevel enum defines six levels of severity, plus the special None value that disables logging:
public enum LogLevel
{
Trace = 0, // Most detailed messages
Debug = 1, // Diagnostic information
Information = 2, // General informational messages
Warning = 3, // Something unexpected happened
Error = 4, // An error occurred but app continues
Critical = 5, // Critical failure, app may terminate
None = 6 // No logging threshold (disable all)
}
Each level has a corresponding convenience method:
logger.LogTrace("Trace-level diagnostic message");
logger.LogDebug("Debug information for development");
logger.LogInformation("General informational message");
logger.LogWarning("Warning: something unexpected occurred");
logger.LogError("Error: operation failed but app continues");
logger.LogCritical("Critical failure: immediate attention needed");
When you set a minimum log level, any logs below that threshold are suppressed. For example, if the minimum level is set to Information, then Trace and Debug logs won't appear.
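Because the enum values are ordered numerically, a provider can implement the threshold check as a simple comparison. This is only a conceptual sketch of the idea, not the actual framework implementation:

```csharp
// Conceptual sketch: a log is emitted only when its level is
// at or above the configured minimum (and isn't None).
LogLevel minimum = LogLevel.Information;

bool ShouldLog(LogLevel level) =>
    level != LogLevel.None && level >= minimum;

// ShouldLog(LogLevel.Debug)   -> false (1 < 2)
// ShouldLog(LogLevel.Warning) -> true  (3 >= 2)
```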
Structured Logging with Message Templates
Now we arrive at the most important concept: message templates. This is where structured logging comes to life.
Instead of using string interpolation, you define a template with named placeholders in curly braces:
var userName = "John";
var userAge = 30;
// ✅ CORRECT - Structured logging with message template
logger.LogInformation(
"User {UserName} is {Age} years old",
userName,
userAge);
// Console output:
// info: Program[0]
// User John is 30 years old
Notice the curly braces in the message string? Those are template placeholders, not string interpolation. The string is called the message template or format string.
Here's how message templates transform into structured logs:
┌───────────────────────────────────────────────────────────────┐
│ Message Template Flow - How It Works │
├───────────────────────────────────────────────────────────────┤
│ │
│ Step 1: Your Code │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ logger.LogInformation( │ │
│ │ "User {UserName} is {Age} years old", │ │
│ │ "John", // userName value │ │
│ │ 30); // userAge value │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ Step 2: Logger Parses Template │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Template: "User {UserName} is {Age} years old" │ │
│ │ │ │
│ │ Parameters: │ │
│ │ Position 0: "John" → maps to {UserName} │ │
│ │ Position 1: 30 → maps to {Age} │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ Step 3: Provider Creates Structured Output │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ { │ │
│ │ "Message": "User John is 30 years old", │ │
│ │ "State": { │ │
│ │ "UserName": "John", ← Searchable field! │ │
│ │ "Age": 30, ← Typed integer! │ │
│ │ "{OriginalFormat}": "User {UserName} is..." │ │
│ │ } │ │
│ │ } │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
│ Result: Parameters become indexed, queryable fields! │
│ • Query by UserName: State.UserName == "John" │
│ • Query by Age range: State.Age > 25 │
│ • Aggregate by template pattern for trend analysis │
│ │
└───────────────────────────────────────────────────────────────┘
After the template, you pass the actual values as additional parameters. The logger matches them by position to the placeholders in the template.
Why Templates Don't Need to Match Variable Names
Here's something important: the placeholder names don't have to match your variable names:
var userName = "John";
var userAge = 30;
// Placeholder names differ from variable names
logger.LogInformation(
"User {FirstName} is {YearsOld} years old",
userName, // Maps to {FirstName}
userAge); // Maps to {YearsOld}
// Output is the same:
// info: Program[0]
// User John is 30 years old
Parameters are matched by position, not by name. However, using meaningful placeholder names is crucial because they become the property names in structured log output.
Structured Logging Output
With a JSON console provider, the magic becomes visible:
// Configure JSON console provider
services.AddLogging(builder =>
{
builder.AddJsonConsole();
});
var logger = serviceProvider.GetRequiredService<ILogger<Program>>();
logger.LogInformation(
"User {UserName} logged in with role {Role}",
"John",
"Administrator");
// JSON output:
// {
// "Timestamp": "2026-05-24T10:30:45.123Z",
// "Level": "Information",
// "Category": "Program",
// "Message": "User John logged in with role Administrator",
// "State": {
// "Message": "User John logged in with role Administrator",
// "UserName": "John",
// "Role": "Administrator",
// "{OriginalFormat}": "User {UserName} logged in with role {Role}"
// }
// }
Look at the State object - the individual parameters are preserved as separate properties:
- UserName: "John"
- Role: "Administrator"
- {OriginalFormat}: The original template for grouping/aggregation
Now you can query logs with precision:
// Query examples in logging platforms:
// Find all logs for a specific user
State.UserName == "John"
// Find all administrator logins
State.Role == "Administrator" AND Message CONTAINS "logged in"
// Count login events by role
GROUP BY State.Role WHERE Message CONTAINS "logged in"
This is the power of structured logging - transforming logs from unstructured text into queryable, analyzable data.
Data Type Preservation
Structured logging preserves the original data types of your parameters:
var userId = 12345;
var responseTime = 250.75;
var isSuccessful = true;
logger.LogInformation(
"Request {UserId} completed in {ResponseTimeMs}ms - Success: {IsSuccess}",
userId,
responseTime,
isSuccessful);
// JSON output preserves types:
// {
// "State": {
// "UserId": 12345, // integer, not "12345"
// "ResponseTimeMs": 250.75, // double, not "250.75"
// "IsSuccess": true // boolean, not "true"
// }
// }
This enables numeric comparisons, range queries, and aggregations in your logging platform - something impossible with string-interpolated logs.
Here's a visual representation of how parameters become indexed fields:
┌───────────────────────────────────────────────────────────────┐
│ From Code Parameters to Indexed Log Fields │
├───────────────────────────────────────────────────────────────┤
│ │
│ Your Application Code: │
│ │
│ var userId = 12345; (integer) │
│ var responseTime = 250.75; (double) │
│ var isSuccessful = true; (boolean) │
│ │
│ logger.LogInformation( │
│ "Request {UserId} completed in {ResponseTimeMs}ms...", │
│ userId, │ │
│ responseTime, │─── Position-based mapping │
│ isSuccessful); │ │
│ │ │
│ ▼ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Logging Platform Storage (e.g., Elasticsearch, Seq)│ │
│ ├────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ 🔍 Indexed Fields (Fast Queries): │ │
│ │ │ │
│ │ UserId: 12345 [Integer Index] │ │
│ │ └─ Query: UserId > 10000 │ │
│ │ └─ Aggregate: COUNT(DISTINCT UserId) │ │
│ │ │ │
│ │ ResponseTimeMs: 250.75 [Numeric Index] │ │
│ │ └─ Query: ResponseTimeMs > 200 │ │
│ │ └─ Aggregate: AVG(ResponseTimeMs) │ │
│ │ │ │
│ │ IsSuccess: true [Boolean Index] │ │
│ │ └─ Query: IsSuccess = false │ │
│ │ └─ Aggregate: COUNT WHERE IsSuccess = true │ │
│ │ │ │
│ └────────────────────────────────────────────────────┘ │
│ │
│ 🚀 Power of Structured Logging: │
│ • Fast filtering by specific user IDs │
│ • Performance analysis with numeric ranges │
│ • Success rate calculations with boolean fields │
│ • All without parsing text strings! │
│ │
└───────────────────────────────────────────────────────────────┘
Understanding Log Categories
The category is an arbitrary string that helps organize and filter logs. When you use ILogger<T>, the category is automatically set to the fully qualified name of type T:
// In a class called UserService in namespace MyApp.Services:
private readonly ILogger<UserService> _logger;
public UserService(ILogger<UserService> logger)
{
_logger = logger;
}
// Category will be: "MyApp.Services.UserService"
You can also create a logger with a custom category using ILoggerFactory:
public class UserService
{
private readonly ILogger _logger;
public UserService(ILoggerFactory loggerFactory)
{
// Custom category name
_logger = loggerFactory.CreateLogger("CustomCategory");
}
}
However, the convention is to use the type name because it provides automatic context about where the log originated.
Category-Based Filtering
Categories become powerful when combined with filtering. You can set different minimum log levels for different categories:
// appsettings.json
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.EntityFrameworkCore": "Information",
"MyApp.Services.UserService": "Debug"
}
}
}
This configuration means:
- Most of the app logs at Information level and above
- Microsoft's internal logs are filtered to Warning and above (reduces noise)
- Entity Framework logs at Information (to see SQL queries)
- UserService logs at Debug level for detailed diagnostics
You can also configure this in code:
services.AddLogging(builder =>
{
builder.AddConsole();
// Set default minimum level
builder.SetMinimumLevel(LogLevel.Information);
// Override for specific categories
builder.AddFilter("Microsoft", LogLevel.Warning);
builder.AddFilter("Microsoft.EntityFrameworkCore", LogLevel.Information);
builder.AddFilter(
"MyApp.Services.UserService",
LogLevel.Debug);
});
Working with Minimum Log Levels
The minimum log level acts as a threshold - any logs below it are suppressed. This is crucial for performance and managing log volume in production.
services.AddLogging(builder =>
{
builder.AddConsole();
builder.SetMinimumLevel(LogLevel.Information);
});
var logger = serviceProvider.GetRequiredService<ILogger<Program>>();
// These will NOT appear (below threshold):
logger.LogTrace("This trace won't show");
logger.LogDebug("This debug won't show");
// These WILL appear (at or above threshold):
logger.LogInformation("This information will show");
logger.LogWarning("This warning will show");
logger.LogError("This error will show");
logger.LogCritical("This critical will show");
In ASP.NET Core applications, the minimum log level is typically configured in appsettings.json:
{
"Logging": {
"LogLevel": {
"Default": "Information"
}
}
}
You can override this for different environments:
// appsettings.Development.json
{
"Logging": {
"LogLevel": {
"Default": "Debug" // More verbose in development
}
}
}
// appsettings.Production.json
{
"Logging": {
"LogLevel": {
"Default": "Warning" // Less verbose in production
}
}
}
Understanding Event IDs
The EventId is an optional identifier you can assign to log entries to categorize and track specific events:
logger.LogInformation(
eventId: new EventId(1001, "UserLogin"),
message: "User {UserName} logged in",
userName);
logger.LogWarning(
eventId: new EventId(2001, "InvalidCredentials"),
message: "Invalid login attempt for user {UserName}",
userName);
logger.LogError(
eventId: new EventId(3001, "DatabaseConnectionFailed"),
message: "Failed to connect to database {DatabaseName}",
databaseName);
The EventId struct has two parts:
- Id (integer): A numeric identifier for the event
- Name (string, optional): A descriptive name for the event
Event IDs allow you to:
- Quickly filter logs by specific event types
- Create alerts based on event IDs (e.g., alert when event 3001 occurs)
- Track patterns and trends for specific operations
- Document your logging strategy with well-defined event catalogs
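A convenient detail worth knowing: EventId defines an implicit conversion from int, so a plain integer works anywhere an EventId is expected - the Id is set and the Name is simply left empty:

```csharp
// These two calls are equivalent apart from the event name:
logger.LogWarning(new EventId(2001, "InvalidCredentials"),
    "Invalid login attempt for user {UserName}", userName);

logger.LogWarning(2001,
    "Invalid login attempt for user {UserName}", userName);
// The int form has no Name, but the Id still appears as [2001].
```

This implicit conversion is what makes the integer-constants pattern shown in the next section work without any explicit EventId construction.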
Organizing Event IDs
A common pattern is to define event IDs as constants in a dedicated class:
public static class LogEvents
{
// User-related events (1000-1999)
public const int UserLogin = 1001;
public const int UserLogout = 1002;
public const int UserRegistration = 1003;
public const int PasswordReset = 1004;
// Data access events (2000-2999)
public const int DatabaseQuery = 2001;
public const int DatabaseQuerySlow = 2002;
public const int DatabaseConnectionFailed = 2003;
// External service events (3000-3999)
public const int ApiCallStarted = 3001;
public const int ApiCallSucceeded = 3002;
public const int ApiCallFailed = 3003;
public const int ApiCallTimeout = 3004;
// Business logic events (4000-4999)
public const int OrderCreated = 4001;
public const int OrderCancelled = 4002;
public const int PaymentProcessed = 4003;
public const int PaymentFailed = 4004;
}
// Usage:
logger.LogInformation(
LogEvents.UserLogin,
"User {UserName} logged in successfully",
userName);
logger.LogError(
LogEvents.DatabaseConnectionFailed,
"Failed to connect to database after {RetryCount} attempts",
retryCount);
This approach makes your logging consistent, discoverable, and maintainable across large applications.
Event IDs in Console Output
When using the console provider, event IDs appear in the log output:
logger.LogInformation(LogEvents.UserLogin, "User {UserName} logged in", "John");
// Console output:
// info: MyApp.Services.UserService[1001]
// User John logged in
// ^^^^ ^^^^
// category event ID
The number in square brackets [1001] is the event ID, making it easy to spot specific event types in console output.
Best Practices for Production Logging
Let's wrap up with key best practices that will serve you well in production:
1. Always Use Structured Logging
// ❌ NEVER do this:
logger.LogInformation($"Processing order {orderId} for user {userId}");
// ✅ ALWAYS do this:
logger.LogInformation(
"Processing order {OrderId} for user {UserId}",
orderId,
userId);
2. Use Meaningful Placeholder Names
// ❌ Poor placeholder names:
logger.LogInformation("User {A} created order {B}", userId, orderId);
// ✅ Descriptive placeholder names:
logger.LogInformation(
"User {UserId} created order {OrderId}",
userId,
orderId);
3. Keep Message Templates Consistent
// ❌ Varying templates break log aggregation:
logger.LogInformation("User {UserId} logged in", userId);
logger.LogInformation("User {UserId} has logged in", userId);
logger.LogInformation("{UserId} logged in", userId);
// ✅ Use ONE consistent template:
logger.LogInformation("User {UserId} logged in", userId);
4. Choose Appropriate Log Levels
- Trace: Very detailed diagnostic info, typically only in development
- Debug: Diagnostic info useful for debugging, disabled in production
- Information: General flow of the application (startup, shutdown, major operations)
- Warning: Unexpected events that don't stop execution (deprecated features, poor API usage)
- Error: Errors and exceptions that prevent specific operations from completing
- Critical: Critical failures requiring immediate attention (data corruption, disk full)
5. Include Context in Error Logs
// ❌ Insufficient context:
logger.LogError("Database query failed");
// ✅ Rich context for troubleshooting:
logger.LogError(
exception,
"Database query failed for user {UserId} when accessing {TableName} " +
"with query {QueryText}. Execution time: {ExecutionTimeMs}ms",
userId,
tableName,
queryText,
executionTime);
6. Don't Log Sensitive Information
// ❌ NEVER log passwords, tokens, or PII:
logger.LogInformation(
"User {Email} authenticated with password {Password}",
email,
password);
// ✅ Log only non-sensitive identifiers:
logger.LogInformation(
"User {UserId} authenticated successfully",
userId);
7. Use Categories for Organization
// ✅ Inject ILogger<T> where T is your class:
public class UserService
{
private readonly ILogger<UserService> _logger;
public UserService(ILogger<UserService> logger)
{
_logger = logger;
}
// Category will be "MyApp.Services.UserService"
}
8. Configure Appropriate Levels for Production
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information",
"System": "Warning"
}
}
}
This reduces noise from framework code while keeping your application logs visible.
Summary
In this deep-dive exploration of the modern .NET logger, we've uncovered the fundamental principles that separate amateur logging from production-grade observability:
- Structured logging is mandatory - Never use string interpolation or concatenation. Always use message templates with parameters to preserve data types and enable powerful queries.
- ILogger<T> works everywhere - The same logging interface and patterns work across console apps, APIs, Blazor components, and all other .NET application types.
- Message templates are the foundation - Use curly-brace placeholders in your log messages and pass values as separate parameters. This enables proper structured logging output.
- Categories organize your logs - Use ILogger<T> where T is your class name to automatically set meaningful categories, enabling precise filtering and analysis.
- Log levels control verbosity - Set appropriate minimum log levels for different environments and categories to manage performance and log volume while maintaining visibility.
- Event IDs enable tracking - Assign consistent event IDs to categorize and track specific operations, enabling alerts and pattern analysis.
- Templates must be consistent - Keep message template structure identical across calls to enable log aggregation, pattern detection, and meaningful analytics.
Mastering these concepts transforms logging from an afterthought into a powerful debugging and observability tool. When production issues arise, structured logs with proper categories, levels, and event IDs become your most valuable asset - enabling you to quickly identify root causes, understand system behavior, and resolve problems before they impact users.
The investment you make in proper logging practices today will pay dividends every time you need to troubleshoot a production issue, analyze system performance, or satisfy audit requirements. Start applying structured logging consistently across your applications - your future self will thank you.
Continue learning: Now that you've mastered structured logging, explore advanced topics in our upcoming articles on Serilog enrichers, log correlation across distributed systems, OpenTelemetry integration, and building custom log providers for specialized scenarios. Want to learn more about the fundamentals? Check out our introductory guide: Introduction to Logging in .NET - From Basics to Best Practices.