Serilog is a popular diagnostic logging library with support for many (many) destinations, called "sinks". It can send traces and structured event data to the console, to email, and to databases such as SQL Server and Azure Cosmos DB.
Azure Data Explorer is a fully managed, high-performance, big data analytics platform that makes it easy to analyze high volumes of data in near real time. It handles structured, semi-structured, and unstructured data alike, makes it simple to extract key insights and spot patterns and trends, and is extremely useful for log analytics.
Coupling Serilog and Azure Data Explorer makes perfect sense, allowing developers to easily send logs and event data to their Azure Data Explorer clusters.
Serilog.Sinks.AzureDataExplorer is a NuGet package that provides a sink for the Serilog library with various features:
- Supports both Queued and Streaming ingestion
- Supports Data Mappings
- Supports AAD user and application authentication
- Supports Azure Data Explorer, Azure Synapse Data Explorer and Azure Data Explorer Free-Tier
## Getting started
Install from NuGet:

```powershell
Install-Package Serilog.Sinks.AzureDataExplorer
```
## How to use
```csharp
var log = new LoggerConfiguration()
    .WriteTo.AzureDataExplorer(new AzureDataExplorerSinkOptions
    {
        IngestionEndpointUri = "https://ingest-mycluster.northeurope.kusto.windows.net",
        DatabaseName = "MyDatabase",
        TableName = "Serilogs"
    })
    .CreateLogger();
```
## Options

### Batching
- BatchPostingLimit: The maximum number of events to post in a single batch. Defaults to 50.
- Period: The time to wait between checking for event batches. Defaults to 2 seconds.
- QueueSizeLimit: The maximum number of events that will be held in memory while waiting to ship them to Azure Data Explorer. Beyond this limit, events will be dropped. The default is 100,000.
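The batching options above are set directly on `AzureDataExplorerSinkOptions` alongside the target settings. A minimal sketch (the endpoint, database, table, and tuning values below are placeholders; choose values that fit your workload):

```csharp
// Sketch: tuning the batching behavior of the sink. Assumes BatchPostingLimit,
// Period, and QueueSizeLimit are properties on AzureDataExplorerSinkOptions,
// as listed above. All names and values are illustrative placeholders.
var log = new LoggerConfiguration()
    .WriteTo.AzureDataExplorer(new AzureDataExplorerSinkOptions
    {
        IngestionEndpointUri = "https://ingest-mycluster.northeurope.kusto.windows.net",
        DatabaseName = "MyDatabase",
        TableName = "Serilogs",
        BatchPostingLimit = 100,            // ship at most 100 events per batch
        Period = TimeSpan.FromSeconds(5),   // check for pending batches every 5 seconds
        QueueSizeLimit = 50_000             // drop events once 50,000 are buffered
    })
    .CreateLogger();
```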
### Target ADX Cluster
- IngestionEndpointUri: Ingestion endpoint of the target ADX cluster.
- DatabaseName: Database name where the events will be ingested.
- TableName: Table name where the events will be ingested.
- UseStreamingIngestion: Whether to use streaming ingestion (reduced latency, at the cost of reduced throughput) or queued ingestion (increased latency, but much higher throughput).
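Switching between the two ingestion modes is a single flag. A sketch of enabling streaming ingestion (note that streaming ingestion must also be enabled on the target cluster and database; the endpoint and names are placeholders):

```csharp
// Sketch: opting into streaming ingestion for lower end-to-end latency,
// at the cost of throughput. Placeholder endpoint, database, and table.
var options = new AzureDataExplorerSinkOptions
{
    IngestionEndpointUri = "https://ingest-mycluster.northeurope.kusto.windows.net",
    DatabaseName = "MyDatabase",
    TableName = "Serilogs",
    UseStreamingIngestion = true  // default (false) uses queued ingestion
};
```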
### Mapping
Azure Data Explorer provides data mapping capabilities, making it possible to extract data from the ingested JSON documents as part of the ingestion. This means paying a one-time cost of processing the JSON during ingestion, and a reduced cost at query time.
By default, the sink uses the following data mapping:
| Column Name | Column Type | JSON Path |
|-------------|-------------|--------------|
| Timestamp | datetime | $.Timestamp |
| Level | string | $.Level |
| Message | string | $.Message |
| Exception | string | $.Exception |
| Properties | dynamic | $.Properties |
This mapping can be overridden using the following options:
- MappingName: Use a data mapping configured in ADX.
- ColumnsMapping: Use an ingestion-time data mapping.
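A sketch of the `MappingName` route, which references a mapping already created on the ADX side. It assumes a JSON ingestion mapping named `serilog_custom_mapping` (a placeholder name) has been created on the target table beforehand:

```csharp
// Sketch: pointing the sink at a pre-created ingestion mapping in ADX.
// "serilog_custom_mapping" is a placeholder; it must match the name of a
// mapping created on the target table (e.g. with the
// `.create table ... ingestion json mapping` management command).
var options = new AzureDataExplorerSinkOptions
{
    IngestionEndpointUri = "https://ingest-mycluster.northeurope.kusto.windows.net",
    DatabaseName = "MyDatabase",
    TableName = "Serilogs",
    MappingName = "serilog_custom_mapping"
};
```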
## Authentication
The sink supports several authentication modes. Configure the desired mode with one of the following extension methods:

```csharp
new AzureDataExplorerSinkOptions()
    .WithXXX(...)
```
| Mode | Method | Notes |
|------|--------|-------|
| AadUserPrompt | WithAadUserPrompt | Recommended for development only! |
| AadUserToken | WithAadUserToken | |
| AadApplicationCertificate | WithAadApplicationCertificate | |
| AadApplicationKey | WithAadApplicationKey | |
| AadApplicationSubjectName | WithAadApplicationSubjectName | |
| AadApplicationThumbprint | WithAadApplicationThumbprint | |
| AadApplicationToken | WithAadApplicationToken | |
| AadAzureTokenCredentials | WithAadAzureTokenCredentials | |
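For example, AAD application (service principal) authentication with a key might look like the sketch below. The parameter names and order are an assumption based on common Kusto client conventions; consult the package documentation for the exact `WithAadApplicationKey` signature. All values are placeholders:

```csharp
// Sketch: authenticating as an AAD application using a key (client secret).
// Parameter order/names are assumed, not confirmed; all values are placeholders.
var options = new AzureDataExplorerSinkOptions
    {
        IngestionEndpointUri = "https://ingest-mycluster.northeurope.kusto.windows.net",
        DatabaseName = "MyDatabase",
        TableName = "Serilogs"
    }
    .WithAadApplicationKey("<application-client-id>", "<application-key>", "<tenant-id>");
```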