Azure SQL and enabling auditing with Terraform
Sometimes when you're using Terraform for your Infrastructure as Code with Azure, it's a bit tricky to match up what you can see in the Azure Portal versus the Terraform resources in the AzureRM provider. Enabling auditing in Azure SQL is a great example.
In the Azure Portal, select your Azure SQL resource, then expand the Security menu and select Auditing. You can then choose to Enable Azure SQL Auditing, and upon doing this you can then choose to send auditing data to any or all of Azure Storage, Log Analytics and/or Event Hub.
It's also worth highlighting that usually you'd enable auditing at the server level, but it is also possible to enable it per database.
The two Terraform resources you may have encountered to manage this are azurerm_mssql_server_extended_auditing_policy and azurerm_mssql_database_extended_auditing_policy. It's useful to refer back to the Azure SQL documentation on setting up auditing to understand how to use these.
A couple of points that are worth highlighting:
- If you don't use the audit_actions_and_groups property, the default groups of actions that will be audited are BATCH_COMPLETED_GROUP, SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP and FAILED_DATABASE_AUTHENTICATION_GROUP (see the example after this list).
- If you define auditing at the server level, the policy applies to all existing and newly created databases on the server. If you define auditing at the database level, the policy will apply in addition to any server level settings. So be careful you don't end up auditing the same thing twice unintentionally!
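If you want different groups, you can set audit_actions_and_groups explicitly. Here's a minimal sketch - the particular action groups chosen are illustrative only:

resource "azurerm_mssql_server_extended_auditing_policy" "auditing" {
  server_id = azurerm_mssql_server.mssql.id

  # Illustrative selection - substitute the action groups you actually need
  audit_actions_and_groups = [
    "BATCH_COMPLETED_GROUP",
    "FAILED_DATABASE_AUTHENTICATION_GROUP",
    "DATABASE_PERMISSION_CHANGE_GROUP",
  ]
}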
Sometimes it can also be useful to review the equivalent Bicep/ARM definitions (Microsoft.Sql/servers/extendedAuditingSettings), as they can clarify how to use various properties.
You'll see both the Terraform and Bicep have properties to configure using a Storage Account, but while you can see Log Analytics and Event Hub in the Portal UI, it's not obvious how those are set up.
The simplest policy you can set is this:
resource "azurerm_mssql_server_extended_auditing_policy" "auditing" { server_id = azurerm_mssql_server.mssql.id }
This enables the server auditing policy, but the data isn't going anywhere yet!
Storage account
When you select an Azure Storage Account for storing auditing data, you will end up with a bunch of .xel files created under a sqldbauditlogs blob container. There are a number of ways to view the .xel files, documented here.
Using a storage account for storing auditing has a few variations, depending on how you want to authenticate to the Storage Account.
Access key
resource "azurerm_mssql_server_extended_auditing_policy" "auditing" { server_id = azurerm_mssql_server.mssql.id storage_endpoint = azurerm_storage_account.storage.primary_blob_endpoint storage_account_access_key = azurerm_storage_account.storage.primary_access_key storage_account_access_key_is_secondary = false retention_in_days = 6 }
Normally storage_account_access_key_is_secondary would be set to false, but if you are rotating your storage access keys, then you may choose to switch to the secondary key while you're rotating the primary.
Managed identity
You can also use managed identity to authenticate to the storage account. In this case you don't supply the access key properties, but you will need to add a role assignment granting the Storage Blob Data Contributor role to the identity of your Azure SQL resource (see the sketch after the policy below).
resource "azurerm_mssql_server_extended_auditing_policy" "auditing" { server_id = azurerm_mssql_server.mssql.id storage_endpoint = azurerm_storage_account.storage.primary_blob_endpoint retention_in_days = 6 }
Log Analytics workspaces
To send data to a Log Analytics workspace, the log_monitoring_enabled property needs to be set to true. This is the default. But to tell it which workspace to send the data to, you need to add an azurerm_monitor_diagnostic_setting resource.

resource "azurerm_monitor_diagnostic_setting" "mssql_server_to_log_analytics" {
  name                       = "example-diagnostic-setting"
  target_resource_id         = "${azurerm_mssql_server.mssql.id}/databases/master"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id

  enabled_log {
    category = "SQLSecurityAuditEvents"
  }
}
Note that for the server policy, you set the target_resource_id to the master database of the server, not the resource ID of the server itself.
Once data is flowing, you can view the auditing data in Log Analytics.
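A rough sketch of a KQL query for browsing those events - this assumes they land in the AzureDiagnostics table (rather than a resource-specific table), and the column names are indicative:

AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| project TimeGenerated, action_name_s, statement_s, succeeded_s
| order by TimeGenerated desc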
Event Hub
Likewise, if you want data to go to an Event Hub, you need to use the azurerm_monitor_diagnostic_setting resource.

resource "azurerm_monitor_diagnostic_setting" "mssql_server_to_event_hub" {
  name                           = "ds_mssql_event_hub"
  target_resource_id             = "${azurerm_mssql_server.mssql.id}/databases/master"
  eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.eh.id
  eventhub_name                  = azurerm_eventhub.eh.name

  enabled_log {
    category = "SQLSecurityAuditEvents"
  }
}
Multiple destinations
As is implied by the Azure Portal, you can have one, two or all three destinations enabled for auditing. But it isn't immediately obvious that you should only have one azurerm_monitor_diagnostic_setting for your server auditing - don't create separate azurerm_monitor_diagnostic_setting resources for each destination - Azure will not allow it.
For example, if you're going to log to all three, you'd have a single diagnostic setting resource like this:
resource "azurerm_monitor_diagnostic_setting" "mssql_server" { name = "diagnostic_setting" target_resource_id = "${azurerm_mssql_server.mssql.id}/databases/master" eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.eh.id eventhub_name = azurerm_eventhub.eh.name log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id log_analytics_destination_type = "Dedicated" enabled_log { category = "SQLSecurityAuditEvents" }
Note, this Terraform resource does have a storage_account_id property, but this doesn't seem to be necessary, as storage is configured via the azurerm_mssql_server_extended_auditing_policy resource.
You would need separate azurerm_monitor_diagnostic_setting resources if you were configuring auditing per database though.
Common problems
The diagnostic setting can't find the master database
Error: creating Monitor Diagnostics Setting "diagnostic_setting" for Resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master": unexpected status 404 (404 Not Found) with error: ResourceNotFound: The Resource 'Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master' under resource group 'rg-terraform-sql-auditing-australiaeast' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
It appears that sometimes the azurerm_mssql_server resource reports it is created, but the master database is not yet ready. The workaround is to add a dependency on another database resource - as by definition the master database must exist before any other user databases can be created.
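For example (azurerm_mssql_database.example here is a hypothetical user database in your configuration):

resource "azurerm_monitor_diagnostic_setting" "mssql_server" {
  name                       = "diagnostic_setting"
  target_resource_id         = "${azurerm_mssql_server.mssql.id}/databases/master"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id

  enabled_log {
    category = "SQLSecurityAuditEvents"
  }

  # master must exist before any user database can be created, so depending
  # on a user database guarantees master is visible to the API
  depends_on = [azurerm_mssql_database.example]
}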
Diagnostic setting fails to update with 409 Conflict
Error: creating Monitor Diagnostics Setting "diagnostic_setting" for Resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master": unexpected status 409 (409 Conflict) with response: {"code":"Conflict","message":"Data sink '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.EventHub/namespaces/evhns-terraform-sql-auditing-australiaeast/authorizationRules/evhar-terraform-sql-auditing-australiaeast' is already used in diagnostic setting 'SQLSecurityAuditEvents_3d229c42-c7e7-4c97-9a99-ec0d0d8b86c1' for category 'SQLSecurityAuditEvents'. Data sinks can't be reused in different settings on the same category for the same resource."}
After a lot of trial and error, I've found the solution is to add a depends_on block in your azurerm_mssql_server_extended_auditing_policy resource, so that the azurerm_monitor_diagnostic_setting is created first. (This feels like a bug in the Terraform AzureRM provider.)

resource "azurerm_mssql_server_extended_auditing_policy" "auditing" {
  server_id         = azurerm_mssql_server.mssql.id
  storage_endpoint  = azurerm_storage_account.storage.primary_blob_endpoint
  retention_in_days = 6

  depends_on = [azurerm_monitor_diagnostic_setting.mssql_server]
}
Switching from Storage access keys to managed identity has no effect
Removing the storage access key properties from azurerm_mssql_server_extended_auditing_policy doesn't currently switch the authentication to managed identity. The problem may relate to the storage_account_subscription_id property. This is an optional property, and while you usually don't need to set it if the storage account is in the same subscription, it appears that the AzureRM provider is setting it on your behalf, such that when you remove the other access key properties it doesn't know to set this property to null.
If you know ahead of time that you'll be transitioning from access keys to managed identity, it might be worth setting storage_account_subscription_id explicitly first. Then later on, when you remove that and the other access key properties, maybe Terraform will do the right thing?
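A sketch of setting it explicitly, assuming the storage account is in the current subscription (data.azurerm_client_config is one way to obtain the subscription id):

data "azurerm_client_config" "current" {}

resource "azurerm_mssql_server_extended_auditing_policy" "auditing" {
  server_id                       = azurerm_mssql_server.mssql.id
  storage_endpoint                = azurerm_storage_account.storage.primary_blob_endpoint
  storage_account_access_key      = azurerm_storage_account.storage.primary_access_key
  # Set explicitly now, so removing it later shows up as a change in the plan
  storage_account_subscription_id = data.azurerm_client_config.current.subscription_id
  retention_in_days               = 6
}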
Solution resource
If you ever hit the Save button on the Azure SQL Auditing page, you may end up with a Solution resource being created for your auditing. This is useful, though it can cause problems if you are trying to destroy your Terraform resources, as it can put locks on the resources and Terraform doesn't know to destroy the solution resource first.
You could try to pre-emptively create the solution resource in Terraform. For example:
resource "azurerm_log_analytics_solution" "example" { solution_name = "SQLAuditing" location = data.azurerm_resource_group.rg.location resource_group_name = data.azurerm_resource_group.rg.name workspace_resource_id = azurerm_log_analytics_workspace.la.id workspace_name = azurerm_log_analytics_workspace.la.name plan { publisher = "Microsoft" product = "SQLAuditing" } depends_on = [azurerm_monitor_diagnostic_setting.mssql_server] }
Though it seems that when you use Terraform to create this resource, it names it SQLAuditing(log-terraform-sql-auditing-australiaeast), whereas if you use the portal, it is named SQLAuditing[log-terraform-sql-auditing-australiaeast].
So instead this looks like a good use for the AzApi provider and the azapi_resource resource:

resource "azapi_resource" "symbolicname" {
  type      = "Microsoft.OperationsManagement/solutions@2015-11-01-preview"
  name      = "SQLAuditing[${azurerm_log_analytics_workspace.la.name}]"
  location  = data.azurerm_resource_group.rg.location
  parent_id = data.azurerm_resource_group.rg.id
  tags      = {}

  body = {
    plan = {
      name          = "SQLAuditing[${azurerm_log_analytics_workspace.la.name}]"
      product       = "SQLAuditing"
      promotionCode = ""
      publisher     = "Microsoft"
    }
    properties = {
      containedResources = [
        "${azurerm_log_analytics_workspace.la.id}/views/SQLSecurityInsights",
        "${azurerm_log_analytics_workspace.la.id}/views/SQLAccessToSensitiveData"
      ]
      referencedResources = []
      workspaceResourceId = azurerm_log_analytics_workspace.la.id
    }
  }
}
Other troubleshooting tips
The Azure CLI can also be useful in checking what the current state of audit configuration is.
Here are two examples showing auditing configured for all three destinations:
az monitor diagnostic-settings list --resource /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master
gives the following:
[ { "eventHubAuthorizationRuleId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.EventHub/namespaces/evhns-terraform-sql-auditing-australiaeast/authorizationRules/evhar-terraform-sql-auditing-australiaeast", "eventHubName": "evh-terraform-sql-auditing-australiaeast", "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-terraform-sql-auditing-australiaeast/providers/microsoft.sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master/providers/microsoft.insights/diagnosticSettings/diagnostic_setting", "logs": [ { "category": "SQLSecurityAuditEvents", "enabled": true, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "SQLInsights", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "AutomaticTuning", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "QueryStoreRuntimeStatistics", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "QueryStoreWaitStatistics", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "Errors", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "DatabaseWaitStatistics", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "Timeouts", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "Blocks", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "Deadlocks", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "DevOpsOperationsAudit", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } } ], "metrics": [ { "category": "Basic", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "InstanceAndAppAdvanced", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } }, { "category": "WorkloadManagement", "enabled": false, "retentionPolicy": { "days": 0, "enabled": false } } ], "name": "diagnostic_setting", "resourceGroup": "rg-terraform-sql-auditing-australiaeast", "type": "Microsoft.Insights/diagnosticSettings", "workspaceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.OperationalInsights/workspaces/log-terraform-sql-auditing-australiaeast" } ]
And the Azure SQL audit policy:

az sql server audit-policy show -g rg-terraform-sql-auditing-australiaeast -n sql-terraform-sql-auditing-australiaeast

gives the following:
{ "auditActionsAndGroups": [ "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP", "FAILED_DATABASE_AUTHENTICATION_GROUP", "BATCH_COMPLETED_GROUP" ], "blobStorageTargetState": "Enabled", "eventHubAuthorizationRuleId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.EventHub/namespaces/evhns-terraform-sql-auditing-australiaeast/authorizationRules/evhar-terraform-sql-auditing-australiaeast", "eventHubName": "evh-terraform-sql-auditing-australiaeast", "eventHubTargetState": "Enabled", "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/auditingSettings/Default", "isAzureMonitorTargetEnabled": true, "isDevopsAuditEnabled": null, "isManagedIdentityInUse": true, "isStorageSecondaryKeyInUse": null, "logAnalyticsTargetState": "Enabled", "logAnalyticsWorkspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.OperationalInsights/workspaces/log-terraform-sql-auditing-australiaeast", "name": "Default", "queueDelayMs": null, "resourceGroup": "rg-terraform-sql-auditing-australiaeast", "retentionDays": 6, "state": "Enabled", "storageAccountAccessKey": null, "storageAccountSubscriptionId": "00000000-0000-0000-0000-000000000000", "storageEndpoint": "https://sttfsqlauditauew0o.blob.core.windows.net/", "type": "Microsoft.Sql/servers/auditingSettings" }
Why is PowerShell not expanding variables for a command?
This had me perplexed. I have a PowerShell script that calls Docker and passes in some build arguments like this:
docker build --secret id=npm,src=$($env:USERPROFILE)/.npmrc --progress=plain -t imagename .
But it was failing with this error:
ERROR: failed to stat $($env:USERPROFILE)/.npmrc: CreateFile $($env:USERPROFILE)/.npmrc: The filename, directory name, or volume label syntax is incorrect.
It should be evaluating the $($env:USERPROFILE) expression to the current user's profile/home directory, but it isn't.
Is there some recent breaking change in how PowerShell evaluates arguments to a native command? I skimmed the release notes but nothing jumped out.
I know you can use the "stop-parsing token" --% to stop PowerShell from interpreting subsequent text on the line as commands or expressions, but I wasn't using that.
In fact the whole about_Parsing documentation is a good read to understand the different modes and how PowerShell passes arguments to native and PowerShell commands. But I still couldn't figure it out.
So what's going on?
Another tool I find useful when trying to diagnose issues with passing arguments is EchoArgs. It too reported the argument was not being evaluated.
But then I noticed something curious on the command line:
That comma is being rendered in my command line in grey, but the rest of the arguments are white (with the exception of the variable expression). Could that be the problem? An unquoted comma is PowerShell's array operator, so it appears the whole token was being parsed as an array rather than as a simple string argument, and the subexpression was never evaluated.
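You can see the comma's effect in isolation with a quick (hypothetical) test - each element of the comma-separated token comes out as a separate item:

Write-Output id=npm,src=test
# Output:
# id=npm
# src=test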
Let's try enclosing the argument in double quotes:
docker build --secret "id=npm,src=$($env:USERPROFILE)/.npmrc" --progress=plain -t imagename .
Notice the colours on the command line - the comma is not rendered differently now, which hints that PowerShell is no longer treating it as something special.
And now our Docker command works!
.NET Azure Functions, Isolated worker model, Serilog to App Insights
There are already some good resources online about configuring .NET Azure Functions with Serilog. For example, Shazni gives a good introduction to Serilog and then shows how to configure it for in-process and isolated Azure Functions, and Simon shows how to use Serilog with Azure Functions in the isolated worker model, but neither covers using App Insights.
It's important to note that the in-process model goes out of support (along with .NET 8) in November 2026. Going forward, only the isolated worker model is supported by future versions of .NET (starting with .NET 9).
The Serilog sink package for logging data to Application Insights is Serilog.Sinks.ApplicationInsights, and it has some useful code samples in the README, but they also don't mention the differences for the isolated worker model.
So my goal here is to demonstrate the following combination:
- A .NET Azure Function
- That is using the isolated worker model
- That logs to Azure App Insights
- Uses Serilog for structured logging
- Uses the Serilog 'bootstrapper' pattern to capture any errors during startup/configuration
Note: There are full working samples for this post in https://github.com/flcdrg/azure-function-dotnet-isolated-logging.
Our starting point is an Azure Function that has Application Insights enabled. We uncommented the two lines in Program.cs and the two lines in the .csproj file from the default Functions project template.
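For reference, the .csproj entries look something like this - these are the two package references from the template, but treat the version numbers as placeholders:

<!-- Uncomment these in the default template's .csproj -->
<PackageReference Include="Microsoft.ApplicationInsights.WorkerService" Version="2.22.0" />
<PackageReference Include="Microsoft.Azure.Functions.Worker.ApplicationInsights" Version="1.4.0" />

And here's Program.cs: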
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;

var host = new HostBuilder()
    .ConfigureFunctionsWebApplication()
    .ConfigureServices(services =>
    {
        services.AddApplicationInsightsTelemetryWorkerService();
        services.ConfigureFunctionsApplicationInsights();
    })
    .Build();

host.Run();
One of the challenges with using the App Insights Serilog sink is that it needs to be configured with an existing TelemetryConfiguration. The old way of doing this was to reference TelemetryConfiguration.Active, however using this property is discouraged in .NET Core (aka modern .NET).
Instead you're encouraged to retrieve a valid TelemetryConfiguration instance from the service provider, like this:

Log.Logger = new LoggerConfiguration()
    .WriteTo.ApplicationInsights(
        serviceProvider.GetRequiredService<TelemetryConfiguration>(),
        TelemetryConverter.Traces)
    .CreateLogger();
Except we have a problem. How can we reference the service provider? We need to move this under the HostBuilder, so we have access to a service provider.
There's a couple of ways to do this. Traditionally we would use UseSerilog to register Serilog, similar to this:

var build = Host.CreateDefaultBuilder(args)
    .UseSerilog((_, services, loggerConfiguration) => loggerConfiguration
        .Enrich.FromLogContext()
        .Enrich.WithProperty("ExtraInfo", "FuncWithSerilog")
        .WriteTo.ApplicationInsights(
            services.GetRequiredService<TelemetryConfiguration>(),
            TelemetryConverter.Traces))
But as of relatively recently, you can now also use AddSerilog - as it turns out, under the covers UseSerilog just calls AddSerilog.
So this is the equivalent:

builder.Services
    .AddSerilog((serviceProvider, loggerConfiguration) =>
    {
        loggerConfiguration
            .Enrich.FromLogContext()
            .Enrich.WithProperty("ExtraInfo", "FuncWithSerilog")
            .WriteTo.ApplicationInsights(
                serviceProvider.GetRequiredService<TelemetryConfiguration>(),
                TelemetryConverter.Traces);
    })
There's also the 'bootstrap logging' pattern that was first outlined here.
This can be useful if you want to log any configuration errors at start up. The only issue here is it will be tricky to log those into App Insights as you won't have the main Serilog configuration (where you wire up App Insights integration) completed yet. You could log to another sink (Console, or Debug if you're running locally).
Here's an example that includes bootstrap logging.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.ApplicationInsights.Extensibility;
using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .WriteTo.Debug()
    .CreateBootstrapLogger();

try
{
    Log.Warning("Starting up.."); // Only logged to console

    var build = Host.CreateDefaultBuilder(args)
        .UseSerilog((_, services, loggerConfiguration) => loggerConfiguration
            .Enrich.FromLogContext()
            .Enrich.WithProperty("ExtraInfo", "FuncWithSerilog")
            .WriteTo.ApplicationInsights(
                services.GetRequiredService<TelemetryConfiguration>(),
                TelemetryConverter.Traces))
        .ConfigureFunctionsWebApplication()
        .ConfigureServices(services =>
        {
            services.AddApplicationInsightsTelemetryWorkerService();
            services.ConfigureFunctionsApplicationInsights();
        })
        .ConfigureLogging(logging =>
        {
            // Remove the default Application Insights logger provider so that Information logs are sent
            // https://learn.microsoft.com/en-us/azure/azure-functions/dotnet-isolated-process-guide?tabs=hostbuilder%2Clinux&WT.mc_id=DOP-MVP-5001655#managing-log-levels
            logging.Services.Configure<LoggerFilterOptions>(options =>
            {
                LoggerFilterRule? defaultRule = options.Rules.FirstOrDefault(rule =>
                    rule.ProviderName == "Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider");

                if (defaultRule is not null)
                {
                    options.Rules.Remove(defaultRule);
                }
            });
        })
        .Build();

    build.Run();

    Log.Warning("After run");
}
catch (Exception ex)
{
    Log.Fatal(ex, "An unhandled exception occurred during bootstrapping");
}
finally
{
    Log.Warning("Exiting application");
    Log.CloseAndFlush();
}
In my experimenting with this, when the Function is closed normally (eg. by being requested to stop in the Azure Portal, or pressing Ctrl-C in the console window when running locally) I was not able to get any logging working in the finally block. I think by then it's pretty much game over and the Function host is keen to wrap things up.
But what if the Function is running in Azure? The Debug or Console sinks won't be much use there. In the ApplicationInsights sink docs, there's a section on how to flush messages manually. The code sample shows creating a new instance of TelemetryClient so that you can use the ApplicationInsights sink in the bootstrap logger.

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .WriteTo.Debug()
    .WriteTo.ApplicationInsights(
        new TelemetryClient(new TelemetryConfiguration()),
        new TraceTelemetryConverter())
    .CreateBootstrapLogger();
If I simulate a configuration error by throwing an exception inside the ConfigureServices call, then you do get data sent to App Insights. eg.

{
  "name": "AppExceptions",
  "time": "2025-02-08T06:32:25.4548247Z",
  "tags": {
    "ai.cloud.roleInstance": "Delphinium",
    "ai.internal.sdkVersion": "dotnetc:2.22.0-997"
  },
  "data": {
    "baseType": "ExceptionData",
    "baseData": {
      "ver": 2,
      "exceptions": [
        {
          "id": 59941933,
          "outerId": 0,
          "typeName": "System.InvalidOperationException",
          "message": "This is a test exception",
          "hasFullStack": true,
          "parsedStack": [
            { "level": 0, "method": "Program+<>c.<<Main>$>b__0_1", "assembly": "FuncWithSerilog, Version=1.2.6.0, Culture=neutral, PublicKeyToken=null", "fileName": "D:\\git\\azure-function-dotnet-isolated-logging\\net9\\FuncWithSerilog\\Program.cs", "line": 36 },
            { "level": 1, "method": "Microsoft.Extensions.Hosting.HostBuilder.InitializeServiceProvider", "assembly": "Microsoft.Extensions.Hosting, Version=9.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60", "line": 0 },
            { "level": 2, "method": "Microsoft.Extensions.Hosting.HostBuilder.Build", "assembly": "Microsoft.Extensions.Hosting, Version=9.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60", "line": 0 },
            { "level": 3, "method": "Program.<Main>$", "assembly": "FuncWithSerilog, Version=1.2.6.0, Culture=neutral, PublicKeyToken=null", "fileName": "D:\\git\\azure-function-dotnet-isolated-logging\\net9\\FuncWithSerilog\\Program.cs", "line": 21 }
          ]
        }
      ],
      "severityLevel": "Critical",
      "properties": {
        "MessageTemplate": "An unhandled exception occurred during bootstrapping"
      }
    }
  }
}
So there you go!
And this is all well and good, but it's important to mention that Microsoft is suggesting that new codebases use OpenTelemetry instead of App Insights! I'll have to check out how that works soon.