<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-AU" xmlns:media="http://search.yahoo.com/mrss/">
  <id>https://david.gardiner.net.au/Terraform.xml</id>
  <title type="html">David Gardiner - Terraform</title>
  <updated>2026-03-06T00:21:40.147Z</updated>
  <subtitle>Blog posts tagged with &apos;Terraform&apos; - A blog of software development, .NET and other interesting things</subtitle>
  <generator uri="https://github.com/flcdrg/astrojs-atom" version="1.0.218">astrojs-atom</generator>
  <author>
    <name>David Gardiner</name>
  </author>
  <link href="https://david.gardiner.net.au/Terraform.xml" rel="self" type="application/atom+xml"/>
  <link href="https://david.gardiner.net.au/tags/Terraform" rel="alternate" type="text/html" hreflang="en-AU"/>
  <entry>
    <id>https://david.gardiner.net.au/2026/02/azure-postgresql-upgrade</id>
    <updated>2026-02-28T13:00:00.000+10:30</updated>
    <title>Upgrading Azure Database for PostgreSQL flexible server</title>
    <link href="https://david.gardiner.net.au/2026/02/azure-postgresql-upgrade" rel="alternate" type="text/html" title="Upgrading Azure Database for PostgreSQL flexible server"/>
    <category term="Azure"/>
    <category term="Azure Pipelines"/>
    <category term="Terraform"/>
    <published>2026-02-28T13:00:00.000+10:30</published>
    <summary type="html">
      <![CDATA[How to upgrade the PostgreSQL server in Azure, with examples using Terraform, and some workarounds
for known issues you may encounter during the upgrade process.]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2026/02/azure-postgresql-upgrade">
      <![CDATA[<p>I was working on a project recently that made use of <a href="https://learn.microsoft.com/azure/postgresql/overview?WT.mc_id=DOP-MVP-5001655">Azure Database for PostgreSQL flexible server</a>. The system had been set up a while ago, and so when I was reviewing the resources in the Azure Portal, I noticed a warning banner for the PostgreSQL server:</p>
<pre><code>Your server version will lose standard Azure support on March 31, 2026. Upgrade now to avoid extended support charges starting April 1, 2026.
</code></pre>
<p><img src="https://david.gardiner.net.au/_astro/postgresql-upgrade-old-version.BsaaRaF4_2tq5ef.webp" alt="Screenshot of Azure Portal showing PostgreSQL server with warning about standard support ending 31st March 2026" /></p>
<p>Terraform was being used for Infrastructure as Code, and it looked similar to this:</p>

<pre><code>resource "azurerm_postgresql_flexible_server" "server" {
  name                              = "psql-postgresql-apps-australiaeast"
  resource_group_name               = data.azurerm_resource_group.rg.name
  location                          = data.azurerm_resource_group.rg.location
  version                           = "11"
  delegated_subnet_id               = azurerm_subnet.example.id
  private_dns_zone_id               = azurerm_private_dns_zone.example.id
  public_network_access_enabled     = false
  administrator_login               = "psqladmin"
  administrator_password_wo         = ephemeral.random_password.postgresql_password.result
  administrator_password_wo_version = 1
  zone                              = "1"

  storage_mb   = 32768
  storage_tier = "P4"

  sku_name   = "B_Standard_B1ms"
  depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}
</code></pre>


<p>As you can see from the code and screenshot above, the PostgreSQL version in use was 11. Doing a bit of research, I found version 11 was <a href="https://www.postgresql.org/support/versioning/">first released back in 2018</a>, and the final minor update, 11.22, was released in 2023.</p>
<p>Azure provides standard support for PostgreSQL versions (documented at <a href="https://learn.microsoft.com/en-us/azure/postgresql/configure-maintain/concepts-version-policy?WT.mc_id=DOP-MVP-5001655">Azure Database for PostgreSQL version policy</a>). There is also the option of paying for <a href="https://learn.microsoft.com/en-us/azure/postgresql/configure-maintain/extended-support?WT.mc_id=DOP-MVP-5001655">extended support</a>, though in the case of v11 that only gets you to November this year, so just a few extra months.</p>
<p>In my case, I wanted to do a test of the upgrade process first, so I restored a backup of the existing server to a new resource. This essentially creates an exact copy of the server at the same version.</p>
<p>While we were using Infrastructure as Code, I decided to use the Azure Portal to test the upgrade. I figured that if there were any problems, they might be easier to understand there than by trying to interpret weird Terraform/AzureRM errors.</p>
<p>Following the <a href="https://learn.microsoft.com/en-us/azure/postgresql/configure-maintain/how-to-perform-major-version-upgrade?WT.mc_id=DOP-MVP-5001655">upgrade documentation</a>, I clicked on the <strong>Upgrade</strong> button in the Portal.</p>
<p><img src="https://david.gardiner.net.au/_astro/postgresql-upgrade-portal1.WLqDAiM5_ZscVNA.webp" alt="Screenshot of Azure Portal upgrade screen" /></p>
<p>This initiates a deployment. Depending on how much data you have and the particular SKU you're running on (eg. how fast the VM you're using is), this may take quite a while. One time it took over an hour, which is significant, as that may be longer than the default Terraform resource timeouts, and also the pipeline job timeouts.</p>
<p><img src="https://david.gardiner.net.au/_astro/postgresql-upgrade-progress.CC7oJ8mA_Z1fvKmn.webp" alt="Screenshot of Azure Portal showing PostgreSQL resource with upgrade in progress" /></p>
<p>If that succeeds, then you should be good to try the real thing with IaC.</p>
<h2>Upgrading with Terraform</h2>
<p>To upgrade a major version with Terraform, you need to make a couple of changes to your <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/postgresql_flexible_server"><code>azurerm_postgresql_flexible_server</code></a> resource:</p>
<ol>
<li>The <code>version</code> property should be updated to the desired version</li>
<li>The <code>create_mode</code> property should be set to <code>Update</code> (if it wasn't specified, then the default is <code>Default</code>)</li>
</ol>
<pre><code>resource "azurerm_postgresql_flexible_server" "server" {
  name                              = "psql-postgresql-apps-australiaeast"
  resource_group_name               = data.azurerm_resource_group.rg.name
  location                          = data.azurerm_resource_group.rg.location
  version                           = "17"
  delegated_subnet_id               = azurerm_subnet.example.id
  private_dns_zone_id               = azurerm_private_dns_zone.example.id
  public_network_access_enabled     = false
  administrator_login               = "psqladmin"
  administrator_password_wo         = ephemeral.random_password.postgresql_password.result
  administrator_password_wo_version = 1
  zone                              = "1"
  create_mode                       = "Update"

  storage_mb   = 32768
  storage_tier = "P4"

  sku_name   = "B_Standard_B1ms"
  depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}
</code></pre>
<p>The weird thing (which I assume is a side-effect of Terraform state) is that even after you've completed the upgrade, you can't change <code>create_mode</code> back to <code>Default</code> - Terraform will throw an error if you try. Instead, just leave it set to <code>Update</code>; as long as the <code>version</code> property doesn't change, Terraform will leave the server at the same version.</p>
<h3>Adjust your timeouts</h3>
<p>I was using Azure Pipelines, so I added a <code>timeoutInMinutes</code> property to the job and set it to 90 minutes. Be aware that there are <a href="https://learn.microsoft.com/azure/devops/pipelines/process/phases?view=azure-devops&amp;tabs=yaml&amp;WT.mc_id=DOP-MVP-5001655#timeouts">different default and maximum timeouts</a> depending on what kind of build agent you use.</p>
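<p>As a rough sketch, the relevant part of the pipeline YAML looked something like this (the job name and steps here are illustrative, not copied from the original pipeline):</p>
<pre><code>jobs:
  - job: terraform_apply
    # Allow plenty of time for a slow major version upgrade
    timeoutInMinutes: 90
    steps:
      - script: terraform apply -auto-approve tfplan
        displayName: Terraform apply
</code></pre>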
<p>Likewise the Terraform <code>azurerm_postgresql_flexible_server</code> resource has <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/postgresql_flexible_server#timeouts">default timeouts</a>. You may want to specify a <code>timeouts</code> block to extend those values if necessary.</p>
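<p>A minimal sketch of extending those timeouts (the durations here are examples only - pick values that suit your data size):</p>
<pre><code>resource "azurerm_postgresql_flexible_server" "server" {
  # ... existing configuration ...

  timeouts {
    create = "2h"
    update = "2h"
  }
}
</code></pre>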
<h2>Gotchas</h2>
<p>I hit some compatibility issues with the PostgreSQL instance I was attempting to upgrade. The Portal displayed the following error(s):</p>
<pre><code>The major version upgrade failed precheck. Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading with password authentication mode enabled is not allowed from source version MajorVersion11. Please enable SCRAM and reset the passwords prior to retrying the upgrade.
</code></pre>
<p>There are two issues here:</p>
<ul>
<li>The <code>pg_failover_slots</code> shared preloaded library is <a href="https://learn.microsoft.com/en-au/answers/questions/5730837/attempt-to-upgrade-azure-database-for-postgresql-f">not supported for upgrading</a></li>
<li>Legacy MD5 passwords are deprecated in newer versions, <a href="https://techcommunity.microsoft.com/blog/azuredbsupport/azure-postgresql-lesson-learned-6-major-upgrade-blocked-by-password-auth-the-one/4469545">and "SCRAM" needs to be enabled</a></li>
</ul>
<p>How do we resolve these with Infrastructure as Code? As we're using Terraform, we need to import those settings so that we can then modify them. We make use of the <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/postgresql_flexible_server_configuration"><code>azurerm_postgresql_flexible_server_configuration</code></a> resource for this.</p>
<p>The <code>value</code> properties should initially match the existing values (ie. make sure that Terraform thinks they are unchanged). A trick to get the existing values is to run Terraform in 'plan' mode, take note of the values it reports, and then copy those into your code.</p>
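<p>Alternatively, the Azure CLI can show the current value of a server parameter directly (the resource group name below is a placeholder):</p>
<pre><code>az postgres flexible-server parameter show \
  --resource-group rg-example \
  --server-name psql-postgresql-apps-australiaeast \
  --name shared_preload_libraries \
  --query value
</code></pre>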
<pre><code>import {
  to = azurerm_postgresql_flexible_server_configuration.accepted_pasword_auth_method
  id = "${azurerm_resource_group.group.id}/providers/Microsoft.DBforPostgreSQL/flexibleServers/psql-postgresql-apps-australiaeast/configurations/azure.accepted_password_auth_method"
}

resource "azurerm_postgresql_flexible_server_configuration" "accepted_pasword_auth_method" {
  name      = "azure.accepted_password_auth_method"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "md5"
}

import {
  to = azurerm_postgresql_flexible_server_configuration.password_encryption
  id = "${azurerm_resource_group.group.id}/providers/Microsoft.DBforPostgreSQL/flexibleServers/psql-postgresql-apps-australiaeast/configurations/password_encryption"
}

resource "azurerm_postgresql_flexible_server_configuration" "password_encryption" {
  name      = "password_encryption"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "md5"
}

import {
  to = azurerm_postgresql_flexible_server_configuration.shared_preload_libraries
  id = "${azurerm_resource_group.group.id}/providers/Microsoft.DBforPostgreSQL/flexibleServers/psql-postgresql-apps-australiaeast/configurations/shared_preload_libraries"
}

resource "azurerm_postgresql_flexible_server_configuration" "shared_preload_libraries" {
  name      = "shared_preload_libraries"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "anon,auto_explain,pg_cron,pg_failover_slots,pg_hint_plan,pg_partman_bgw,pg_prewarm,pg_stat_statements,pgaudit,pglogical,timescaledb,wal2json"
}
</code></pre>
<p>Once you've got those in place then you can make the changes to remove the upgrade block:</p>
<pre><code>resource "azurerm_postgresql_flexible_server_configuration" "accepted_pasword_auth_method" {
  name      = "azure.accepted_password_auth_method"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "md5,SCRAM-SHA-256"
}

resource "azurerm_postgresql_flexible_server_configuration" "password_encryption" {
  name      = "password_encryption"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "SCRAM-SHA-256"
}

resource "azurerm_postgresql_flexible_server_configuration" "shared_preload_libraries" {
  name      = "shared_preload_libraries"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "anon,auto_explain,pg_cron,pg_hint_plan,pg_partman_bgw,pg_prewarm,pg_stat_statements,pgaudit,pglogical,timescaledb,wal2json"
}
</code></pre>
<p>This will allow any existing MD5 passwords to continue to work, but any new passwords will use the more modern SCRAM-SHA-256.</p>
<p>For the <code>shared_preload_libraries</code>, we've removed the offending <code>pg_failover_slots</code> from the list.</p>
<h2>Tips</h2>
<ul>
<li>Temporarily upgrade your server SKU to beefier hardware so the upgrade goes faster. If you're using IaC then make sure you use that to make the change.</li>
<li>Note that if you change the separate storage performance tier (IOPS), <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-performance-tiers?tabs=azure-cli#restrictions">you will need to wait 12 hours before downgrading again</a>.</li>
</ul>
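<p>For the first tip, bumping the SKU is just a temporary edit to the server resource in Terraform (the General Purpose SKU name below is only an example):</p>
<pre><code># Temporarily, while upgrading:
sku_name = "GP_Standard_D4s_v3"

# Then revert afterwards:
# sku_name = "B_Standard_B1ms"
</code></pre>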
<h2>Completion</h2>
<p>If everything goes to plan, you should end up with your PostgreSQL resource upgraded to the version that you specified. Here's my resource upgraded to 17.7. <a href="https://techcommunity.microsoft.com/blog/adforpostgresql/postgresql-18-now-ga-on-azure-postgres-flexible-server/4469802?WT.mc_id=DOP-MVP-5001655">v18 is actually available</a> but I wasn't offered it due to 'regional capacity constraints', which explains why the 'Upgrade' button is now disabled.</p>
<p><img src="https://david.gardiner.net.au/_astro/postgresql-upgrade-complete.BDSml29o_Z3kvTd.webp" alt="Screenshot of Azure Portal showing PostgreSQL upgrade complete" /></p>
<p>I've published source code for a working example of Azure Database for PostgreSQL flexible server with an Azure Container app and using a VNet at <a href="https://github.com/flcdrg/terraform-azure-postgresql-containerapps">https://github.com/flcdrg/terraform-azure-postgresql-containerapps</a></p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/postgresql-logo.BZ7GfDHR.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/postgresql-logo.BZ7GfDHR.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2025/02/azure-sql-auditing</id>
    <updated>2025-02-17T08:00:00.000+10:30</updated>
    <title>Azure SQL and enabling auditing with Terraform</title>
    <link href="https://david.gardiner.net.au/2025/02/azure-sql-auditing" rel="alternate" type="text/html" title="Azure SQL and enabling auditing with Terraform"/>
    <category term="Azure"/>
    <category term="SQL"/>
    <category term="Terraform"/>
    <published>2025-02-17T08:00:00.000+10:30</published>
    <summary type="html">
      <![CDATA[Sometimes when you're using Terraform for your Infrastructure as Code with Azure, it's a bit tricky to match up what you can see in the Azure Portal versus the Terraform resources in the AzureRM provider. Enabling auditing in Azure SQL is a great example.]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2025/02/azure-sql-auditing">
      <![CDATA[<p><img src="https://david.gardiner.net.au/_astro/azure-logo.BF5E_tzp_16YqLd.webp" alt="Azure logo" /></p>
<p>Sometimes when you're using Terraform for your Infrastructure as Code with Azure, it's a bit tricky to match up what you can see in the Azure Portal versus the Terraform resources in the AzureRM provider. Enabling auditing in Azure SQL is a great example.</p>
<p><img src="https://david.gardiner.net.au/_astro/azure-sql-auditing-enabled-only.DmeJYD5Y_Z1UPn40.webp" alt="Screenshot of Azure SQL Auditing portal page, showing auditing enabled, but no data stores selected" /></p>
<p>In the Azure Portal, select your Azure SQL resource, then expand the <strong>Security</strong> menu and select <strong>Auditing</strong>. You can then choose to <strong>Enable Azure SQL Auditing</strong>, and upon doing this you can then choose to send auditing data to any or all of Azure Storage, Log Analytics and/or Event Hub.</p>
<p>It's also worth highlighting that usually you'd <a href="https://learn.microsoft.com/azure/azure-sql/database/auditing-server-level-database-level?view=azuresql&amp;WT.mc_id=DOP-MVP-5001655">enable auditing at the server level</a>, but it is also possible to enable it per database.</p>
<p>The two Terraform resources you may have encountered to manage this are <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_server_extended_auditing_policy"><code>mssql_server_extended_auditing_policy</code></a> and <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_database_extended_auditing_policy"><code>mssql_database_extended_auditing_policy</code></a>.</p>
<p>It's useful to refer back to the <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/auditing-setup?view=azuresql&amp;WT.mc_id=DOP-MVP-5001655">Azure SQL documentation on setting up auditing</a> to understand how to use these.</p>
<p>A couple of points that are worth highlighting:</p>
<ol>
<li><p>If you don't use the <code>audit_actions_and_groups</code> property, the default groups of actions that will be audited are:</p>
<pre><code>BATCH_COMPLETED_GROUP
SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP
FAILED_DATABASE_AUTHENTICATION_GROUP
</code></pre>
</li>
<li><p>If you do define auditing at the server level, the policy applies to all existing and newly created databases on the server. If you define auditing at the database level, the policy will apply in addition to any server level settings. So be careful you don't end up auditing the same thing twice unintentionally!</p>
</li>
</ol>
<p>Sometimes it can also be useful to review the equivalent Bicep/ARM definitions (<a href="https://learn.microsoft.com/en-us/azure/templates/microsoft.sql/servers/extendedauditingsettings?pivots=deployment-language-bicep&amp;WT.mc_id=DOP-MVP-5001655">Microsoft.Sql/servers/extendedAuditingSettings</a>), as they can clarify how to use various properties.</p>
<p>You'll see that both the Terraform and Bicep definitions have properties to configure using a Storage Account, but while you can see Log Analytics and Event Hub in the Portal UI, it's not obvious how those are set up.</p>
<p>The simplest policy you can set is this:</p>
<pre><code>resource "azurerm_mssql_server_extended_auditing_policy" "auditing" {
  server_id = azurerm_mssql_server.mssql.id
}
</code></pre>
<p>This enables the server auditing policy, but the data isn't going anywhere yet!</p>
<h2>Storage account</h2>
<p>When you select an Azure Storage Account for storing auditing data, you will end up with a bunch of <code>.xel</code> files created under a <strong>sqldbauditlogs</strong> blob container.</p>
<p>There are a number of ways to view the <code>.xel</code> files, <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/auditing-analyze-audit-logs?view=azuresql&amp;WT.mc_id=DOP-MVP-5001655#analyze-logs-using-logs-in-an-azure-storage-account">documented here</a>.</p>
<p>Using a storage account for storing auditing has a few variations, depending on how you want to authenticate to the Storage Account.</p>
<h3>Access key</h3>
<pre><code>resource "azurerm_mssql_server_extended_auditing_policy" "auditing" {
  server_id = azurerm_mssql_server.mssql.id

  storage_endpoint                        = azurerm_storage_account.storage.primary_blob_endpoint
  storage_account_access_key              = azurerm_storage_account.storage.primary_access_key
  storage_account_access_key_is_secondary = false
  retention_in_days                       = 6
}
</code></pre>
<p>Normally <code>storage_account_access_key_is_secondary</code> would be set to <code>false</code>, but if you are rotating your storage access keys, then you may choose to switch to the secondary key while you're rotating the primary.</p>
<p><img src="https://david.gardiner.net.au/_astro/azure-sql-auditing-storage-access-keys.oEcgse7T_Z153SPc.webp" alt="Azure Portal showing Azure Storage Account with access key authentication" /></p>
<h3>Managed identity</h3>
<p>You can also use managed identity to authenticate to the storage account. In this case you don't supply the access_key properties, but you will need to add a role assignment granting the <strong>Storage Blob Data Contributor</strong> role to the identity of your Azure SQL resource.</p>
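<p>A minimal sketch of that role assignment in Terraform, assuming the SQL server has a system-assigned managed identity (the resource name <code>sql_blob_contributor</code> is illustrative):</p>
<pre><code>resource "azurerm_role_assignment" "sql_blob_contributor" {
  scope                = azurerm_storage_account.storage.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = azurerm_mssql_server.mssql.identity[0].principal_id
}
</code></pre>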
<pre><code>resource "azurerm_mssql_server_extended_auditing_policy" "auditing" {
  server_id = azurerm_mssql_server.mssql.id

  storage_endpoint  = azurerm_storage_account.storage.primary_blob_endpoint
  retention_in_days = 6
}
</code></pre>
<h2>Log analytics workspaces</h2>
<p>To send data to a Log Analytics Workspace, the <code>log_monitoring_enabled</code> property needs to be set to <code>true</code>. This is the default.</p>
<p>But to tell it <em>which</em> workspace to send the data to, you need to add a <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/monitor_diagnostic_setting"><code>azurerm_monitor_diagnostic_setting</code></a> resource.</p>
<pre><code>resource "azurerm_monitor_diagnostic_setting" "mssql_server_to_log_analytics" {
  name                       = "example-diagnostic-setting"
  target_resource_id         = "${azurerm_mssql_server.mssql.id}/databases/master"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id

  enabled_log {
    category = "SQLSecurityAuditEvents"
  }
}
</code></pre>
<p><img src="https://david.gardiner.net.au/_astro/azure-sql-auditing-log-analytics.DD3OzwDe_J6rzG.webp" alt="Screenshot of Log Analytics destination from the Azure Portal" /></p>
<p>Note that for the server policy, you set the <code>target_resource_id</code> to the master database of the server, not the resource id of the server itself.</p>
<p>Here's what the auditing data looks like when viewed in Log Analytics:</p>
<p><img src="https://david.gardiner.net.au/_astro/azure-sql-auditing-view-log-analytics.yKPixQWS_Kj21d.webp" alt="Screenshot of viewing audit details in Log Analytics" /></p>
<h2>Event Hub</h2>
<p>Likewise, if you want data to go to an Event Hub, you need to use the <code>azurerm_monitor_diagnostic_setting</code> resource.</p>
<pre><code>resource "azurerm_monitor_diagnostic_setting" "mssql_server_to_event_hub" {
  name                           = "ds_mssql_event_hub"
  target_resource_id             = "${azurerm_mssql_server.mssql.id}/databases/master"
  eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.eh.id
  eventhub_name                  = azurerm_eventhub.eh.name

  enabled_log {
    category = "SQLSecurityAuditEvents"
  }
}
</code></pre>
<p><img src="https://david.gardiner.net.au/_astro/azure-sql-auditing-event-hub.BXc3xPm7_1JBNoa.webp" alt="Screenshot showing Event Hub destination in the Azure Portal" /></p>
<h2>Multiple destinations</h2>
<p>As is implied by the Azure Portal, you can have one, two or all three destinations enabled for auditing. But it isn't immediately obvious that you should only have a single <code>azurerm_monitor_diagnostic_setting</code> resource for your server auditing. Don't create separate <code>azurerm_monitor_diagnostic_setting</code> resources for each destination - Azure will not allow it.</p>
<p>For example, if you're going to log to all three, you'd have a single diagnostic resource like this:</p>
<pre><code>resource "azurerm_monitor_diagnostic_setting" "mssql_server" {
  name                           = "diagnostic_setting"
  target_resource_id             = "${azurerm_mssql_server.mssql.id}/databases/master"
  eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.eh.id
  eventhub_name                  = azurerm_eventhub.eh.name

  log_analytics_workspace_id     = azurerm_log_analytics_workspace.la.id
  log_analytics_destination_type = "Dedicated"

  enabled_log {
    category = "SQLSecurityAuditEvents"
  }
}
</code></pre>
<p>Note, this Terraform resource does have a <code>storage_account_id</code> property, but this doesn't seem to be necessary as storage is configured via the <code>azurerm_mssql_server_extended_auditing_policy</code> resource.</p>
<p>You would need separate <code>azurerm_monitor_diagnostic_setting</code> resources if you were configuring auditing per database though.</p>
<h2>Common problems</h2>
<h3>The diagnostic setting can't find the master database</h3>
<pre><code>Error: creating Monitor Diagnostics Setting "diagnostic_setting" for Resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master": unexpected status 404 (404 Not Found) with error: ResourceNotFound: The Resource 'Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master' under resource group 'rg-terraform-sql-auditing-australiaeast' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
</code></pre>
<p>It appears that <a href="https://github.com/hashicorp/terraform-provider-azurerm/issues/22226">sometimes the <code>azurerm_mssql_server</code> resource reports it is created, but the master database is not yet ready</a>. The workaround is to add a dependency on another database resource - as by definition the master database must exist before any other user databases can be created.</p>
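<p>A sketch of that workaround, assuming a user database resource named <code>azurerm_mssql_database.app</code> (the name is illustrative):</p>
<pre><code>resource "azurerm_monitor_diagnostic_setting" "mssql_server" {
  name                       = "diagnostic_setting"
  target_resource_id         = "${azurerm_mssql_server.mssql.id}/databases/master"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id

  enabled_log {
    category = "SQLSecurityAuditEvents"
  }

  # The master database only becomes resolvable once a user database
  # exists, so depend on one explicitly.
  depends_on = [azurerm_mssql_database.app]
}
</code></pre>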
<h3>Diagnostic setting fails to update with 409 Conflict</h3>
<p><a href="https://github.com/hashicorp/terraform-provider-azurerm/issues/21161">This error seems to happen to me when I try and set up Storage, Event Hubs and Log Analytics at the same time</a>.</p>
<pre><code>Error: creating Monitor Diagnostics Setting "diagnostic_setting" for Resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master": unexpected status 409 (409 Conflict) with response: {"code":"Conflict","message":"Data sink '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.EventHub/namespaces/evhns-terraform-sql-auditing-australiaeast/authorizationRules/evhar-terraform-sql-auditing-australiaeast' is already used in diagnostic setting 'SQLSecurityAuditEvents_3d229c42-c7e7-4c97-9a99-ec0d0d8b86c1' for category 'SQLSecurityAuditEvents'. Data sinks can't be reused in different settings on the same category for the same resource."}
</code></pre>
<p>After a lot of trial and error, I've found the solution is to add a <code>depends_on</code> block in your <code>azurerm_mssql_server_extended_auditing_policy</code> resource, so that the <code>azurerm_monitor_diagnostic_setting</code> is created first. (This feels like a bug in the Terraform AzureRM provider.)</p>
<pre><code>resource "azurerm_mssql_server_extended_auditing_policy" "auditing" {
  server_id = azurerm_mssql_server.mssql.id

  storage_endpoint  = azurerm_storage_account.storage.primary_blob_endpoint
  retention_in_days = 6

  depends_on = [azurerm_monitor_diagnostic_setting.mssql_server]
}
</code></pre>
<h3>Switching from Storage access keys to managed identity has no effect</h3>
<p>Removing the storage access key properties from <code>azurerm_mssql_server_extended_auditing_policy</code> doesn't currently switch the authentication to managed identity. The problem may relate to the <code>storage_account_subscription_id</code> property. This is an optional property, and while you usually don't need to set it if the storage account is in the same subscription, it appears that the AzureRM provider sets it on your behalf, so that when you remove the other access key properties it doesn't know to set this property back to null.</p>
<p>If you know ahead of time that you'll be transitioning from access keys to managed identity, it might be worth setting <code>storage_account_subscription_id</code> explicitly first. Then later on, when you remove it along with the other access key properties, Terraform may do the right thing.</p>
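<p>A hedged sketch of setting the subscription id explicitly up front, using the <code>azurerm_client_config</code> data source and assuming the storage account is in the current subscription:</p>
<pre><code>data "azurerm_client_config" "current" {}

resource "azurerm_mssql_server_extended_auditing_policy" "auditing" {
  server_id = azurerm_mssql_server.mssql.id

  storage_endpoint                = azurerm_storage_account.storage.primary_blob_endpoint
  storage_account_access_key      = azurerm_storage_account.storage.primary_access_key
  storage_account_subscription_id = data.azurerm_client_config.current.subscription_id
  retention_in_days               = 6
}
</code></pre>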
<h3>Solution resource</h3>
<p>If you ever hit the <strong>Save</strong> button on the Azure SQL <strong>Auditing</strong> page, you may end up with a Solution resource being created for your auditing. This is useful, though it can cause problems if you are trying to destroy your Terraform resources, as it can put locks on the resources and Terraform doesn't know to destroy the solution resource first.</p>
<p>You could try to pre-emptively create the solution resource in Terraform. For example:</p>
<pre><code>resource "azurerm_log_analytics_solution" "example" {
  solution_name         = "SQLAuditing"
  location              = data.azurerm_resource_group.rg.location
  resource_group_name   = data.azurerm_resource_group.rg.name
  workspace_resource_id = azurerm_log_analytics_workspace.la.id
  workspace_name        = azurerm_log_analytics_workspace.la.name

  plan {
    publisher = "Microsoft"
    product   = "SQLAuditing"
  }

  depends_on = [azurerm_monitor_diagnostic_setting.mssql_server]
}
</code></pre>
<p>Though it seems that when you use Terraform to create this resource, it names it <code>SQLAuditing(log-terraform-sql-auditing-australiaeast)</code>, whereas if you use the portal, it is named <code>SQLAuditing[log-terraform-sql-auditing-australiaeast]</code>.</p>
<p>So instead this looks like a good use for the AzApi provider and the <a href="https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/resource"><code>azapi_resource</code></a> resource.</p>
<pre><code>resource "azapi_resource" "symbolicname" {
  type      = "Microsoft.OperationsManagement/solutions@2015-11-01-preview"
  name      = "SQLAuditing[${azurerm_log_analytics_workspace.la.name}]"
  location  = data.azurerm_resource_group.rg.location
  parent_id = data.azurerm_resource_group.rg.id

  tags = {}
  body = {
    plan = {
      name          = "SQLAuditing[${azurerm_log_analytics_workspace.la.name}]"
      product       = "SQLAuditing"
      promotionCode = ""
      publisher     = "Microsoft"
    }
    properties = {
      containedResources = [
        "${azurerm_log_analytics_workspace.la.id}/views/SQLSecurityInsights",
        "${azurerm_log_analytics_workspace.la.id}/views/SQLAccessToSensitiveData"
      ]
      referencedResources = []
      workspaceResourceId = azurerm_log_analytics_workspace.la.id
    }
  }
}
</code></pre>
<h2>Other troubleshooting tips</h2>
<p>The Azure CLI can also be useful in checking what the current state of audit configuration is.</p>
<p>Here are two examples showing auditing configured for all three destinations:</p>
<pre><code>az monitor diagnostic-settings list --resource /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master
</code></pre>
<p>gives the following:</p>
<pre><code>[
  {
    "eventHubAuthorizationRuleId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.EventHub/namespaces/evhns-terraform-sql-auditing-australiaeast/authorizationRules/evhar-terraform-sql-auditing-australiaeast",
    "eventHubName": "evh-terraform-sql-auditing-australiaeast",
    "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-terraform-sql-auditing-australiaeast/providers/microsoft.sql/servers/sql-terraform-sql-auditing-australiaeast/databases/master/providers/microsoft.insights/diagnosticSettings/diagnostic_setting",
    "logs": [
      {
        "category": "SQLSecurityAuditEvents",
        "enabled": true,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "SQLInsights",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "AutomaticTuning",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "QueryStoreRuntimeStatistics",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "QueryStoreWaitStatistics",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "Errors",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "DatabaseWaitStatistics",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "Timeouts",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "Blocks",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "Deadlocks",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "DevOpsOperationsAudit",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      }
    ],
    "metrics": [
      {
        "category": "Basic",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "InstanceAndAppAdvanced",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      },
      {
        "category": "WorkloadManagement",
        "enabled": false,
        "retentionPolicy": {
          "days": 0,
          "enabled": false
        }
      }
    ],
    "name": "diagnostic_setting",
    "resourceGroup": "rg-terraform-sql-auditing-australiaeast",
    "type": "Microsoft.Insights/diagnosticSettings",
    "workspaceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.OperationalInsights/workspaces/log-terraform-sql-auditing-australiaeast"
  }
]
</code></pre>
<p>And here's the Azure SQL audit policy:</p>
<pre><code>az sql server audit-policy show -g rg-terraform-sql-auditing-australiaeast -n sql-terraform-sql-auditing-australiaeast
</code></pre>
<p>Gives the following:</p>
<pre><code>{
  "auditActionsAndGroups": [
    "SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP",
    "FAILED_DATABASE_AUTHENTICATION_GROUP",
    "BATCH_COMPLETED_GROUP"
  ],
  "blobStorageTargetState": "Enabled",
  "eventHubAuthorizationRuleId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.EventHub/namespaces/evhns-terraform-sql-auditing-australiaeast/authorizationRules/evhar-terraform-sql-auditing-australiaeast",
  "eventHubName": "evh-terraform-sql-auditing-australiaeast",
  "eventHubTargetState": "Enabled",
  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.Sql/servers/sql-terraform-sql-auditing-australiaeast/auditingSettings/Default",
  "isAzureMonitorTargetEnabled": true,
  "isDevopsAuditEnabled": null,
  "isManagedIdentityInUse": true,
  "isStorageSecondaryKeyInUse": null,
  "logAnalyticsTargetState": "Enabled",
  "logAnalyticsWorkspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform-sql-auditing-australiaeast/providers/Microsoft.OperationalInsights/workspaces/log-terraform-sql-auditing-australiaeast",
  "name": "Default",
  "queueDelayMs": null,
  "resourceGroup": "rg-terraform-sql-auditing-australiaeast",
  "retentionDays": 6,
  "state": "Enabled",
  "storageAccountAccessKey": null,
  "storageAccountSubscriptionId": "00000000-0000-0000-0000-000000000000",
  "storageEndpoint": "https://sttfsqlauditauew0o.blob.core.windows.net/",
  "type": "Microsoft.Sql/servers/auditingSettings"
}
</code></pre>
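<p>To quickly check just the three target states in that output, you can slice the JSON from the shell. This is a rough sketch over a hard-coded copy of the relevant fields above; in practice you'd pipe the real command output through <code>az</code>'s <code>--query</code> option or a tool like <code>jq</code> instead:</p>
<pre><code># Trimmed, hard-coded copy of the audit-policy output above - normally
# you'd capture this from 'az sql server audit-policy show'
policy='{"blobStorageTargetState": "Enabled", "eventHubTargetState": "Enabled", "logAnalyticsTargetState": "Enabled"}'

for target in blobStorageTargetState eventHubTargetState logAnalyticsTargetState; do
  state=$(echo "$policy" | grep -o "\"$target\": \"[A-Za-z]*\"" | cut -d'"' -f4)
  echo "$target is $state"
done
</code></pre>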
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-logo.BF5E_tzp.jpg"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-logo.BF5E_tzp.jpg"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/12/migrate-terraform-resources-part2</id>
    <updated>2023-12-26T16:30:00.000+10:30</updated>
    <title>Migrating deprecated Terraform resources (part 2)</title>
    <link href="https://david.gardiner.net.au/2023/12/migrate-terraform-resources-part2" rel="alternate" type="text/html" title="Migrating deprecated Terraform resources (part 2)"/>
    <category term="Azure Pipelines"/>
    <category term="DevOps"/>
    <category term="Terraform"/>
    <published>2023-12-26T16:30:00.000+10:30</published>
    <summary type="html">
      <![CDATA[In my previous post, I showed how to migrate deprecated Terraform resources to a supported resource type. But I hinted that there are some gotchas to be aware of. Here are some issues that I have encountered and how to work around them. The import statement will fail if you try to deploy to a brand new empty environment. It makes sense when you think about it - how can you import a resource that doesn't exist yet? Unfortunately, there's no way to make the import statement conditional. …]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/12/migrate-terraform-resources-part2">
      <![CDATA[<p>In my <a href="/2023/12/migrate-terraform-resources">previous post</a>, I showed how to migrate deprecated Terraform resources to a supported resource type. But I hinted that there are some gotchas to be aware of. Here are some issues that I have encountered and how to work around them.</p>
<h2>Deploying to a new environment</h2>
<p>The <code>import</code> statement will fail if you try to deploy to a brand new empty environment. It makes sense when you think about it - how can you import a resource that doesn't exist yet? Unfortunately, there's no way to make the <code>import</code> statement conditional.</p>
<p>An example of this would be if your application has been in development for a while, and to assist in testing you now want to create a new UAT environment. There are no resources in the UAT environment yet, so the <code>import</code> statement will fail.</p>
<pre><code>Initializing plugins and modules...
data.azurerm_resource_group.group: Refreshing...
data.azurerm_client_config.client: Refreshing...
data.azurerm_client_config.client: Refresh complete after 0s [id=xxxxxxxxxxxxx=]
data.azurerm_resource_group.group: Refresh complete after 0s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast]
azurerm_service_plan.plan: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]
╷
│ Error: Cannot import non-existent remote object
│
│ While attempting to import an existing object to
│ "azurerm_service_plan.plan", the provider detected that no object exists
│ with the given id. Only pre-existing objects can be imported; check that
│ the id is correct and that it is associated with the provider's configured
│ region or endpoint, or use "terraform apply" to create a new remote object
│ for this resource.
╵
Operation failed: failed running terraform plan (exit 1)
</code></pre>
<h2>Deploying to an environment where resources may have been deleted</h2>
<p>This is similar to the previous scenario. It's one I've encountered with non-production environments that we "mothball" when they're not being actively used, to save money. We selectively delete resources, and then re-provision them when the environment is needed again. Again, the <code>import</code> statement will fail if it references a deleted resource.</p>
<h2>A Solution</h2>
<p>The workaround to support these scenarios is to not use the <code>import</code> statement in your HCL code, but instead use the <code>terraform import</code> command. Because we're calling the command from the command line, we can make it conditional, e.g.:</p>
<pre><code>if az appservice plan show --name plan-tfupgrade-australiasoutheast --resource-group $ARM_RESOURCE_GROUP --query id --output tsv &gt; /dev/null 2&gt;&amp;1; then
  terraform import azurerm_service_plan.plan /subscriptions/$ARM_SUBSCRIPTION_ID/resourceGroups/$ARM_RESOURCE_GROUP/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast
else
  echo "Resource plan-tfupgrade-australiasoutheast does not exist in Azure"
fi
</code></pre>
<p>Repeat this pattern for all resources that need to be imported. Also, take care when referencing the Azure resource names. They need to be correct for the detection code to work as expected!</p>
<p>It's not quite as elegant as using the <code>import</code> statement in your HCL code, but it does the job.</p>
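<p>If there are more than a couple of resources, repeating that <code>if</code>/<code>else</code> gets tedious. One variation (a sketch, not from the original pipeline) is to drive the same pattern from a map of Terraform addresses to Azure resource IDs, using <code>az resource show</code> so that a single loop works for any resource type:</p>
<pre><code># Map of Terraform addresses to the Azure resource IDs to import.
# Using 'az resource show' means one loop handles any resource type.
declare -A imports=(
  ["azurerm_service_plan.plan"]="/subscriptions/$ARM_SUBSCRIPTION_ID/resourceGroups/$ARM_RESOURCE_GROUP/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"
)

for tf_address in "${!imports[@]}"; do
  resource_id="${imports[$tf_address]}"
  if az resource show --ids "$resource_id" --query id --output tsv; then
    terraform import "$tf_address" "$resource_id"
  else
    echo "Resource $resource_id does not exist in Azure"
  fi
done
</code></pre>
<p>Add the usual <code>&gt; /dev/null 2&gt;&amp;1</code> redirection to the <code>az</code> call to keep the output quiet, as in the original script.</p>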
<h2>Example output</h2>
<p>Here's an example of the output from the <code>terraform import</code> command:</p>
<pre><code>data.azurerm_resource_group.group: Reading...
data.azurerm_client_config.client: Reading...
data.azurerm_client_config.client: Read complete after 0s [id=xxxxxxxxxxxx=]
data.azurerm_resource_group.group: Read complete after 0s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast]
azurerm_service_plan.plan: Importing from ID "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"...
azurerm_service_plan.plan: Import prepared!
  Prepared azurerm_service_plan for import
azurerm_service_plan.plan: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]

Import successful!
</code></pre>
<p>And the resultant plan shows no additional changes are necessary, which is just what we like to see!</p>
<pre><code>data.azurerm_resource_group.group: Reading...
data.azurerm_client_config.client: Reading...
data.azurerm_client_config.client: Read complete after 0s [id=xxxxxxxxxxxxx=]
data.azurerm_resource_group.group: Read complete after 0s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast]
azurerm_service_plan.plan: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]
azurerm_linux_web_app.appservice: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/sites/appservice-tfupgrade-australiasoutheast]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
</code></pre>
<p>And if we apply this to an empty environment, it also runs successfully!</p>
<pre><code>azurerm_service_plan.plan: Creating...
azurerm_service_plan.plan: Still creating... [10s elapsed]
azurerm_service_plan.plan: Creation complete after 10s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]
azurerm_linux_web_app.appservice: Creating...
azurerm_linux_web_app.appservice: Still creating... [10s elapsed]
azurerm_linux_web_app.appservice: Still creating... [20s elapsed]
azurerm_linux_web_app.appservice: Still creating... [30s elapsed]
azurerm_linux_web_app.appservice: Still creating... [40s elapsed]
azurerm_linux_web_app.appservice: Still creating... [50s elapsed]
azurerm_linux_web_app.appservice: Creation complete after 56s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/sites/appservice-tfupgrade-australiasoutheast]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
</code></pre>
<h2>Post-migration clean up</h2>
<p>Whether you use the <code>import</code> statement or the <code>terraform import</code> command, once you've migrated your Terraform resources in all existing environments, I recommend you remove the migration code from HCL and pipeline YAML files.</p>
<p>Remove the <code>import</code> statements as they have no further purpose, and you only risk encountering the issue mentioned above if you ever want to deploy to a new environment.</p>
<p>There's less risk of leaving the conditional <code>terraform import</code> commands in the pipeline YAML files, but as long as they remain they are slowing the pipeline down. Remove them to keep your pipelines running as fast as possible and to keep your YAML files as simple as possible. If you ever need to do another migration in the future, you'll have the commands in your source control history to refer to.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/12/migrate-terraform-resources</id>
    <updated>2023-12-04T08:00:00.000+10:30</updated>
    <title>Migrating deprecated Terraform resources</title>
    <link href="https://david.gardiner.net.au/2023/12/migrate-terraform-resources" rel="alternate" type="text/html" title="Migrating deprecated Terraform resources"/>
    <category term="Azure Pipelines"/>
    <category term="DevOps"/>
    <category term="Terraform"/>
    <published>2023-12-04T08:00:00.000+10:30</published>
    <summary type="html">
      <![CDATA[One of the challenges with using Terraform for your infrastructure as code is that the providers (that interact with cloud providers like Azure) are updated very frequently, and especially with major version releases this includes deprecating specific resource types. For example, when the Azure provider (AzureRM) version 3.0 was released, it deprecated many resource types and data sources. Some of these still exist in version 3, but are deprecated, will not receive any updates, and will be removed in version 4. Others have already been removed entirely. …]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/12/migrate-terraform-resources">
      <![CDATA[<p>One of the challenges with using Terraform for your infrastructure as code is that the providers (that interact with cloud providers like Azure) are updated very frequently, and especially with major version releases this includes deprecating specific resource types. For example, when the Azure provider (AzureRM) version 3.0 was released, it deprecated many resource types and data sources. Some of these still exist in version 3, but are <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/3.0-upgrade-guide#removal-of-deprecated-fields-data-sources-and-resources">deprecated, will not receive any updates, and will be removed in version 4</a>. Others have already been removed entirely.</p>
<p><img src="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7_Z21UAf8.webp" alt="Terraform logo" /></p>
<p><em>I've created an example repo that demonstrates the migration process outlined here. Find it at <a href="https://github.com/flcdrg/terraform-azure-upgrade-resources">https://github.com/flcdrg/terraform-azure-upgrade-resources</a>.</em></p>
<p>While this post uses Azure and Azure Pipelines, the same principles should apply for other cloud providers and CI/CD systems.</p>
<p>To set the scene, here's some Terraform code that creates an Azure App Service Plan and an App Service (<a href="https://github.com/flcdrg/terraform-azure-upgrade-resources/blob/main/v2/app-service.tf">src</a>). The resource types are from v2.x AzureRM provider. Bear in mind, the last release of v2 was <a href="https://github.com/hashicorp/terraform-provider-azurerm/releases/tag/v2.99.0">v2.99.0</a> in March 2022.</p>
<pre><code># https://registry.terraform.io/providers/hashicorp/azurerm/2.99.0/docs/resources/app_service_plan
resource "azurerm_app_service_plan" "plan" {
  name                = "plan-tfupgrade-australiasoutheast"
  resource_group_name = data.azurerm_resource_group.group.name
  location            = data.azurerm_resource_group.group.location
  kind                = "Linux"
  reserved            = true
  sku {
    tier = "Basic"
    size = "B1"
  }
}

# https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/app_service
resource "azurerm_app_service" "appservice" {
  app_service_plan_id = azurerm_app_service_plan.plan.id
  name                = "appservice-tfupgrade-australiasoutheast"
  location            = data.azurerm_resource_group.group.location
  resource_group_name = data.azurerm_resource_group.group.name
  https_only          = true

  app_settings = {
    "TEST" = "TEST"
  }

  site_config {
    always_on                 = true
    ftps_state                = "Disabled"
    http2_enabled             = true
    linux_fx_version          = "DOTNETCORE|6.0"
    min_tls_version           = "1.2"
    use_32_bit_worker_process = false
  }
  identity {
    type = "SystemAssigned"
  }
}
</code></pre>
<p>The documentation for AzureRM v3.x shows that these resource types are deprecated and will be completely removed in v4.x. In addition, as these resource types are not being updated, they don't support the latest features of Azure App Services, such as the new .NET 8 runtime.</p>
<p>So how can we switch to the <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/service_plan">azurerm_service_plan</a> and <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_web_app">azurerm_linux_web_app</a> resource types?</p>
<p>You might think it's just a matter of changing the resource types and updating a few properties. But if you try that, you'll discover that Terraform will try to delete the existing resources and create new ones. This is because the resource types are different, and Terraform doesn't know they are actually the same thing, as the state representation of those resources is different.</p>
<p>Instead, we need to let Terraform know that the Azure resources that have already been created map to the new Terraform resource types we've defined in our configuration. In addition, we want to do this in a testable way using a pull request to verify that our changes look correct before we merge them into the main branch.</p>
<p>The approach we'll take is to make use of the relatively new <a href="https://developer.hashicorp.com/terraform/language/import"><code>import</code> block</a> language feature. (In a future blog post I'll cover when you might consider using the <code>terraform import</code> CLI command instead).</p>
<p>By using the <code>import</code> block, we can tell Terraform that the existing resources in Azure should be mapped to the new resource types we've defined in Terraform configuration. This means that Terraform will not try to delete the existing resources and create new ones. Instead, it will update the existing resources to match the Terraform configuration.</p>
<p>In the following example, we're indicating that the Azure resource with the resource ID <code>/subscriptions/.../resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast</code> should be mapped to the <code>azurerm_service_plan</code> resource type. Note the use of the data block reference to insert the subscription ID, rather than hard-coding it.</p>
<pre><code>import {
  id = "/subscriptions/${data.azurerm_client_config.client.subscription_id}/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"
  to = azurerm_service_plan.plan
}

resource "azurerm_service_plan" "plan" {
  name                = "plan-tfupgrade-australiasoutheast"
  resource_group_name = data.azurerm_resource_group.group.name
  location            = data.azurerm_resource_group.group.location
  sku_name            = "B1"
  os_type             = "Linux"
}
</code></pre>
<p>Often when you're adding the new resource, the property names and 'shape' will change. Sometimes it's pretty easy to figure out the equivalent, but sometimes you might need some help. One option you can utilise is <a href="https://developer.hashicorp.com/terraform/language/import/generating-configuration">generating the configuration</a>.</p>
<p>In this case, you add the <code>import</code> block, but don't add the resource block. If you then run <code>terraform plan -generate-config-out=generated_resources.tf</code>, Terraform will create a new file <code>generated_resources.tf</code> containing the generated resources. You can then copy/paste those over into your regular .tf files. You'll almost certainly want to edit them to remove redundant settings and replace hard-coded values with variable references where applicable. If you're doing this as part of a pipeline, publish the generated file as a build artifact so you can download it and incorporate the changes. You could make this an optional part of the pipeline, enabled by setting a pipeline parameter to true.</p>
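<p>As a sketch of that optional pipeline step (this assumes a boolean <code>generateConfig</code> parameter declared at the top of the pipeline; <code>PublishPipelineArtifact@1</code> is the standard publish task):</p>
<pre><code>- ${{ if eq(parameters.generateConfig, true) }}:
    - script: terraform plan -generate-config-out=generated_resources.tf -no-color -input=false
      displayName: "Terraform: generate configuration"
      workingDirectory: $(TerraformSourceDirectory)

    - task: PublishPipelineArtifact@1
      displayName: "Publish generated configuration"
      inputs:
        targetPath: $(TerraformSourceDirectory)/generated_resources.tf
        artifact: generated-terraform
</code></pre>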
<p>There's still one problem to solve though. While we've mapped the new resource types to the existing resources, Terraform state still knows about the old resource types, and will try to delete them now that they are no longer defined in the Terraform configuration. To solve this, we can use the <a href="https://www.terraform.io/cli/commands/state/rm"><code>terraform state rm</code></a> command to remove the old resources from state. If they're not in state, then Terraform doesn't know about them and won't try to delete them.</p>
<p>The following script will remove the old resources from state if they exist. Note that the <code>terraform state rm</code> command will fail if the resource doesn't exist in state, so we need to check for the existence of the resource first.</p>
<pre><code># Remove state of old resources from Terraform
mapfile -t RESOURCES &lt; &lt;( terraform state list )

if [[ " ${RESOURCES[@]} " =~ "azurerm_app_service_plan.plan" ]]; then
  terraform state rm azurerm_app_service_plan.plan
fi

if [[ " ${RESOURCES[@]} " =~ "azurerm_app_service.appservice" ]]; then
  terraform state rm azurerm_app_service.appservice
fi
</code></pre>
<p>You will need to add an entry in this script for each Terraform resource type that you are removing.</p>
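<p>If you have many resources, the repeated <code>if</code> blocks can be collapsed into a loop over a list of old addresses. Here's a rough sketch with the <code>terraform state list</code> output stubbed so you can dry-run the logic; it also compares addresses exactly, which avoids the chance of the <code>=~</code> substring match in the original accidentally matching a partial name:</p>
<pre><code># Old Terraform addresses to purge from state - add one entry per deprecated resource
old_addresses=(
  "azurerm_app_service_plan.plan"
  "azurerm_app_service.appservice"
)

# Stubbed here for a dry run; in the real script populate RESOURCES
# from 'terraform state list' with mapfile, as in the original version
RESOURCES=("azurerm_app_service_plan.plan" "azurerm_linux_web_app.appservice")

removed=()
for addr in "${old_addresses[@]}"; do
  for r in "${RESOURCES[@]}"; do
    if [ "$r" = "$addr" ]; then
      # terraform state rm "$addr"   # uncomment to remove for real
      removed+=("$addr")
    fi
  done
done

printf 'would remove: %s\n' "${removed[@]}"
</code></pre>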
<h2>Testing</h2>
<p>Ok, so we have a strategy for upgrading our Terraform resources. But how do we test it? We don't want to just merge the changes into the main branch and hope for the best. We want to test them first, in isolation from other changes that might be happening in the main branch (or other branches). To test our changes, we need to update the Terraform state. But if we update the state used by everyone else, we won't be popular when their builds start failing because Terraform is trying to recreate resources that we've just deleted from state - even though those resources still exist in Azure!</p>
<p>What we want is a local copy of the Terraform state that we can try out our changes in without affecting anyone else. One way to do this is to copy the remote state to a local file, then reinitialise Terraform to use the 'local' backend. Obviously we won't do a real deployment using this, but it is perfect for running <code>terraform plan</code> against.</p>
<p>Here's an example script that will copy the remote state to a local file, then reinitialise Terraform to use the local backend. It assumes that your backend configuration is defined separately in a <a href="https://github.com/flcdrg/terraform-azure-upgrade-resources/blob/main/v3/backend.tf"><code>backend.tf</code> file</a>. Normally this would be pointing to a remote backend (e.g. Terraform Cloud or an Azure Storage account), within the pipeline run we replace this file with configuration to use a local backend.</p>
<pre><code>terraform state pull &gt; $(Build.ArtifactStagingDirectory)/pull.tfstate

cat &gt; backend.tf &lt;&lt;EOF
terraform {
  backend "local" {
    path = "$(Build.ArtifactStagingDirectory)/pull.tfstate"
  }
}
EOF

# Reset Terraform to use local backend
terraform init -reconfigure -no-color -input=false
</code></pre>
<p>We now should have all the pieces in place to test our changes on PR build, and then once we're happy with the plan, merge the changes and run the migration for real. If you have multiple environments (dev/test/prod) then you can roll this out to each environment as part of the normal release process.</p>
<p>For Azure Pipelines, we make use of conditional expressions, so that on PR builds we test the migration using local state, but on the main branch we modify the remote state and actually apply the changes.</p>
<p>Here's the Azure Pipeline in full (<a href="https://github.com/flcdrg/terraform-azure-upgrade-resources/blob/main/upgrade.yml">src</a>):</p>
<pre><code>trigger: none

pr:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

variables:
  - group: Terraform-Token

jobs:
  - job: build
    displayName: "Test Terraform Upgrade"

    variables:
      TerraformSourceDirectory: $(System.DefaultWorkingDirectory)/v3

    steps:
      - script: echo "##vso[task.setvariable variable=TF_TOKEN_app_terraform_io]$(TF_TOKEN)"
        displayName: "Terraform Token"

      - task: TerraformInstaller@2
        displayName: "Terraform: Installer"
        inputs:
          terraformVersion: "latest"

      - task: TerraformCLI@2
        displayName: "Terraform: init"
        inputs:
          command: init
          workingDirectory: "$(TerraformSourceDirectory)"
          backendType: selfConfigured
          commandOptions: -no-color -input=false
          allowTelemetryCollection: false

      - ${{ if ne(variables['Build.SourceBranch'], 'refs/heads/main') }}:
          # Copy state from Terraform Cloud to local, so we can modify it without affecting the remote state
          - script: |
              terraform state pull &gt; $(Build.ArtifactStagingDirectory)/pull.tfstate

              # Write multiple lines of text to local file using bash
              cat &gt; backend.tf &lt;&lt;EOF
              terraform {
                backend "local" {
                  path = "$(Build.ArtifactStagingDirectory)/pull.tfstate"
                }
              }
              EOF

              # Reset Terraform to use local backend
              terraform init -reconfigure -no-color -input=false
            displayName: "Script: Use Terraform Local Backend"
            workingDirectory: $(TerraformSourceDirectory)

      - script: |
          # Remove state of old resources from Terraform
          mapfile -t RESOURCES &lt; &lt;( terraform state list )

          if [[ " ${RESOURCES[@]} " =~ "azurerm_app_service_plan.plan" ]]; then
            terraform state rm azurerm_app_service_plan.plan
          fi

          if [[ " ${RESOURCES[@]} " =~ "azurerm_app_service.appservice" ]]; then
            terraform state rm azurerm_app_service.appservice
          fi
        displayName: "Script: Remove old resources from Terraform State"
        workingDirectory: $(TerraformSourceDirectory)

      - task: TerraformCLI@2
        displayName: "Terraform: validate"
        inputs:
          command: validate
          workingDirectory: "$(TerraformSourceDirectory)"
          commandOptions: -no-color

      - ${{ if ne(variables['Build.SourceBranch'], 'refs/heads/main') }}:
          - task: TerraformCLI@2
            displayName: "Terraform: plan"
            inputs:
              command: plan
              workingDirectory: "$(TerraformSourceDirectory)"
              commandOptions: -no-color -input=false -detailed-exitcode
              environmentServiceName: Azure MSDN - rg-tfupgrade-australiasoutheast
              publishPlanResults: Plan
              allowTelemetryCollection: false

      - ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
          - task: TerraformCLI@2
            displayName: "Terraform: apply"
            inputs:
              command: apply
              workingDirectory: "$(TerraformSourceDirectory)"
              commandOptions: -no-color -input=false -auto-approve
              allowTelemetryCollection: false
</code></pre>
<h2>Using it in practice</h2>
<p>Ideally when you migrate the resource types, there will be no changes to the properties (and Terraform will report that no changes need to be made). Often the new resource type provides additional properties that you can take advantage of; whether you set these initially or in a subsequent PR is up to you.</p>
<p>Here's an example output from terraform plan:</p>
<pre><code>Terraform v1.6.4
on linux_amd64
Initializing plugins and modules...
data.azurerm_resource_group.group: Refreshing...
data.azurerm_client_config.client: Refreshing...
data.azurerm_client_config.client: Refresh complete after 0s [id=xxxxxxxxxxx=]
data.azurerm_resource_group.group: Refresh complete after 0s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast]
azurerm_service_plan.plan: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]
azurerm_linux_web_app.appservice: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/sites/appservice-tfupgrade-australiasoutheast]

Terraform will perform the following actions:

  # azurerm_linux_web_app.appservice will be imported
    resource "azurerm_linux_web_app" "appservice" {
        app_settings                                   = {
            "TEST" = "TEST"
        }
        client_affinity_enabled                        = false
        client_certificate_enabled                     = false
        client_certificate_mode                        = "Required"
        custom_domain_verification_id                  = (sensitive value)
        default_hostname                               = "appservice-tfupgrade-australiasoutheast.azurewebsites.net"
        enabled                                        = true
        ftp_publish_basic_authentication_enabled       = true
        https_only                                     = true
        id                                             = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/sites/appservice-tfupgrade-australiasoutheast"
        key_vault_reference_identity_id                = "SystemAssigned"
        kind                                           = "app,linux"
        location                                       = "australiasoutheast"
        name                                           = "appservice-tfupgrade-australiasoutheast"
        outbound_ip_address_list                       = [
            "52.189.223.107",
            "13.77.42.25",
            "13.77.46.217",
            "52.189.221.141",
            "13.77.50.99",
        ]
        outbound_ip_addresses                          = "52.189.223.107,13.77.42.25,13.77.46.217,52.189.221.141,13.77.50.99"
        possible_outbound_ip_address_list              = [
            "52.189.223.107",
            "13.77.42.25",
            "13.77.46.217",
            "52.189.221.141",
            "52.243.85.201",
            "52.243.85.94",
            "52.189.234.152",
            "13.77.56.61",
            "52.189.214.112",
            "20.11.210.198",
            "20.211.233.197",
            "20.211.238.191",
            "20.11.210.187",
            "20.11.211.1",
            "20.11.211.80",
            "4.198.70.38",
            "20.92.41.250",
            "4.198.68.27",
            "4.198.68.42",
            "20.92.47.59",
            "20.92.42.78",
            "13.77.50.99",
        ]
        possible_outbound_ip_addresses                 = "52.189.223.107,13.77.42.25,13.77.46.217,52.189.221.141,52.243.85.201,52.243.85.94,52.189.234.152,13.77.56.61,52.189.214.112,20.11.210.198,20.211.233.197,20.211.238.191,20.11.210.187,20.11.211.1,20.11.211.80,4.198.70.38,20.92.41.250,4.198.68.27,4.198.68.42,20.92.47.59,20.92.42.78,13.77.50.99"
        public_network_access_enabled                  = true
        resource_group_name                            = "rg-tfupgrade-australiasoutheast"
        service_plan_id                                = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"
        site_credential                                = (sensitive value)
        tags                                           = {}
        webdeploy_publish_basic_authentication_enabled = true

        identity {
            identity_ids = []
            principal_id = "a71e1fd5-e61b-4591-a439-98bad90cc837"
            tenant_id    = "59b0934d-4f35-4bff-a2b7-a451fe5f8bd6"
            type         = "SystemAssigned"
        }

        site_config {
            always_on                               = true
            auto_heal_enabled                       = false
            container_registry_use_managed_identity = false
            default_documents                       = []
            detailed_error_logging_enabled          = false
            ftps_state                              = "Disabled"
            health_check_eviction_time_in_min       = 0
            http2_enabled                           = true
            linux_fx_version                        = "DOTNETCORE|6.0"
            load_balancing_mode                     = "LeastRequests"
            local_mysql_enabled                     = false
            managed_pipeline_mode                   = "Integrated"
            minimum_tls_version                     = "1.2"
            remote_debugging_enabled                = false
            remote_debugging_version                = "VS2019"
            scm_minimum_tls_version                 = "1.2"
            scm_type                                = "VSTSRM"
            scm_use_main_ip_restriction             = false
            use_32_bit_worker                       = false
            vnet_route_all_enabled                  = false
            websockets_enabled                      = false
            worker_count                            = 1

            application_stack {
                docker_registry_password = (sensitive value)
                dotnet_version           = "6.0"
            }
        }
    }

  # azurerm_service_plan.plan will be imported
    resource "azurerm_service_plan" "plan" {
        id                           = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"
        kind                         = "linux"
        location                     = "australiasoutheast"
        maximum_elastic_worker_count = 1
        name                         = "plan-tfupgrade-australiasoutheast"
        os_type                      = "Linux"
        per_site_scaling_enabled     = false
        reserved                     = true
        resource_group_name          = "rg-tfupgrade-australiasoutheast"
        sku_name                     = "B1"
        tags                         = {}
        worker_count                 = 1
        zone_balancing_enabled       = false
    }

Plan: 2 to import, 0 to add, 0 to change, 0 to destroy.
</code></pre>
<p>In the next post, I'll cover a few things to watch out for, and some post-migration clean-up steps.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/11/terraform-init-error</id>
    <updated>2023-11-27T09:00:00.000+10:30</updated>
    <title>Terraform command &apos;init&apos; failed with exit code &apos;1&apos;</title>
    <link href="https://david.gardiner.net.au/2023/11/terraform-init-error" rel="alternate" type="text/html" title="Terraform command &apos;init&apos; failed with exit code &apos;1&apos;"/>
    <category term="Azure Pipelines"/>
    <category term="DevOps"/>
    <category term="Terraform"/>
    <published>2023-11-27T09:00:00.000+10:30</published>
    <summary type="html">
<![CDATA[I'm setting up a new GitHub repo to demonstrate using Terraform with Azure Pipelines via the CLI and I hit a weird error right at the start. I was using Jason Johnson's Azure Pipelines Terraform Tasks extension, and it kept failing with an error.]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/11/terraform-init-error">
      <![CDATA[<p>I'm setting up a new GitHub repo to demonstrate using Terraform with Azure Pipelines via the CLI and I hit a weird error right at the start. I was using <a href="https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform">Jason Johnson's Azure Pipelines Terraform Tasks</a> extension like this:</p>
<pre><code>- task: TerraformCLI@2
  inputs:
    command: init
    workingDirectory: "$(System.DefaultWorkingDirectory)/v2"
    backendType: selfConfigured
    commandOptions: --no-color --input=false
    allowTelemetryCollection: false
</code></pre>
<p>and it kept failing with the error:</p>
<pre><code>/opt/hostedtoolcache/terraform/1.6.4/x64/terraform version
Terraform v1.6.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.99.0
+ provider registry.terraform.io/hashicorp/random v3.5.1
/opt/hostedtoolcache/terraform/1.6.4/x64/terraform init --input=false --no-color
Usage: terraform [global options] init [options]
...
Terraform command 'init' failed with exit code '1'.
</code></pre>
<p>I tried all sorts of things. It looked identical to other pipelines I had working (and ones I'd seen online). Was there a weird invisible character being passed on the command line? Out of desperation I copied the <code>commandOptions</code> line from another pipeline (which looked identical except that the arguments were in a different order).</p>
<p>But when I looked at the diff, not only were the arguments in a different order, the dashes were different too! Terraform CLI arguments use a single dash, not a double dash. So the correct line is:</p>
<pre><code>commandOptions: -no-color -input=false
</code></pre>
<p>In hindsight, I realised that this is the first pipeline I've written on GitHub using the Terraform CLI (i.e. not in a work context), and so I wrote it by hand rather than copying from an existing (working) pipeline. A pity Terraform doesn't support double dashes, but there you go.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/07/passed-terraform-associate</id>
    <updated>2023-07-13T14:00:00.000+09:30</updated>
    <title>HashiCorp Certified: Terraform Associate (003)</title>
    <link href="https://david.gardiner.net.au/2023/07/passed-terraform-associate" rel="alternate" type="text/html" title="HashiCorp Certified: Terraform Associate (003)"/>
    <category term="Terraform"/>
    <category term="Training and Certification"/>
    <published>2023-07-13T14:00:00.000+09:30</published>
    <summary type="html">
<![CDATA[I've been using Terraform quite a bit recently and noticed that HashiCorp have a Terraform Associate certification. Reviewing the exam objectives, it seemed to cover most of the things I've already been doing, so I decided to give it a go.]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/07/passed-terraform-associate">
<![CDATA[<p>I've been using <a href="https://developer.hashicorp.com/terraform/intro">Terraform</a> quite a bit recently and noticed that HashiCorp have a <a href="https://www.hashicorp.com/certification/terraform-associate/">Terraform Associate certification</a>. Reviewing the exam objectives, it seemed to cover most of the things I've already been doing, so I decided to give it a go.</p>
<p><img src="https://david.gardiner.net.au/_astro/hashicorp-certified-terraform-associate-003.DTuneGtE_ZIUfNw.webp" alt="Terraform certified associate badge" /></p>
<p>The exam is run by <a href="https://home.psiexams.com">PSI</a>, so it was a slightly different experience to those I've taken for Microsoft certifications. The sign-in process was a bit simpler (e.g. scanning your ID with your webcam rather than having to upload photos from your phone). The exam software required that I turn off a few background processes (OneDrive, a Zoom background process and the Virtual Machine Management Service). Once I'd done that, the software was happy to proceed, and when the proctor was satisfied with my room and desk setup I was able to start the exam.</p>
<p>I finished the exam in good time and I was pleased to learn that I passed!</p>
<p>The email summary of my results included a breakdown of how I went in each of the areas covered.</p>
<p>Overall Score: 78%</p>
<p>Breakdown by content area:</p>
<ol>
<li>Understand infrastructure as code (IaC) concepts: 100%</li>
<li>Understand the purpose of Terraform (vs other IaC): 100%</li>
<li>Understand Terraform basics: 87%</li>
<li>Use Terraform outside the core workflow: 100%</li>
<li>Interact with Terraform modules: 40%</li>
<li>Use the core Terraform workflow: 88%</li>
<li>Implement and maintain state: 72%</li>
<li>Read, generate, and modify configuration: 63%</li>
<li>Understand Terraform Cloud capabilities: 100%</li>
</ol>
<p>So it looks like modules are an area I'm not as strong on! That's fair, as I haven't made a lot of use of them so far.</p>
<p>If you're using Terraform, then I'd encourage you to go ahead and take the exam. Have a look at the <a href="https://developer.hashicorp.com/terraform/tutorials/certification-003/associate-study-003">study guide</a>, <a href="https://developer.hashicorp.com/terraform/tutorials/certification-003/associate-questions">sample questions</a> and <a href="https://developer.hashicorp.com/terraform/tutorials/certification-003/associate-review-003">exam review</a> to ensure you're comfortable with all the topics being covered and how questions will be asked. Then register for the exam and give it a go!</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/hashicorp-certified-terraform-associate-003.DTuneGtE.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/hashicorp-certified-terraform-associate-003.DTuneGtE.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/01/azure-vm-terraform</id>
    <updated>2023-01-19T20:30:00.000+10:30</updated>
    <title>Provision an Azure Virtual Machine with Terraform Cloud</title>
    <link href="https://david.gardiner.net.au/2023/01/azure-vm-terraform" rel="alternate" type="text/html" title="Provision an Azure Virtual Machine with Terraform Cloud"/>
    <category term="Azure"/>
    <category term="Terraform"/>
    <published>2023-01-19T20:30:00.000+10:30</published>
    <summary type="html">
<![CDATA[Sometimes I need to spin up a virtual machine to quickly test something out on a 'vanilla' machine, for example, to test out a Chocolatey package that I maintain. Most of the time I log in to the Azure Portal and click around to create a VM. The option to create an Azure Virtual Machine with a preset configuration does make it a bit easier, but it's still a lot of clicking. Maybe I should try automating this!]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/01/azure-vm-terraform">
      <![CDATA[<p><img src="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7_Z21UAf8.webp" alt="Terraform logo" /></p>
<p>Sometimes I need to spin up a virtual machine to quickly test something out on a 'vanilla' machine, for example, to test out a <a href="https://github.com/flcdrg/au-packages">Chocolatey package that I maintain</a>.</p>
<p>Most of the time I log in to the Azure Portal and click around to create a VM. The option to create an Azure Virtual Machine with a preset configuration does make it a bit easier, but it's still a lot of clicking. Maybe I should try automating this!</p>
<p>There are a few choices for automating, but seeing as I've been using <a href="https://developer.hashicorp.com/terraform">Terraform</a> lately I thought I'd try that out, together with Terraform Cloud. As I'll be putting the Terraform files in a public repository on GitHub, I can use the free tier for Terraform Cloud.</p>
<p>You can find the source for the Terraform files at <a href="https://github.com/flcdrg/terraform-azure-vm/">https://github.com/flcdrg/terraform-azure-vm/</a>.</p>
<p>You'll also need to have both the Azure CLI and Terraform CLI installed. You can do this easily via Chocolatey:</p>
<pre><code>choco install terraform
choco install azure-cli
</code></pre>
<h2>Setting up Terraform Cloud Workspace with GitHub</h2>
<ol>
<li>Log in (or sign up) to Terraform Cloud at <a href="https://app.terraform.io">https://app.terraform.io</a>, select (or create) your organisation, then go to <strong>Workspaces</strong> and click on <strong>Create a workspace</strong>
 <img src="https://david.gardiner.net.au/_astro/terraform-github-01.BBEqQkYu_1HA8LO.webp" alt="Terraform Cloud - Workspaces tab" /></li>
<li>Select how you'd like to trigger a workflow. To keep things simple, I chose <strong>Version control workflow</strong>
 <img src="https://david.gardiner.net.au/_astro/terraform-github-02.D9LbvB3v_1A1yG0.webp" alt="Terraform Cloud - Create a new workspace" /></li>
<li>Select the version control provider - <strong>GitHub.com</strong>.
 <img src="https://david.gardiner.net.au/_astro/terraform-github-03.DTvyLPLj_1RoSSu.webp" alt="Terraform Cloud - Connect to a version control provider" /></li>
<li>You will now need to authenticate with GitHub.
 <img src="https://david.gardiner.net.au/_astro/terraform-github-04.tY80MKHc_2dKmh0.webp" alt="GitHub authorisation prompt" /></li>
<li>Watch out if you get a notification about a pop-up blocker.
 <img src="https://david.gardiner.net.au/_astro/terraform-github-05.B2UKQtOe_ZtXGe4.webp" alt="Pop-up blocked notification" />
If you do, enable pop-ups for this domain.
 <img src="https://david.gardiner.net.au/_astro/terraform-github-06.ODlbymuB_Z1pTTJT.webp" alt="Allow pop-ups for app.terraform.io" /></li>
<li>Choose which GitHub account or organisation to use:
 <img src="https://david.gardiner.net.au/_astro/terraform-github-07.Cz-_jMNJ_HiEmc.webp" alt="Choose a GitHub account or organisation" /></li>
<li>Select which repositories should be linked to Terraform Cloud.
 <img src="https://david.gardiner.net.au/_astro/terraform-github-08.BN7v6H26_CIv3J.webp" alt="Select repositories to link to Terraform Cloud" /></li>
<li>If you use multi-factor authentication then you'll need to approve the access.
 <img src="https://david.gardiner.net.au/_astro/terraform-github-09.DZhVLwM7_v2t11.webp" alt="GitHub multi-factor authentication approval" /></li>
<li>Now that your GitHub repositories are connected, you need to select the repository that Terraform Cloud will use for this workspace.
 <img src="https://david.gardiner.net.au/_astro/terraform-github-10.Dx6fpseP_kCIly.webp" alt="Choose a repository for the workspace" /></li>
<li>Enter a workspace name (and optionally a description).
<img src="https://david.gardiner.net.au/_astro/terraform-github-11.Ownw56zG_FmLIl.webp" alt="Workspace name and description" /></li>
<li>Now your workspace has been created!
<img src="https://david.gardiner.net.au/_astro/terraform-github-12.BS-biQ19_ZL7j6S.webp" alt="Terraform Cloud - workspace created" /></li>
</ol>
<p>You're now ready to add Terraform files to your GitHub repository. I like to use the Terraform CLI to validate and format my .tf files before I commit them to version control.</p>
<p>After adding a <code>versions.tf</code> file that contains a <code>cloud</code> block (along with any required providers), you can run <code>terraform login</code>:</p>
<pre><code>terraform {
  cloud {
    organization = "flcdrg"
    hostname     = "app.terraform.io"

    workspaces {
      name = "terraform-azure-vm"
    }
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.39.1"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.4.3"
    }
  }
}

provider "azurerm" {
  features {}
}
</code></pre>
<p>A browser window will launch to allow you to create an API token that you can then paste back into the CLI.
<img src="https://david.gardiner.net.au/_astro/terraform-github-13.D0zlzHfX_22JDAk.webp" alt="Terraform Cloud - Create API Token dialog" /></p>
<p>The next thing we need to do is create an Azure service principal that Terraform Cloud can use when deploying to Azure.</p>
<p>In my case, I created a resource group and granted the service principal Contributor access to it (assuming that all the resources you want Terraform to create will live within that resource group). You could also allow the service principal access to the whole subscription if you prefer.</p>
<pre><code>az login
az group create --location westus --resource-group MyResourceGroup
az ad sp create-for-rbac --name &lt;service_principal_name&gt; --role Contributor --scopes /subscriptions/&lt;subscription_id&gt;/resourceGroups/&lt;resourceGroupName&gt;
</code></pre>
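<p>The <code>az ad sp create-for-rbac</code> command prints a JSON object similar to the following (the values here are placeholders); these fields are what you'll copy into the workspace variables:</p>
<pre><code>{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "&lt;service_principal_name&gt;",
  "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
</code></pre>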
<p>Now go back to Terraform Cloud, and after selecting the newly created workspace, select <strong>Variables</strong>.</p>
<p>Under <strong>Workspace variables</strong>, click <strong>Add variable</strong>, then select <strong>Environment variables</strong>. Add a variable for each of the following, copying the corresponding value from the output of creating the service principal (for <code>ARM_CLIENT_SECRET</code>, also tick the <strong>Sensitive</strong> checkbox):</p>
<ul>
<li><code>ARM_CLIENT_ID</code> - appId</li>
<li><code>ARM_CLIENT_SECRET</code> - password</li>
<li><code>ARM_SUBSCRIPTION_ID</code> - id from <code>az account show</code></li>
<li><code>ARM_TENANT_ID</code> - tenant</li>
</ul>
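<p>If you need to look the subscription and tenant IDs up again later, the Azure CLI can print them directly:</p>
<pre><code>az account show --query id --output tsv
az account show --query tenantId --output tsv
</code></pre>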
<p><img src="https://david.gardiner.net.au/_astro/terraform-github-15.B1xUhqQm_1R3cOu.webp" alt="Workspace variables" /></p>
<p>With those variables set, you can now push your Terraform files to the GitHub repository.</p>
<p>The Terraform Cloud workspace is configured to evaluate a plan on pull requests, and on pushes or merges to <code>main</code> it will apply those changes.</p>
<p><img src="https://david.gardiner.net.au/_astro/terraform-github-16.8Os8CkBH_nAxbW.webp" alt="Terraform Cloud - Plan" /></p>
<p>By default, you need to manually confirm before 'apply' will run (you can change the workspace to auto-approve to avoid this).
<img src="https://david.gardiner.net.au/_astro/terraform-github-17.UmJ43VzI_ZgQdBo.webp" alt="Confirm Plan dialog" /></p>
<p>After a short wait, all the Azure resources (including the VM) should be created and ready to use.
<img src="https://david.gardiner.net.au/_astro/terraform-github-18.Dc8KXHP6_Z21m3Dw.webp" alt="Terraform Cloud - changes applied" /></p>
<h2>Virtual machine password</h2>
<p>I'm not hardcoding the password for the virtual machine - rather I'm using the Terraform <a href="https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password"><code>random_password</code></a> resource to generate a random password. The password is not displayed in the logs as it is marked as 'sensitive'. But I will actually need to know the password so I can RDP to the VM. It turns out the password value is saved in Terraform state, and you can examine this via the <strong>States</strong> tab of the workspace.</p>
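<p>The wiring looks something like this (a simplified sketch of what's in the repo; the resource names and lengths may differ):</p>
<pre><code>resource "random_password" "password" {
  length  = 20
  special = true
}

resource "azurerm_windows_virtual_machine" "vm" {
  # ... other required properties ...
  admin_username = "azureuser"
  admin_password = random_password.password.result
}
</code></pre>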
<p><img src="https://david.gardiner.net.au/_astro/terraform-github-19.CMn6EC9g_Z1HwxoH.webp" alt="Terraform Cloud - Workspace state" /></p>
<p>With that, I'm now able to navigate to the VM resource in the Azure Portal and connect via RDP and do what I need to do.</p>
<p>If you wanted to stick with the CLI, you can also use <a href="https://learn.microsoft.com/azure/virtual-machines/windows/connect-rdp?WT.mc_id=DOP-MVP-5001655#connect-to-the-virtual-machine-using-powershell">Azure PowerShell to launch an RDP session</a>.</p>
<h2>Extra configuration</h2>
<p>If you review the Terraform in the repo, you'll notice I also make use of the <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_extension"><code>azurerm_virtual_machine_extension</code></a> resource to run some PowerShell that installs Chocolatey. That just saves me from having to do it manually. If you can automate it, why not!</p>
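<p>For reference, installing Chocolatey via the extension looks roughly like this (a hedged sketch; the exact resource names and install script are in the repo):</p>
<pre><code>resource "azurerm_virtual_machine_extension" "chocolatey" {
  name                 = "install-chocolatey"
  virtual_machine_id   = azurerm_windows_virtual_machine.vm.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  # Runs the standard Chocolatey install script via PowerShell
  settings = jsonencode({
    commandToExecute = "powershell -ExecutionPolicy Bypass -Command \"iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))\""
  })
}
</code></pre>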
<h2>Cleaning up when you're done</h2>
<p>For safety, the virtual machine is set to auto-shutdown in the evening, which will reduce any costs. To completely remove the virtual machine and any associated storage, you can run a "destroy plan".</p>
<p>From the workspace, go to <strong>Settings</strong>, then <strong>Destruction and deletion</strong>, and click <strong>Queue destroy plan</strong>.</p>
<p><img src="https://david.gardiner.net.au/_astro/terraform-github-14.Bz_F0flW_Z1DGknQ.webp" alt="Terraform Cloud - Queue destroy plan" /></p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
  </entry>
</feed>
