<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-AU" xmlns:media="http://search.yahoo.com/mrss/">
  <id>https://david.gardiner.net.au/Azure Pipelines.xml</id>
  <title type="html">David Gardiner - Azure Pipelines</title>
  <updated>2026-03-06T00:21:39.955Z</updated>
  <subtitle>Blog posts tagged with &apos;Azure Pipelines&apos; - A blog of software development, .NET and other interesting things</subtitle>
  <generator uri="https://github.com/flcdrg/astrojs-atom" version="1.0.218">astrojs-atom</generator>
  <author>
    <name>David Gardiner</name>
  </author>
  <link href="https://david.gardiner.net.au/Azure Pipelines.xml" rel="self" type="application/atom+xml"/>
  <link href="https://david.gardiner.net.au/tags/Azure Pipelines" rel="alternate" type="text/html" hreflang="en-AU"/>
  <entry>
    <id>https://david.gardiner.net.au/2026/02/azure-postgresql-upgrade</id>
    <updated>2026-02-28T13:00:00.000+10:30</updated>
    <title>Upgrading Azure Database for PostgreSQL flexible server</title>
    <link href="https://david.gardiner.net.au/2026/02/azure-postgresql-upgrade" rel="alternate" type="text/html" title="Upgrading Azure Database for PostgreSQL flexible server"/>
    <category term="Azure"/>
    <category term="Azure Pipelines"/>
    <category term="Terraform"/>
    <published>2026-02-28T13:00:00.000+10:30</published>
    <summary type="html">
      <![CDATA[How to upgrade the PostgreSQL server in Azure, with examples using Terraform, and some workarounds
for known issues you may encounter during the upgrade process.]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2026/02/azure-postgresql-upgrade">
      <![CDATA[<p>I was working on a project recently that made use of <a href="https://learn.microsoft.com/azure/postgresql/overview?WT.mc_id=DOP-MVP-5001655">Azure Database for PostgreSQL flexible server</a>. The system had been set up a while ago, and so when I was reviewing the resources in the Azure Portal, I noticed a warning banner for the PostgreSQL server:</p>
<pre><code>Your server version will lose standard Azure support on March 31, 2026. Upgrade now to avoid extended support charges starting April 1, 2026.
</code></pre>
<p><img src="https://david.gardiner.net.au/_astro/postgresql-upgrade-old-version.BsaaRaF4_2tq5ef.webp" alt="Screenshot of Azure Portal showing PostgreSQL server with warning about standard support ending 31st March 2026" /></p>
<p>Terraform was being used for Infrastructure as Code, and it looked similar to this:</p>

<pre><code>resource "azurerm_postgresql_flexible_server" "server" {
  name                              = "psql-postgresql-apps-australiaeast"
  resource_group_name               = data.azurerm_resource_group.rg.name
  location                          = data.azurerm_resource_group.rg.location
  version                           = "11"
  delegated_subnet_id               = azurerm_subnet.example.id
  private_dns_zone_id               = azurerm_private_dns_zone.example.id
  public_network_access_enabled     = false
  administrator_login               = "psqladmin"
  administrator_password_wo         = ephemeral.random_password.postgresql_password.result
  administrator_password_wo_version = 1
  zone                              = "1"

  storage_mb   = 32768
  storage_tier = "P4"

  sku_name   = "B_Standard_B1ms"
  depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}
</code></pre>


<p>As you can see from the code and screenshot above, the PostgreSQL version in use was 11. Doing a bit of research, I found version 11 was <a href="https://www.postgresql.org/support/versioning/">first released back in 2018</a>, and the final minor update, 11.22, was released in 2023.</p>
<p>Azure provides standard support for PostgreSQL versions (documented at <a href="https://learn.microsoft.com/en-us/azure/postgresql/configure-maintain/concepts-version-policy?WT.mc_id=DOP-MVP-5001655">Azure Database for PostgreSQL version policy</a>). There is also the option of paying for <a href="https://learn.microsoft.com/en-us/azure/postgresql/configure-maintain/extended-support?WT.mc_id=DOP-MVP-5001655">extended support</a>, though in the case of v11 that only gets you to November this year, so just a few extra months.</p>
<p>In my case, I wanted to do a test of the upgrade process first, so I restored a backup of the existing server to a new resource. This essentially creates an exact copy of the server at the same version.</p>
<p>Even though we were using Infrastructure as Code, I decided to use the Azure Portal to test the upgrade. I figured that if there were any problems, they might be easier to understand than trying to interpret weird Terraform/AzureRM errors.</p>
<p>Following the <a href="https://learn.microsoft.com/en-us/azure/postgresql/configure-maintain/how-to-perform-major-version-upgrade?WT.mc_id=DOP-MVP-5001655">upgrade documentation</a>, I clicked the <strong>Upgrade</strong> button in the Portal.</p>
<p><img src="https://david.gardiner.net.au/_astro/postgresql-upgrade-portal1.WLqDAiM5_ZscVNA.webp" alt="Screenshot of Azure Portal upgrade screen" /></p>
<p>This initiates a deployment which, depending on how much data you have and the particular SKU you're running on (eg. how fast the underlying VM is), may take quite a while. One time it took over an hour - significant, because that can exceed the default Terraform resource timeouts, as well as the pipeline job timeouts.</p>
<p><img src="https://david.gardiner.net.au/_astro/postgresql-upgrade-progress.CC7oJ8mA_Z1fvKmn.webp" alt="Screenshot of Azure Portal showing PostgreSQL resource with upgrade in progress" /></p>
<p>If that succeeds, then you should be good to try the real thing with IaC.</p>
<h2>Upgrading with Terraform</h2>
<p>To upgrade a major version with Terraform, you need to make a couple of changes to your <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/postgresql_flexible_server"><code>azurerm_postgresql_flexible_server</code></a> resource:</p>
<ol>
<li>The <code>version</code> property should be updated to the desired version</li>
<li>The <code>create_mode</code> property should be set to <code>Update</code> (if it wasn't specified previously, the default is <code>Default</code>)</li>
</ol>
<pre><code>resource "azurerm_postgresql_flexible_server" "server" {
  name                              = "psql-postgresql-apps-australiaeast"
  resource_group_name               = data.azurerm_resource_group.rg.name
  location                          = data.azurerm_resource_group.rg.location
  version                           = "17"
  delegated_subnet_id               = azurerm_subnet.example.id
  private_dns_zone_id               = azurerm_private_dns_zone.example.id
  public_network_access_enabled     = false
  administrator_login               = "psqladmin"
  administrator_password_wo         = ephemeral.random_password.postgresql_password.result
  administrator_password_wo_version = 1
  zone                              = "1"
  create_mode                       = "Update"

  storage_mb   = 32768
  storage_tier = "P4"

  sku_name   = "B_Standard_B1ms"
  depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}
</code></pre>
<p>The weird thing (which I assume is a side-effect of how Terraform state works) is that even after you've completed the upgrade, you can't change <code>create_mode</code> back to <code>Default</code> - Terraform will throw an error if you try. Instead, just leave it set to <code>Update</code>; as long as the <code>version</code> property doesn't change, Terraform will leave the server at the same version.</p>
<h3>Adjust your timeouts</h3>
<p>I was using Azure Pipelines, so I added a <code>timeoutInMinutes</code> property to the job and set it to 90 minutes. Be aware that there are <a href="https://learn.microsoft.com/azure/devops/pipelines/process/phases?view=azure-devops&amp;tabs=yaml&amp;WT.mc_id=DOP-MVP-5001655#timeouts">different default and maximum timeouts</a> depending on what kind of build agent you use.</p>
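<p>As a sketch (the job and step names here are illustrative), the job-level timeout looks like this in the pipeline YAML:</p>
<pre><code>jobs:
  - job: terraform_apply
    displayName: "Terraform apply"
    timeoutInMinutes: 90 # default is 60 minutes on Microsoft-hosted agents
    steps:
      - script: echo "Run Terraform here"
</code></pre>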
<p>Likewise the Terraform <code>azurerm_postgresql_flexible_server</code> resource has <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/postgresql_flexible_server#timeouts">default timeouts</a>. You may want to specify a <code>timeout</code> block to extend those values if necessary.</p>
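<p>For example, a <code>timeouts</code> block can be added to the resource (the durations below are illustrative - pick values that suit your environment):</p>
<pre><code>resource "azurerm_postgresql_flexible_server" "server" {
  # ... existing configuration as above ...

  timeouts {
    create = "2h"
    update = "2h"
  }
}
</code></pre>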
<h2>Gotchas</h2>
<p>I hit some compatibility issues with the PostgreSQL instance I was attempting to upgrade. The Portal displayed the following error(s):</p>
<pre><code>The major version upgrade failed precheck. Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading shared_preload_libraries library pg_failover_slots from source version 11 to target version 17 is not supported.;
Upgrading with password authentication mode enabled is not allowed from source version MajorVersion11. Please enable SCRAM and reset the passwords prior to retrying the upgrade.
</code></pre>
<p>There are two issues here:</p>
<ul>
<li>The <code>pg_failover_slots</code> shared preloaded library is <a href="https://learn.microsoft.com/en-au/answers/questions/5730837/attempt-to-upgrade-azure-database-for-postgresql-f">not supported for upgrading</a></li>
<li>Legacy MD5 passwords are deprecated in newer versions, <a href="https://techcommunity.microsoft.com/blog/azuredbsupport/azure-postgresql-lesson-learned-6-major-upgrade-blocked-by-password-auth-the-one/4469545">and "SCRAM" needs to be enabled</a></li>
</ul>
<p>How do we resolve these with Infrastructure as Code? In this case as we're using Terraform, we need to map/import those settings and then we can modify them. We make use of the <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/postgresql_flexible_server_configuration"><code>azurerm_postgresql_flexible_server_configuration</code></a> resource for this.</p>
<p>The <code>value</code> properties should initially match the existing values (ie. make sure that Terraform thinks they are unchanged). A trick to get the existing values is to run Terraform in 'plan' mode, take note of the values it reports, and then copy those into your code.</p>
<pre><code>import {
  to = azurerm_postgresql_flexible_server_configuration.accepted_pasword_auth_method
  id = "${azurerm_resource_group.group.id}/providers/Microsoft.DBforPostgreSQL/flexibleServers/psql-postgresql-apps-australiaeast/configurations/azure.accepted_password_auth_method"
}

resource "azurerm_postgresql_flexible_server_configuration" "accepted_pasword_auth_method" {
  name      = "azure.accepted_password_auth_method"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "md5"
}

import {
  to = azurerm_postgresql_flexible_server_configuration.password_encryption
  id = "${azurerm_resource_group.group.id}/providers/Microsoft.DBforPostgreSQL/flexibleServers/psql-postgresql-apps-australiaeast/configurations/password_encryption"
}

resource "azurerm_postgresql_flexible_server_configuration" "password_encryption" {
  name      = "password_encryption"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "md5"
}

import {
  to = azurerm_postgresql_flexible_server_configuration.shared_preload_libraries
  id = "${azurerm_resource_group.group.id}/providers/Microsoft.DBforPostgreSQL/flexibleServers/psql-postgresql-apps-australiaeast/configurations/shared_preload_libraries"
}

resource "azurerm_postgresql_flexible_server_configuration" "shared_preload_libraries" {
  name      = "shared_preload_libraries"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "anon,auto_explain,pg_cron,pg_failover_slots,pg_hint_plan,pg_partman_bgw,pg_prewarm,pg_stat_statements,pgaudit,pglogical,timescaledb,wal2json"
}
</code></pre>
<p>Once you've got those in place then you can make the changes to remove the upgrade block:</p>
<pre><code>resource "azurerm_postgresql_flexible_server_configuration" "accepted_pasword_auth_method" {
  name      = "azure.accepted_password_auth_method"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "md5,SCRAM-SHA-256"
}

resource "azurerm_postgresql_flexible_server_configuration" "password_encryption" {
  name      = "password_encryption"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "SCRAM-SHA-256"
}

resource "azurerm_postgresql_flexible_server_configuration" "shared_preload_libraries" {
  name      = "shared_preload_libraries"
  server_id = azurerm_postgresql_flexible_server.server.id
  value     = "anon,auto_explain,pg_cron,pg_hint_plan,pg_partman_bgw,pg_prewarm,pg_stat_statements,pgaudit,pglogical,timescaledb,wal2json"
}
</code></pre>
<p>This will allow any existing MD5 passwords to continue to work, but any new passwords will use the more modern SCRAM-SHA-256.</p>
<p>For the <code>shared_preload_libraries</code>, we've removed the offending <code>pg_failover_slots</code> from the list.</p>
<h2>Tips</h2>
<ul>
<li>Temporarily upgrade your server SKU to beefier hardware so the upgrade goes faster. If you're using IaC then make sure you use that to make the change.</li>
<li>Note that if you change the separate storage performance tier (IOPS), <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-performance-tiers?tabs=azure-cli#restrictions">you will need to wait 12 hours before downgrading again</a>.</li>
</ul>
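<p>With Terraform, the temporary SKU bump is just a change to <code>sku_name</code> (the General Purpose value below is an example - choose whatever suits your workload), applied before the upgrade and reverted afterwards:</p>
<pre><code># Temporarily bump from B_Standard_B1ms while performing the upgrade
sku_name = "GP_Standard_D4s_v3"
</code></pre>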
<h2>Completion</h2>
<p>If everything goes to plan, you should end up with your PostgreSQL resource upgraded to the version that you specified. Here's my resource upgraded to 17.7. <a href="https://techcommunity.microsoft.com/blog/adforpostgresql/postgresql-18-now-ga-on-azure-postgres-flexible-server/4469802?WT.mc_id=DOP-MVP-5001655">v18 is actually available</a> but I wasn't offered it due to 'regional capacity constraints', which explains why the 'Upgrade' button is now disabled.</p>
<p><img src="https://david.gardiner.net.au/_astro/postgresql-upgrade-complete.BDSml29o_Z3kvTd.webp" alt="Screenshot of Azure Portal showing PostgreSQL upgrade complete" /></p>
<p>I've published source code for a working example of Azure Database for PostgreSQL flexible server with an Azure Container app and using a VNet at <a href="https://github.com/flcdrg/terraform-azure-postgresql-containerapps">https://github.com/flcdrg/terraform-azure-postgresql-containerapps</a></p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/postgresql-logo.BZ7GfDHR.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/postgresql-logo.BZ7GfDHR.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2025/07/azure-pipeline-template-expression</id>
    <updated>2025-07-14T08:00:00.000+09:30</updated>
    <title>Azure Pipelines template expressions</title>
    <link href="https://david.gardiner.net.au/2025/07/azure-pipeline-template-expression" rel="alternate" type="text/html" title="Azure Pipelines template expressions"/>
    <category term="Azure Pipelines"/>
    <published>2025-07-14T08:00:00.000+09:30</published>
    <summary type="html">
      <![CDATA[Template expressions are a compile-time feature of Azure Pipelines. Learn how they differ to custom conditions
and see some common examples of their usage.]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2025/07/azure-pipeline-template-expression">
      <![CDATA[<p>In my <a href="/2025/06/azure-pipeline-conditionals">last post</a> I wrote about using custom conditions in Azure Pipelines to evaluate whether to skip a step, job or stage at runtime.</p>
<p>Sometimes we can do better though. With template expressions we can not just skip something, we can remove it entirely. We can also use them to optionally insert values in a pipeline (something you can't do with runtime custom conditions).</p>
<p>The important thing to remember is that template expressions are a "compile time" feature. They can only operate on things that are available at compile time. <a href="https://learn.microsoft.com/azure/devops/pipelines/process/set-variables-scripts?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655">Variables set by scripts</a>, and <a href="https://learn.microsoft.com/azure/devops/pipelines/process/variables?view=azure-devops&amp;tabs=yaml%2Cbatch&amp;WT.mc_id=DOP-MVP-5001655#use-output-variables-from-tasks">task output variables</a> are two examples of things that are not available at compile time.</p>
<p>Compare these two Azure Pipeline runs. The first uses custom conditions to decide if the 'Publish Artifact' step is executed or not. Notice the 'Publish Artifact' step is listed, but the icon shown is a white arrow (rather than a green tick)
<img src="https://david.gardiner.net.au/_astro/azure-pipelines-custom-conditions.Be6IaBQz_Z12HnJL.webp" alt="Job showing a step 'Publish Artifact' that was conditionally not executed" /></p>
<p>If we use a template expression instead, then when it evaluates to false the step is not even included in the job!</p>
<p><img src="https://david.gardiner.net.au/_astro/azure-pipelines-template-expressions.kRV13og-_1Lroaa.webp" alt="Job without a 'Publish Artifact' step " /></p>
<p><a href="https://learn.microsoft.com/azure/devops/pipelines/process/template-expressions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655">Template expressions</a> use the syntax <code>${{ }}</code>.</p>
<p>You can reference <code>parameters</code> and <code>variables</code> in template expressions. For the latter, only variables defined in the YAML file, plus most of the <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655">predefined variables</a>, are available. (That page does list which variables can be used in template expressions, but you may need to scroll the page to the right to see that column!)</p>
<p>You can't reference variables that are created by scripts or anything else that is only available at runtime.</p>
<p>You can use <a href="https://learn.microsoft.com/azure/devops/pipelines/process/expressions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655#functions">general functions</a> (the same ones we used previously with runtime Custom Conditions) in template expressions, as well as two special <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/template-expressions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655#template-expression-functions">Template expression functions</a>.</p>
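<p>As a quick illustration (the parameter name and values here are made up), the <code>containsValue()</code> template expression function can drive conditional insertion:</p>
<pre><code>parameters:
  - name: Environments
    type: object
    default: [Dev, Test, Prod]

steps:
  - ${{ if containsValue(parameters.Environments, 'Prod') }}:
      - script: echo "Production is one of the target environments"
</code></pre>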
<h2>Common patterns</h2>
<p>You can see a complete pipeline demonstrating all the following patterns at <a href="https://github.com/flcdrg/azure-pipelines-template-expressions/blob/main/azure-pipelines.yml">https://github.com/flcdrg/azure-pipelines-template-expressions/blob/main/azure-pipelines.yml</a>.</p>
<h3>Conditionally include stage, job or step</h3>
<p>The official documentation calls this <a href="https://learn.microsoft.com/en-au/azure/devops/pipelines/process/template-expressions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655#conditional-insertion">Conditional Insertion</a>.</p>
<p>Here's an example where we only want to publish build artifacts if we're building the main branch:</p>

<pre><code>- ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
    - publish: $(Build.ArtifactStagingDirectory)
      artifact: drop
      displayName: "Publish Artifact"
</code></pre>


<h3>Conditionally set variable</h3>
<p>Using template expressions to conditionally set the values of variables is a common use case.</p>

<pre><code>variables:
  ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
    Environment: "Production"
  ${{ else }}:
    Environment: "Not-Production"
</code></pre>


<p>Note that YAML formatting rules apply - each property must be unique, so you can't repeat the expression line more than once. Instead you would just group additional variable declarations together.</p>
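<p>For example, grouping multiple variables under each expression (the extra <code>LogLevel</code> variable is just for illustration):</p>
<pre><code>variables:
  ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
    Environment: "Production"
    LogLevel: "Warning"
  ${{ else }}:
    Environment: "Not-Production"
    LogLevel: "Debug"
</code></pre>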
<h3>Conditionally set stage or job dependency</h3>
<p><code>dependsOn</code> applies to stages and jobs. We can conditionally include the entire dependency:</p>

<pre><code>${{ if ne(parameters.DependsOn, '') }}:
  dependsOn:
    - Version
</code></pre>


<p>Or when we have multiple dependencies, we can conditionally include an additional dependency. Because <code>dependsOn</code> in this case refers to an array, we need to use the array syntax for our template expression.</p>

<pre><code>dependsOn:
  - Version
  - ${{ if ne(parameters.DependsOn, '') }}:
      - ${{ parameters.DependsOn }}
</code></pre>


<h3>Looping</h3>
<p>The official name for this in the documentation is <a href="https://learn.microsoft.com/azure/devops/pipelines/process/template-expressions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655#iterative-insertion">Iterative insertion</a>.
Loop expressions are a powerful technique that can be used to reduce duplication in your pipelines.</p>
<p>If you need to define an array, the only way I'm aware of doing that is by declaring a parameter, as one of the types supported by parameters is <code>object</code>. As an object, you can use YAML to define it as an array and add sub-properties etc to that as required.</p>

<pre><code>parameters:
  - name: Environments
    type: object
    default:
      - name: Dev
        displayName: "Development"
      - name: Test
        displayName: "Testing"
      - name: Prod
        displayName: "Production"
</code></pre>


<p>I tend to add a <code>displayName</code> on those parameters just to make it clear they're only there to store the array data (and that you probably shouldn't alter the values in the Azure Pipelines web UI if you run the pipeline manually).</p>
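<p>For example (the wording of the <code>displayName</code> is just a suggestion):</p>
<pre><code>parameters:
  - name: Environments
    displayName: "Environment list (internal - do not modify)"
    type: object
    default:
      - name: Dev
        displayName: "Development"
      # ... remaining environments as above
</code></pre>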

<pre><code>- ${{ each env in parameters.Environments }}:
    - stage: DeployTo${{ env.name }}
      jobs:
        - job: DeployTo${{ env.name }}
          displayName: "Deploy to ${{ env.name }}"
          steps:
            - script: echo "Deploying to ${{ env.name }} environment..."
              displayName: "Deploy to ${{ env.name }}"
</code></pre>


<h2>Conclusion</h2>
<p>Template expressions are a powerful feature. Curiously, despite GitHub Actions having many similarities to Azure Pipelines, this is one aspect that they didn't port over.</p>
<p>Often I see custom conditions being used where template expressions would be a better fit. It's worth considering if using them more could simplify and improve your pipelines.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2025/06/azure-pipeline-conditionals</id>
    <updated>2025-06-30T07:00:00.000+09:30</updated>
    <title>Azure Pipeline conditionals</title>
    <link href="https://david.gardiner.net.au/2025/06/azure-pipeline-conditionals" rel="alternate" type="text/html" title="Azure Pipeline conditionals"/>
    <category term="Azure Pipelines"/>
    <published>2025-06-30T07:00:00.000+09:30</published>
    <summary type="html">
      <![CDATA[How to use custom conditions in your Azure Pipelines to evaluate at runtime whether to execute a given step, job or stage.]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2025/06/azure-pipeline-conditionals">
      <![CDATA[<p>There are two main ways to make parts of your Azure Pipeline conditional:</p>
<ol>
<li>Add a custom <code>condition</code> to the step, job or stage.</li>
<li>Use conditional insertion with template expressions.</li>
</ol>
<p>In this post we'll look at custom conditions.</p>
<h2>Custom conditions</h2>
<p><a href="https://learn.microsoft.com/azure/devops/pipelines/process/conditions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655">Custom conditions</a> are evaluated at runtime.</p>
<p>Here's an example of a step with a custom condition which causes it to be skipped if the pipeline runs on a branch other than 'main':</p>
<pre><code>- script: echo "hello world"
  condition: eq(variables['Build.SourceBranchName'], 'main')
</code></pre>
<p>You can use any of the <a href="https://learn.microsoft.com/azure/devops/pipelines/process/expressions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655#job-status-functions">Job status check functions</a> for the condition expression, or to form part of it:</p>
<ul>
<li><code>always()</code> - evaluates to <code>True</code></li>
<li><code>canceled()</code> - evaluates to <code>True</code> if the pipeline was cancelled</li>
<li><code>failed()</code> - evaluates to <code>True</code> if any previous dependent job failed.</li>
<li><code>failed(JOBNAME)</code> - evaluates to <code>True</code> if the named job failed.</li>
<li><code>succeeded()</code> - evaluates to <code>True</code> if all previous dependent jobs succeeded or partially succeeded</li>
<li><code>succeeded(JOBNAME)</code> - evaluates to <code>True</code> if the named job succeeded</li>
<li><code>succeededOrFailed()</code> - evaluates to <code>True</code> regardless of any dependent jobs succeeding or failing</li>
<li><code>succeededOrFailed(JOBNAME)</code> - evaluates to <code>True</code> if the named job succeeded or failed</li>
</ul>
<p>Often you'll combine these with the <a href="https://learn.microsoft.com/azure/devops/pipelines/process/expressions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655#not"><code>not()</code></a> function. For example:</p>
<ul>
<li><code>not(canceled())</code> - evaluates to <code>True</code> if no dependent jobs were cancelled. This is often the best choice where there's a chance one of the dependent jobs may have been skipped (which means it has neither succeeded nor failed)</li>
<li><code>not(always())</code> - evaluates to <code>False</code>. Useful if you wish to ensure a step, job or stage is always skipped, for example as a temporary measure while you're debugging a problem with a pipeline.</li>
</ul>
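<p>For example, a clean-up step that should run even when an earlier step failed, but not when the run was cancelled (the step contents are illustrative):</p>
<pre><code>- script: echo "Cleaning up temporary resources"
  displayName: "Clean up"
  condition: not(canceled())
</code></pre>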
<p>You can reference predefined and custom pipeline variables in the expression. In addition to the <code>not()</code> function we've just seen, the other functions I most commonly use are:</p>
<ul>
<li><code>eq()</code> - Evaluates to <code>True</code> if the two parameters are equal (string comparisons are case-insensitive)</li>
<li><code>ne()</code> - Evaluates to <code>True</code> if the two parameters are not equal</li>
<li><code>and()</code> - Evaluates to <code>True</code> if all the parameters (2 or more) are <code>True</code></li>
<li><code>or()</code> - Evaluates to <code>True</code> if any of the parameters (2 or more) are <code>True</code></li>
</ul>
<p>There are <a href="https://learn.microsoft.com/azure/devops/pipelines/process/expressions?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655#functions">more functions documented</a> that may be useful in more unique scenarios.</p>
<p>Be aware that as soon as you add a custom condition then the evaluation of the expression will determine whether that step, job or stage is executed. This can mean it ignores any previous failures or cancellations (which may not be what you intended!)</p>
<p>eg. This step will always be executed when the current branch is 'main', even if previous steps have failed.</p>
<pre><code>- script: echo "hello world"
  condition: eq(variables['Build.SourceBranchName'], 'main')
</code></pre>
<p>To preserve the more common behaviour of skipping the step if any previous steps have failed you need to use this approach:</p>
<pre><code>- script: echo "hello world"
  condition: and(succeeded(), eq(variables['Build.SourceBranchName'], 'main'))
</code></pre>
<p>This also means that the condition in the next example is effectively redundant. If you see code like this then I'd recommend deleting the condition - it's just noise!</p>
<pre><code>- script: echo "hello world"
  condition: succeeded()
</code></pre>
<p>Another common scenario is when a task creates output variables that you can then use to determine if subsequent tasks need to be run. The <a href="https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform">Terraform tasks</a> are a good example - if the 'Plan' task does not identify any required changes, then you can safely skip the 'Apply' task.</p>
<p>eg.</p>
<pre><code>- task: TerraformCLI@2
  displayName: "Terraform: plan"
  inputs:
    command: plan
    workingDirectory: "$(TerraformSourceDirectory)"
    commandOptions: -no-color -input=false -detailed-exitcode
    environmentServiceName: Azure MSDN - rg-tfupgrade-australiasoutheast
    publishPlanResults: Plan
    allowTelemetryCollection: false

- task: TerraformCLI@2
  displayName: "Terraform: apply"
  condition: and(succeeded(), eq(variables['TERRAFORM_PLAN_HAS_CHANGES'], 'true'))
  inputs:
    command: apply
    workingDirectory: "$(TerraformSourceDirectory)"
    commandOptions: -no-color -input=false -auto-approve
    allowTelemetryCollection: false
</code></pre>
<p>If you want to use a condition where the expression needs to reference an output variable from a previous job or stage, then you will need to first declare that variable in the current job or stage's variable block. You can then reference it in the condition expression.</p>
<p>eg. For jobs:</p>
<pre><code>      - job: Job1
        steps:
          - bash: echo "##vso[task.setvariable variable=my_Job1_OutputVar;isOutput=true]Variable set in stepVar_Job1"
            name: stepVar_Job1

      - job: Job2
        dependsOn: Job1
        condition: and(succeeded(), eq( variables.varFrom_Job1, 'Variable set in stepVar_Job1'))
        variables:
          varFrom_Job1: $[ dependencies.Job1.outputs['stepVar_Job1.my_Job1_OutputVar'] ]
</code></pre>
<p>and for stages (note the use of <code>stageDependencies</code>):</p>
<pre><code>  - stage: Stage1
    jobs:
      - job: Stage1Job1
        steps:
          - bash: echo "##vso[task.setvariable variable=my_Stage1Job1_OutputVar;isOutput=true]Variable set in stepVar_Stage1Job1"
            name: stepVar_Stage1Job1

  - stage: Stage3
    displayName: Stage 3
    dependsOn: Stage1
    condition: and(succeeded(), eq(variables.varFrom_Stage1Job1, 'Variable set in stepVar_Stage1Job1'))
    variables:
      varFrom_Stage1Job1: $[ stageDependencies.Stage1.Stage1Job1.outputs['stepVar_Stage1Job1.my_Stage1Job1_OutputVar'] ]
</code></pre>
<p>Take a look at the pipelines defined in my <a href="https://github.com/flcdrg/azure-pipelines-variables">azure-pipelines-variables GitHub repository</a> for more examples of these.</p>
<p>Here's an example of a pipeline run with custom conditions similar to the code excerpts above:</p>
<p><img src="https://david.gardiner.net.au/_astro/azure-pipeline-custom-conditions.B1_FfUuy_4xYdP.webp" alt="Screenshot of pipeline run with custom conditions. A conditional step in the first job has been executed. Stage shows that it was not executed as the condition evaluated to false" /></p>
<p>For activities that were skipped, when you select the specific task, job or stage, you can view the conditional expression and the actual parameters that were used in its evaluation to understand why it resulted in a <code>False</code> value. In the screenshot above, notice that while the <code>succeeded()</code> function evaluated to <code>True</code>, the <code>ne()</code> function did not, and because those two were combined with an <code>and()</code> then the final result was also <code>False</code>.</p>
<p>In the next post we'll look at conditional insertion with template expressions and discuss when you'd use that approach over custom conditions.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2024/12/sonarcloud</id>
    <updated>2024-12-15T22:00:00.000+10:30</updated>
    <title>.NET Code Coverage in Azure DevOps and SonarCloud</title>
    <link href="https://david.gardiner.net.au/2024/12/sonarcloud" rel="alternate" type="text/html" title=".NET Code Coverage in Azure DevOps and SonarCloud"/>
    <category term="Azure DevOps"/>
    <category term="Azure Pipelines"/>
    <published>2024-12-15T22:00:00.000+10:30</published>
    <summary type="html">
      <![CDATA[Using SonarCloud to display code coverage with a .NET application managed with Azure DevOps]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2024/12/sonarcloud">
      <![CDATA[<p>Sonar offers some really useful products for analysing the quality of your application's source code. There's a great mix of free and paid products, including SonarQube Cloud (formerly known as SonarCloud), SonarQube Server (for on-prem), and <a href="https://docs.sonarsource.com/sonarqube-for-ide/visual-studio/">SonarQube for IDE</a> (formerly SonarLint) static code analysers for IntelliJ, Visual Studio, VS Code and Eclipse.</p>
<p>I was looking to integrate an Azure DevOps project containing a .NET application with SonarQube Cloud, and in particular to include code coverage data both in Azure Pipelines (so you can view the coverage in each pipeline run) and in SonarQube Cloud.</p>
<p>This process is quite similar if you're using the self-hosted SonarQube Server product, though note that SonarQube Server uses different Azure Pipelines tasks, provided by a separate extension.</p>
<p>A sample project can be found at <a href="https://dev.azure.com/gardiner/SonarCloudDemo">https://dev.azure.com/gardiner/SonarCloudDemo</a></p>
<h2>Prerequisites</h2>
<ul>
<li>You have a SonarQube Cloud account.</li>
<li>You've configured it to be integrated with your Azure DevOps organisation.</li>
<li>You've installed the <a href="https://marketplace.visualstudio.com/items?itemName=SonarSource.sonarcloud">SonarQube Cloud extension</a> (or <a href="https://marketplace.visualstudio.com/items?itemName=SonarSource.sonarqube">SonarQube Server extension</a> if you're using SonarQube Server)</li>
<li>You've created a service connection in the Azure DevOps project pointing to SonarQube Cloud.</li>
</ul>
<p>I've created a .NET solution which contains a simple ASP.NET web application and an xUnit test project.</p>
<p>By default, when you add a new xUnit test project, it includes a reference to the <a href="https://www.nuget.org/packages/coverlet.collector">coverlet.collector</a> NuGet package. This implements a 'Data Collector' for the VSTest platform. Normally you'd run this via:</p>
<pre><code>dotnet test --collect:"XPlat Code Coverage"
</code></pre>
<p>You would then end up with a <code>TestResults</code> subdirectory containing a <code>coverage.cobertura.xml</code> file. The problem is that the XML file is actually one level deeper than that - VSTest creates a GUID-named subdirectory under <code>TestResults</code> - so you have to go searching for the file; there's no way to ensure it gets created in a known location.</p>
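<p>For example, the only reliable way to locate the report afterwards is to search for it (a sketch; the GUID directory name is different on every run):</p>
<pre><code>find . -path '*TestResults*' -name 'coverage.cobertura.xml'
</code></pre>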
<p>It turns out that's a problem for Sonar, as the SonarCloudPrepare task needs to be told where the code coverage file is located, and unfortunately that property doesn't support wildcards!</p>
<p>We can solve that problem by removing the reference to <code>coverlet.collector</code>, and instead adding a package reference to <code>coverlet.msbuild</code>.</p>
<pre><code>dotnet remove package coverlet.collector
dotnet add package coverlet.msbuild
</code></pre>
<p>To collect code coverage information with this package, you run it like this:</p>
<pre><code>dotnet test /p:CollectCoverage=true
</code></pre>
<p>But more importantly, it supports additional parameters so we can now fix the location of output files. The <code>CoverletOutput</code> property lets us define the directory (relative to the test project) where output files will be written.</p>
<pre><code>dotnet test /p:CollectCoverage=true /p:CoverletOutput='./results/coverage' /p:CoverletOutputFormat=cobertura
</code></pre>
<p>Notice that I've set <code>CoverletOutput</code> to not just the directory (<code>results</code>), but also the first part of the coverage filename (<code>coverage</code>).</p>
<p>In the pipeline task, you can let SonarQube know where the file is by setting <code>sonar.cs.opencover.reportsPaths</code> like this:</p>
<pre><code>  - task: SonarCloudPrepare@3
    inputs:
      SonarQube: "SonarCloud"
      organization: "gardiner"
      scannerMode: "dotnet"
      projectKey: "Gardiner_SonarCloudDemo"
      projectName: "SonarCloudDemo"
      extraProperties: |
        sonar.cs.opencover.reportsPaths=$(Build.SourcesDirectory)/Tests/results/coverage.opencover.xml
</code></pre>
<h2>SonarQube and Azure Pipelines coverage</h2>
<p>So now we've solved the problem of where the coverage file will be saved. Can we also deliver the coverage data to both SonarQube <em>and</em> Azure Pipelines?</p>
<p>Let's review what we need to make that happen.</p>
<p>According to <a href="https://github.com/coverlet-coverage/coverlet/blob/master/Documentation/MSBuildIntegration.md">the docs for coverlet.msbuild</a>, it supports generating the following formats:</p>
<ul>
<li>json (default)</li>
<li>lcov</li>
<li>opencover</li>
<li>cobertura</li>
<li>teamcity*</li>
</ul>
<p>(The TeamCity format doesn't create a file; it just writes special service messages to standard output that TeamCity recognises.)</p>
<p>According to <a href="https://docs.sonarsource.com/sonarcloud/enriching/test-coverage/dotnet-test-coverage/">the docs for SonarCloud</a>, it supports the following formats for .NET code coverage:</p>
<ul>
<li>Visual Studio Code Coverage</li>
<li>dotnet-coverage Code Coverage</li>
<li>dotCover</li>
<li>OpenCover</li>
<li>Coverlet (OpenCover format)</li>
<li><a href="https://docs.sonarsource.com/sonarcloud/enriching/test-coverage/generic-test-data/">Generic test data</a></li>
</ul>
<p>The docs for the Azure Pipelines <a href="https://learn.microsoft.com/azure/devops/pipelines/tasks/reference/publish-code-coverage-results-v2?view=azure-pipelines&amp;WT.mc_id=DOP-MVP-5001655">PublishCodeCoverageResults@2 task</a> don't actually mention which formats are supported (hopefully this will be fixed soon). But in the <a href="https://devblogs.microsoft.com/devops/new-pccr-task/">blog post that announced the availability of the v2 task</a> the following formats were mentioned (including ones from the v1 task):</p>
<ul>
<li>Cobertura</li>
<li>JaCoCo</li>
<li>.coverage</li>
<li>.covx</li>
</ul>
<p>So unfortunately there isn't a single format that all three components understand. Instead we will have to ask <code>coverlet.msbuild</code> to generate two output files - <strong>OpenCover</strong> for SonarQube, and <strong>Cobertura</strong> for Azure Pipelines.</p>
<p>We want to generate two outputs, but there is a <a href="https://github.com/coverlet-coverage/coverlet/blob/master/Documentation/MSBuildIntegration.md#note-for-linux-users">known problem with trying to pass in parameters to dotnet test on Linux</a>. The workaround is to set properties in the csproj file instead.</p>
<pre><code>&lt;PropertyGroup&gt;
  &lt;CoverletOutputFormat&gt;opencover,cobertura&lt;/CoverletOutputFormat&gt;
&lt;/PropertyGroup&gt;
</code></pre>
<p>Our Azure Pipeline should look something like this:</p>
<pre><code>steps:
  - checkout: self
    fetchDepth: 0
    
  - task: SonarCloudPrepare@3
    inputs:
      SonarQube: "SonarCloud"
      organization: "gardiner"
      scannerMode: "dotnet"
      projectKey: "Gardiner_SonarCloudDemo"
      projectName: "SonarCloudDemo"
      extraProperties: |
        # Additional properties that will be passed to the scanner, put one key=value per line

        # Disable Multi-Language analysis
        sonar.scanner.scanAll=false

        # Configure location of the OpenCover report
        sonar.cs.opencover.reportsPaths=$(Build.SourcesDirectory)/Tests/results/coverage.opencover.xml

  - task: DotNetCoreCLI@2
    inputs:
      command: build

  - task: DotNetCoreCLI@2
    inputs:
      command: test
      projects: "Tests/Tests.csproj"
      arguments: "/p:CollectCoverage=true /p:CoverletOutput=results/coverage"

  - task: SonarCloudAnalyze@3
    inputs:
      jdkversion: "JAVA_HOME_17_X64"

  - task: SonarCloudPublish@3
    inputs:
      pollingTimeoutSec: "300"

  - task: PublishCodeCoverageResults@2
    inputs:
      summaryFileLocation: "$(Build.SourcesDirectory)/Tests/results/coverage.cobertura.xml"
      failIfCoverageEmpty: true
</code></pre>
<p>A few things to point out:</p>
<ul>
<li>We're doing a full Git clone (not a shallow one) so that SonarQube can do a proper analysis. This avoids warnings like these:<ul>
<li>[INFO]  SonarQube Cloud: Analysis succeeded with warning: Could not find ref 'main' in refs/heads, refs/remotes/upstream or refs/remotes/origin. You may see unexpected issues and changes. Please make sure to fetch this ref before pull request analysis.</li>
<li>[INFO]  SonarQube Cloud: Analysis succeeded with warning: Shallow clone detected during the analysis. Some files will miss SCM information. This will affect features like auto-assignment of issues. Please configure your build to disable shallow clone.</li>
</ul>
</li>
<li>Set <code>sonar.scanner.scanAll=false</code> to avoid this warning:<ul>
<li>[INFO]  SonarQube Cloud: Analysis succeeded with warning: Multi-Language analysis is enabled. If this was not intended and you have issues such as hitting your LOC limit or analyzing unwanted files, please set "/d:sonar.scanner.scanAll=false" in the begin step.</li>
</ul>
</li>
</ul>
<p>And now we can view our code coverage in SonarQube:</p>
<p><img src="https://david.gardiner.net.au/_astro/coverage-sonarqube1.B66ordRi_Z18g5QJ.webp" alt="SonarCloud project overview showing code coverage history" /></p>
<p><img src="https://david.gardiner.net.au/_astro/coverage-sonarqube2.B62u2ZY4_ai3Xi.webp" alt="SonarCloud pull request file code coverage summary" /></p>
<p>And in Azure Pipelines!</p>
<p><img src="https://david.gardiner.net.au/_astro/coverage-azure-pipelines.3J7EQ4Iq_Z16OkpR.webp" alt="Azure Pipeline run showing code coverage tab" /></p>
<p>Check out the example project at <a href="https://dev.azure.com/gardiner/_git/SonarCloudDemo">https://dev.azure.com/gardiner/_git/SonarCloudDemo</a>, and you can view the SonarQube analysis at <a href="https://sonarcloud.io/project/overview?id=Gardiner_SonarCloudDemo">https://sonarcloud.io/project/overview?id=Gardiner_SonarCloudDemo</a></p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2024/05/docker-run-mount</id>
    <updated>2024-05-10T17:30:00.000+09:30</updated>
    <title>Docker run from an Azure Pipeline Container Job with a volume mount</title>
    <link href="https://david.gardiner.net.au/2024/05/docker-run-mount" rel="alternate" type="text/html" title="Docker run from an Azure Pipeline Container Job with a volume mount"/>
    <category term="Azure DevOps"/>
    <category term="Azure Pipelines"/>
    <published>2024-05-10T17:30:00.000+09:30</published>
    <summary type="html">
      <![CDATA[This caught me out today. I was trying to run a Docker container directly from a script task, where the pipeline job was already running in a container (as a Container Job), similar to this: The bit that was failing was the --mount, with the following error message:]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2024/05/docker-run-mount">
      <![CDATA[<p>This caught me out today. I was trying to run a Docker container directly from a script task, where the pipeline job was already running in a container (as a <a href="https://learn.microsoft.com/azure/devops/pipelines/process/container-phases?view=azure-devops&amp;WT.mc_id=DOP-MVP-5001655">Container Job</a>), similar to this:</p>
<pre><code>  - job: MyJob
    container:
      image: my-container-job-image:latest

    steps:
      - script: |
          docker run --mount type=bind,source="$(pwd)",target=/home/src --rm -w /home/src my-container:latest
</code></pre>
<p>The bit that was failing was the <code>--mount</code>, with the following error message:</p>
<pre><code>docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /__w/3/s.
</code></pre>
<p>Eventually, I realised the problem. By default, when a job runs as a Container Job, all of its tasks also run in the context of that container. So <code>$(pwd)</code> was resolving to <code>/__w/3/s</code>, which happens to be the default working directory, and also where your source code is mapped to (via a volume mount that you can see by viewing the output of the "Initialize containers" step).</p>
<p>But when you invoke <code>docker run</code>, Docker doesn't try to run the new container inside the existing Container Job container; rather, it runs alongside it! So any paths you pass to Docker need to be relative to the host machine, not to the inside of the container job.</p>
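<p>You can see how the path gets baked in before Docker is even involved. A minimal sketch (just shell expansion; no Docker daemon needed):</p>
<pre><code># The shell expands $(pwd) in the environment where the script runs, and
# the Docker daemon then resolves that literal path against its own
# filesystem - the host's - when creating the bind mount.
cd /tmp
echo docker run --mount type=bind,source="$(pwd)",target=/home/src my-container:latest
# prints: docker run --mount type=bind,source=/tmp,target=/home/src my-container:latest
</code></pre>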
<p>In my case, the solution was to add a <code>target: host</code> property to the script task, so that the entire script now runs in the context of the host rather than the container. e.g.</p>
<pre><code>      - script: |
          docker run --mount type=bind,source="$(pwd)",target=/home/src --rm -w /home/src my-container:latest
        target: host
</code></pre>
<p>Now when the pipeline runs, <code>$(pwd)</code> will resolve to <code>/agent/_work/3/s</code> (which is the actual directory on the host machine), and the mount will work correctly!</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/12/migrate-terraform-resources-part2</id>
    <updated>2023-12-26T16:30:00.000+10:30</updated>
    <title>Migrating deprecated Terraform resources (part 2)</title>
    <link href="https://david.gardiner.net.au/2023/12/migrate-terraform-resources-part2" rel="alternate" type="text/html" title="Migrating deprecated Terraform resources (part 2)"/>
    <category term="Azure Pipelines"/>
    <category term="DevOps"/>
    <category term="Terraform"/>
    <published>2023-12-26T16:30:00.000+10:30</published>
    <summary type="html">
      <![CDATA[In my previous post, I showed how to migrate deprecated Terraform resources to a supported resource type. But I hinted that there are some gotchas to be aware of. Here are some issues that I have encountered and how to work around them. The import statement will fail if you try to deploy to a brand new empty environment. It makes sense when you think about it - how can you import a resource that doesn't exist yet? Unfortunately, there's no way to make the import statement conditional. …]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/12/migrate-terraform-resources-part2">
      <![CDATA[<p>In my <a href="/2023/12/migrate-terraform-resources">previous post</a>, I showed how to migrate deprecated Terraform resources to a supported resource type. But I hinted that there are some gotchas to be aware of. Here are some issues that I have encountered and how to work around them.</p>
<h2>Deploying to a new environment</h2>
<p>The <code>import</code> statement will fail if you try to deploy to a brand new empty environment. It makes sense when you think about it - how can you import a resource that doesn't exist yet? Unfortunately, there's no way to make the <code>import</code> statement conditional.</p>
<p>An example of this would be if your application has been in development for a while, and to assist in testing you now want to create a new UAT environment. There are no resources in the UAT environment yet, so the <code>import</code> statement will fail.</p>
<pre><code>Initializing plugins and modules...
data.azurerm_resource_group.group: Refreshing...
data.azurerm_client_config.client: Refreshing...
data.azurerm_client_config.client: Refresh complete after 0s [id=xxxxxxxxxxxxx=]
data.azurerm_resource_group.group: Refresh complete after 0s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast]
azurerm_service_plan.plan: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]
╷
│ Error: Cannot import non-existent remote object
│
│ While attempting to import an existing object to
│ "azurerm_service_plan.plan", the provider detected that no object exists
│ with the given id. Only pre-existing objects can be imported; check that
│ the id is correct and that it is associated with the provider's configured
│ region or endpoint, or use "terraform apply" to create a new remote object
│ for this resource.
╵
Operation failed: failed running terraform plan (exit 1)
</code></pre>
<h2>Deploying to an environment where resources may have been deleted</h2>
<p>This is similar to the previous scenario. It's one I've encountered where we have non-production environments that we 'mothball' when they're not being actively used, to save money. We selectively delete resources, and then re-provision them when the environment is needed again. Again, the <code>import</code> statement will fail if it references a deleted resource.</p>
<h2>A Solution</h2>
<p>The workaround to support these scenarios is to not use the <code>import</code> statement in your HCL code, but instead use the <code>terraform import</code> command. Because we're calling the command from the command line, we can make it conditional. e.g.</p>
<pre><code>if az appservice plan show --name plan-tfupgrade-australiasoutheast --resource-group $ARM_RESOURCE_GROUP --query id --output tsv &gt; /dev/null 2&gt;&amp;1; then
  terraform import azurerm_service_plan.plan /subscriptions/$ARM_SUBSCRIPTION_ID/resourceGroups/$ARM_RESOURCE_GROUP/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast
else
  echo "Resource plan-tfupgrade-australiasoutheast does not exist in Azure"
fi
</code></pre>
<p>Repeat this pattern for all resources that need to be imported. Also, take care when referencing the Azure resource names. They need to be correct for the detection code to work as expected!</p>
<p>It's not quite as elegant as using the <code>import</code> statement in your HCL code, but it does the job.</p>
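<p>If you have many resources to migrate, the pattern can be wrapped in a small shell function. A sketch - <code>import_if_exists</code> is a hypothetical helper name, and it uses <code>az resource show --ids</code> so the same function works for any resource type:</p>
<pre><code># Import a resource into Terraform state only if it exists in Azure.
import_if_exists() {
  local address="$1" resource_id="$2"
  # az returns a non-zero exit code when the resource is not found
  if az resource show --ids "$resource_id" --only-show-errors --output none; then
    terraform import "$address" "$resource_id"
  else
    echo "Resource $resource_id does not exist in Azure"
  fi
}

import_if_exists azurerm_service_plan.plan "/subscriptions/$ARM_SUBSCRIPTION_ID/resourceGroups/$ARM_RESOURCE_GROUP/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"
</code></pre>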
<h2>Example output</h2>
<p>Here's an example of the output from the <code>terraform import</code> command:</p>
<pre><code>data.azurerm_resource_group.group: Reading...
data.azurerm_client_config.client: Reading...
data.azurerm_client_config.client: Read complete after 0s [id=xxxxxxxxxxxx=]
data.azurerm_resource_group.group: Read complete after 0s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast]
azurerm_service_plan.plan: Importing from ID "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"...
azurerm_service_plan.plan: Import prepared!
  Prepared azurerm_service_plan for import
azurerm_service_plan.plan: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]

Import successful!
</code></pre>
<p>And the resultant plan shows no additional changes are necessary, which is just what we like to see!</p>
<pre><code>data.azurerm_resource_group.group: Reading...
data.azurerm_client_config.client: Reading...
data.azurerm_client_config.client: Read complete after 0s [id=xxxxxxxxxxxxx=]
data.azurerm_resource_group.group: Read complete after 0s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast]
azurerm_service_plan.plan: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]
azurerm_linux_web_app.appservice: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/sites/appservice-tfupgrade-australiasoutheast]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
</code></pre>
<p>And if we apply this to an empty environment, it also runs successfully!</p>
<pre><code>azurerm_service_plan.plan: Creating...
azurerm_service_plan.plan: Still creating... [10s elapsed]
azurerm_service_plan.plan: Creation complete after 10s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]
azurerm_linux_web_app.appservice: Creating...
azurerm_linux_web_app.appservice: Still creating... [10s elapsed]
azurerm_linux_web_app.appservice: Still creating... [20s elapsed]
azurerm_linux_web_app.appservice: Still creating... [30s elapsed]
azurerm_linux_web_app.appservice: Still creating... [40s elapsed]
azurerm_linux_web_app.appservice: Still creating... [50s elapsed]
azurerm_linux_web_app.appservice: Creation complete after 56s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/sites/appservice-tfupgrade-australiasoutheast]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
</code></pre>
<h2>Post-migration clean up</h2>
<p>Whether you use the <code>import</code> statement or the <code>terraform import</code> command, once you've migrated your Terraform resources in all existing environments, I recommend you remove the migration code from HCL and pipeline YAML files.</p>
<p>Remove the <code>import</code> statements as they have no further purpose, and you only risk encountering the issue mentioned above if you ever want to deploy to a new environment.</p>
<p>There's less risk of leaving the conditional <code>terraform import</code> commands in the pipeline YAML files, but as long as they remain they are slowing the pipeline down. Remove them to keep your pipelines running as fast as possible and to keep your YAML files as simple as possible. If you ever need to do another migration in the future, you'll have the commands in your source control history to refer to.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/12/migrate-terraform-resources</id>
    <updated>2023-12-04T08:00:00.000+10:30</updated>
    <title>Migrating deprecated Terraform resources</title>
    <link href="https://david.gardiner.net.au/2023/12/migrate-terraform-resources" rel="alternate" type="text/html" title="Migrating deprecated Terraform resources"/>
    <category term="Azure Pipelines"/>
    <category term="DevOps"/>
    <category term="Terraform"/>
    <published>2023-12-04T08:00:00.000+10:30</published>
    <summary type="html">
      <![CDATA[One of the challenges with using Terraform for your infrastructure as code is that the providers (that interact with cloud providers like Azure) are updated very frequently, and especially with major version releases this includes deprecating specific resource types. For example, when the Azure provider (AzureRM) version 3.0 was released, it deprecated many resource types and data sources. Some of these still exist in version 3, but are deprecated, will not receive any updates, and will be removed in version 4. Others have already been removed entirely. …]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/12/migrate-terraform-resources">
      <![CDATA[<p>One of the challenges with using Terraform for your infrastructure as code is that the providers (that interact with cloud providers like Azure) are updated very frequently, and especially with major version releases this includes deprecating specific resource types. For example, when the Azure provider (AzureRM) version 3.0 was released, it deprecated many resource types and data sources. Some of these still exist in version 3, but are <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/3.0-upgrade-guide#removal-of-deprecated-fields-data-sources-and-resources">deprecated, will not receive any updates, and will be removed in version 4</a>. Others have already been removed entirely.</p>
<p><img src="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7_Z21UAf8.webp" alt="Terraform logo" /></p>
<p><em>I've created an example repo that demonstrates the migration process outlined here. Find it at <a href="https://github.com/flcdrg/terraform-azure-upgrade-resources">https://github.com/flcdrg/terraform-azure-upgrade-resources</a>.</em></p>
<p>While this post uses Azure and Azure Pipelines, the same principles should apply for other cloud providers and CI/CD systems.</p>
<p>To set the scene, here's some Terraform code that creates an Azure App Service Plan and an App Service (<a href="https://github.com/flcdrg/terraform-azure-upgrade-resources/blob/main/v2/app-service.tf">src</a>). The resource types are from v2.x AzureRM provider. Bear in mind, the last release of v2 was <a href="https://github.com/hashicorp/terraform-provider-azurerm/releases/tag/v2.99.0">v2.99.0</a> in March 2022.</p>
<pre><code># https://registry.terraform.io/providers/hashicorp/azurerm/2.99.0/docs/resources/app_service_plan
resource "azurerm_app_service_plan" "plan" {
  name                = "plan-tfupgrade-australiasoutheast"
  resource_group_name = data.azurerm_resource_group.group.name
  location            = data.azurerm_resource_group.group.location
  kind                = "Linux"
  reserved            = true
  sku {
    tier = "Basic"
    size = "B1"
  }
}

# https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/app_service
resource "azurerm_app_service" "appservice" {
  app_service_plan_id = azurerm_app_service_plan.plan.id
  name                = "appservice-tfupgrade-australiasoutheast"
  location            = data.azurerm_resource_group.group.location
  resource_group_name = data.azurerm_resource_group.group.name
  https_only          = true

  app_settings = {
    "TEST" = "TEST"
  }

  site_config {
    always_on                 = true
    ftps_state                = "Disabled"
    http2_enabled             = true
    linux_fx_version          = "DOTNETCORE|6.0"
    min_tls_version           = "1.2"
    use_32_bit_worker_process = false
  }
  identity {
    type = "SystemAssigned"
  }
}
</code></pre>
<p>The documentation for AzureRM v3.x shows that these resource types are deprecated and will be completely removed in v4.x. In addition, as these resource types are not being updated, they don't support the latest features of Azure App Services, such as the new .NET 8 runtime.</p>
<p>So how can we switch to the <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/service_plan">azurerm_service_plan</a> and <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_web_app">azurerm_linux_web_app</a> resource types?</p>
<p>You might think it's just a matter of changing the resource types and updating a few properties. But if you tried that, you'd discover that Terraform will try to delete the existing resources and create new ones. This is because the resource types are different, and Terraform doesn't know they are actually the same thing, because the state representation of those resources is different.</p>
<p>Instead, we need to let Terraform know that the Azure resources that have already been created map to the new Terraform resource types we've defined in our configuration. In addition, we want to do this in a testable way using a pull request to verify that our changes look correct before we merge them into the main branch.</p>
<p>The approach we'll take is to make use of the relatively new <a href="https://developer.hashicorp.com/terraform/language/import"><code>import</code> block</a> language feature. (In a future blog post I'll cover when you might consider using the <code>terraform import</code> CLI command instead).</p>
<p>By using the <code>import</code> block, we can tell Terraform that the existing resources in Azure should be mapped to the new resource types we've defined in Terraform configuration. This means that Terraform will not try to delete the existing resources and create new ones. Instead, it will update the existing resources to match the Terraform configuration.</p>
<p>In the following example, we're indicating that the Azure resource with the resource ID <code>/subscriptions/.../resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast</code> should be mapped to the <code>azurerm_service_plan</code> resource type. Note the use of the data block reference to insert the subscription ID, rather than hard-coding it.</p>
<pre><code>import {
  id = "/subscriptions/${data.azurerm_client_config.client.subscription_id}/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"
  to = azurerm_service_plan.plan
}

resource "azurerm_service_plan" "plan" {
  name                = "plan-tfupgrade-australiasoutheast"
  resource_group_name = data.azurerm_resource_group.group.name
  location            = data.azurerm_resource_group.group.location
  sku_name            = "B1"
  os_type             = "Linux"
}
</code></pre>
<p>Often when you're adding the new resource, the property names and 'shape' will change. Sometimes it's pretty easy to figure out the equivalent, but sometimes you might need some help. One option you can utilise is <a href="https://developer.hashicorp.com/terraform/language/import/generating-configuration">generating the configuration</a>.</p>
<p>In this case, you add the <code>import</code> block, but don't add the resource block. If you then run <code>terraform plan -generate-config-out=generated_resources.tf</code>, Terraform will create a new file, <code>generated_resources.tf</code>, containing the generated resources. You can then copy/paste those into your regular .tf files. You'll almost certainly want to edit them to remove redundant settings and replace hard-coded values with variable references where applicable. If you're doing this as part of a pipeline, publish the generated file as a build artifact so you can download it and incorporate the changes. You could make this an optional part of the pipeline, enabled by setting a pipeline parameter to true.</p>
<p>There's still one problem to solve though. While we've mapped the new resource types to the existing resources, Terraform state still knows about the old resource types, and will try to delete them now that they are no longer defined in the Terraform configuration. To solve this, we can use the <a href="https://www.terraform.io/cli/commands/state/rm"><code>terraform state rm</code></a> command to remove the old resources from state. If they're not in state, then Terraform doesn't know about them and won't try to delete them.</p>
<p>The following script will remove the old resources from state if they exist. Note that the <code>terraform state rm</code> command will fail if the resource doesn't exist in state, so we need to check for the existence of the resource first.</p>
<pre><code># Remove state of old resources from Terraform
mapfile -t RESOURCES &lt; &lt;( terraform state list )

if [[ " ${RESOURCES[*]} " =~ " azurerm_app_service_plan.plan " ]]; then
  terraform state rm azurerm_app_service_plan.plan
fi

if [[ " ${RESOURCES[*]} " =~ " azurerm_app_service.appservice " ]]; then
  terraform state rm azurerm_app_service.appservice
fi
</code></pre>
<p>You will need to add an entry in this script for each Terraform resource type that you are removing.</p>
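<p>The membership test can be exercised in isolation. A minimal sketch against sample data (the <code>contains</code> helper and the second resource name are illustrative); padding both the joined list and the pattern with spaces means only whole resource addresses match, not prefixes:</p>
<pre><code># Simulate the output of 'terraform state list'
RESOURCES=("azurerm_app_service_plan.plan" "azurerm_app_service.appservice")

contains() {
  # Pad the joined list and the pattern with spaces so only
  # whole resource addresses match
  [[ " ${RESOURCES[*]} " =~ " $1 " ]]
}

if contains "azurerm_app_service_plan.plan"; then
  echo "would remove plan"
fi

if ! contains "azurerm_app_service_plan.pl"; then
  echo "prefix alone does not match"
fi
</code></pre>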
<h2>Testing</h2>
<p>Ok, so we have a strategy for upgrading our Terraform resources. But how do we test it? We don't want to just merge the changes into the main branch and hope for the best. We want to test it first, and do so in isolation from other changes that might be happening in the main branch (or other branches). To test our changes, we need to update the Terraform state. But if we update the state used by everyone else, we won't be popular when their builds start failing because Terraform will be trying to recreate resources that we've just deleted from state. Except those resources still exist in Azure!</p>
<p>What we want is a local copy of the Terraform state that we can try out our changes in without affecting anyone else. One way to do this is to copy the remote state to a local file, then reinitialise Terraform to use the 'local' backend. Obviously we won't do a real deployment using this, but it is perfect for running <code>terraform plan</code> against.</p>
<p>Here's an example script that will copy the remote state to a local file, then reinitialise Terraform to use the local backend. It assumes that your backend configuration is defined separately in a <a href="https://github.com/flcdrg/terraform-azure-upgrade-resources/blob/main/v3/backend.tf"><code>backend.tf</code> file</a>. Normally this points to a remote backend (e.g. Terraform Cloud or an Azure Storage account); within the pipeline run we replace this file with configuration that uses a local backend.</p>
<pre><code>terraform state pull &gt; $(Build.ArtifactStagingDirectory)/pull.tfstate

cat &gt; backend.tf &lt;&lt;EOF
terraform {
  backend "local" {
    path = "$(Build.ArtifactStagingDirectory)/pull.tfstate"
  }
}
EOF

# Reset Terraform to use local backend
terraform init -reconfigure -no-color -input=false
</code></pre>
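<p>Outside a pipeline, the backend swap can be reproduced locally. A minimal sketch (the directory path is a stand-in for <code>$(Build.ArtifactStagingDirectory)</code>, which Azure Pipelines expands before the script runs, and <code>printf | tee</code> stands in for the heredoc):</p>
<pre><code>set -euo pipefail

# Stand-in for the pipeline's artifact staging directory
STAGING_DIR="/tmp/tf-staging"
mkdir -p "$STAGING_DIR"

# Overwrite backend.tf to point Terraform at a local state file
printf 'terraform {\n  backend "local" {\n    path = "%s/pull.tfstate"\n  }\n}\n' "$STAGING_DIR" | tee backend.tf

# A real run would first pull the state into that file, then re-run
# 'terraform init -reconfigure' so Terraform picks up the local backend
</code></pre>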
<p>We should now have all the pieces in place to test our changes on the PR build, and then once we're happy with the plan, merge the changes and run the migration for real. If you have multiple environments (dev/test/prod), then you can roll this out to each environment as part of the normal release process.</p>
<p>For Azure Pipelines, we make use of conditional expressions, so that on PR builds we test the migration using local state, but on the main branch we modify the remote state and actually apply the changes.</p>
<p>Here's the Azure Pipeline in full (<a href="https://github.com/flcdrg/terraform-azure-upgrade-resources/blob/main/upgrade.yml">src</a>):</p>
<pre><code>trigger: none

pr:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

variables:
  - group: Terraform-Token

jobs:
  - job: build
    displayName: "Test Terraform Upgrade"

    variables:
      TerraformSourceDirectory: $(System.DefaultWorkingDirectory)/v3

    steps:
      - script: echo "##vso[task.setvariable variable=TF_TOKEN_app_terraform_io]$(TF_TOKEN)"
        displayName: "Terraform Token"

      - task: TerraformInstaller@2
        displayName: "Terraform: Installer"
        inputs:
          terraformVersion: "latest"

      - task: TerraformCLI@2
        displayName: "Terraform: init"
        inputs:
          command: init
          workingDirectory: "$(TerraformSourceDirectory)"
          backendType: selfConfigured
          commandOptions: -no-color -input=false
          allowTelemetryCollection: false

      - ${{ if ne(variables['Build.SourceBranch'], 'refs/heads/main') }}:
          # Copy state from Terraform Cloud to local, so we can modify it without affecting the remote state
          - script: |
              terraform state pull &gt; $(Build.ArtifactStagingDirectory)/pull.tfstate

              # Write multiple lines of text to local file using bash
              cat &gt; backend.tf &lt;&lt;EOF
              terraform {
                backend "local" {
                  path = "$(Build.ArtifactStagingDirectory)/pull.tfstate"
                }
              }
              EOF

              # Reset Terraform to use local backend
              terraform init -reconfigure -no-color -input=false
            displayName: "Script: Use Terraform Local Backend"
            workingDirectory: $(TerraformSourceDirectory)

      - script: |
          # Remove state of old resources from Terraform
          mapfile -t RESOURCES &lt; &lt;( terraform state list )

          if [[ " ${RESOURCES[*]} " =~ " azurerm_app_service_plan.plan " ]]; then
            terraform state rm azurerm_app_service_plan.plan
          fi

          if [[ " ${RESOURCES[*]} " =~ " azurerm_app_service.appservice " ]]; then
            terraform state rm azurerm_app_service.appservice
          fi
        displayName: "Script: Remove old resources from Terraform State"
        workingDirectory: $(TerraformSourceDirectory)

      - task: TerraformCLI@2
        displayName: "Terraform: validate"
        inputs:
          command: validate
          workingDirectory: "$(TerraformSourceDirectory)"
          commandOptions: -no-color

      - ${{ if ne(variables['Build.SourceBranch'], 'refs/heads/main') }}:
          - task: TerraformCLI@2
            displayName: "Terraform: plan"
            inputs:
              command: plan
              workingDirectory: "$(TerraformSourceDirectory)"
              commandOptions: -no-color -input=false -detailed-exitcode
              environmentServiceName: Azure MSDN - rg-tfupgrade-australiasoutheast
              publishPlanResults: Plan
              allowTelemetryCollection: false

      - ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
          - task: TerraformCLI@2
            displayName: "Terraform: apply"
            inputs:
              command: apply
              workingDirectory: "$(TerraformSourceDirectory)"
              commandOptions: -no-color -input=false -auto-approve
              allowTelemetryCollection: false
</code></pre>
<h2>Using it in practice</h2>
<p>Ideally when you migrate the resource types, there will be no changes to the properties (and Terraform will report that no changes need to be made). Often the new resource type provides additional properties that you can take advantage of. Whether you set those initially or in a subsequent PR is up to you.</p>
<p>Here's an example output from terraform plan:</p>
<pre><code>Terraform v1.6.4
on linux_amd64
Initializing plugins and modules...
data.azurerm_resource_group.group: Refreshing...
data.azurerm_client_config.client: Refreshing...
data.azurerm_client_config.client: Refresh complete after 0s [id=xxxxxxxxxxx=]
data.azurerm_resource_group.group: Refresh complete after 0s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast]
azurerm_service_plan.plan: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast]
azurerm_linux_web_app.appservice: Refreshing state... [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/sites/appservice-tfupgrade-australiasoutheast]

Terraform will perform the following actions:

  # azurerm_linux_web_app.appservice will be imported
    resource "azurerm_linux_web_app" "appservice" {
        app_settings                                   = {
            "TEST" = "TEST"
        }
        client_affinity_enabled                        = false
        client_certificate_enabled                     = false
        client_certificate_mode                        = "Required"
        custom_domain_verification_id                  = (sensitive value)
        default_hostname                               = "appservice-tfupgrade-australiasoutheast.azurewebsites.net"
        enabled                                        = true
        ftp_publish_basic_authentication_enabled       = true
        https_only                                     = true
        id                                             = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/sites/appservice-tfupgrade-australiasoutheast"
        key_vault_reference_identity_id                = "SystemAssigned"
        kind                                           = "app,linux"
        location                                       = "australiasoutheast"
        name                                           = "appservice-tfupgrade-australiasoutheast"
        outbound_ip_address_list                       = [
            "52.189.223.107",
            "13.77.42.25",
            "13.77.46.217",
            "52.189.221.141",
            "13.77.50.99",
        ]
        outbound_ip_addresses                          = "52.189.223.107,13.77.42.25,13.77.46.217,52.189.221.141,13.77.50.99"
        possible_outbound_ip_address_list              = [
            "52.189.223.107",
            "13.77.42.25",
            "13.77.46.217",
            "52.189.221.141",
            "52.243.85.201",
            "52.243.85.94",
            "52.189.234.152",
            "13.77.56.61",
            "52.189.214.112",
            "20.11.210.198",
            "20.211.233.197",
            "20.211.238.191",
            "20.11.210.187",
            "20.11.211.1",
            "20.11.211.80",
            "4.198.70.38",
            "20.92.41.250",
            "4.198.68.27",
            "4.198.68.42",
            "20.92.47.59",
            "20.92.42.78",
            "13.77.50.99",
        ]
        possible_outbound_ip_addresses                 = "52.189.223.107,13.77.42.25,13.77.46.217,52.189.221.141,52.243.85.201,52.243.85.94,52.189.234.152,13.77.56.61,52.189.214.112,20.11.210.198,20.211.233.197,20.211.238.191,20.11.210.187,20.11.211.1,20.11.211.80,4.198.70.38,20.92.41.250,4.198.68.27,4.198.68.42,20.92.47.59,20.92.42.78,13.77.50.99"
        public_network_access_enabled                  = true
        resource_group_name                            = "rg-tfupgrade-australiasoutheast"
        service_plan_id                                = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"
        site_credential                                = (sensitive value)
        tags                                           = {}
        webdeploy_publish_basic_authentication_enabled = true

        identity {
            identity_ids = []
            principal_id = "a71e1fd5-e61b-4591-a439-98bad90cc837"
            tenant_id    = "59b0934d-4f35-4bff-a2b7-a451fe5f8bd6"
            type         = "SystemAssigned"
        }

        site_config {
            always_on                               = true
            auto_heal_enabled                       = false
            container_registry_use_managed_identity = false
            default_documents                       = []
            detailed_error_logging_enabled          = false
            ftps_state                              = "Disabled"
            health_check_eviction_time_in_min       = 0
            http2_enabled                           = true
            linux_fx_version                        = "DOTNETCORE|6.0"
            load_balancing_mode                     = "LeastRequests"
            local_mysql_enabled                     = false
            managed_pipeline_mode                   = "Integrated"
            minimum_tls_version                     = "1.2"
            remote_debugging_enabled                = false
            remote_debugging_version                = "VS2019"
            scm_minimum_tls_version                 = "1.2"
            scm_type                                = "VSTSRM"
            scm_use_main_ip_restriction             = false
            use_32_bit_worker                       = false
            vnet_route_all_enabled                  = false
            websockets_enabled                      = false
            worker_count                            = 1

            application_stack {
                docker_registry_password = (sensitive value)
                dotnet_version           = "6.0"
            }
        }
    }

  # azurerm_service_plan.plan will be imported
    resource "azurerm_service_plan" "plan" {
        id                           = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-tfupgrade-australiasoutheast/providers/Microsoft.Web/serverfarms/plan-tfupgrade-australiasoutheast"
        kind                         = "linux"
        location                     = "australiasoutheast"
        maximum_elastic_worker_count = 1
        name                         = "plan-tfupgrade-australiasoutheast"
        os_type                      = "Linux"
        per_site_scaling_enabled     = false
        reserved                     = true
        resource_group_name          = "rg-tfupgrade-australiasoutheast"
        sku_name                     = "B1"
        tags                         = {}
        worker_count                 = 1
        zone_balancing_enabled       = false
    }

Plan: 2 to import, 0 to add, 0 to change, 0 to destroy.
</code></pre>
<p>In the next post, I'll cover a few things to watch out for, and some post-migration clean-up steps.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/11/terraform-init-error</id>
    <updated>2023-11-27T09:00:00.000+10:30</updated>
    <title>Terraform command &apos;init&apos; failed with exit code &apos;1&apos;</title>
    <link href="https://david.gardiner.net.au/2023/11/terraform-init-error" rel="alternate" type="text/html" title="Terraform command &apos;init&apos; failed with exit code &apos;1&apos;"/>
    <category term="Azure Pipelines"/>
    <category term="DevOps"/>
    <category term="Terraform"/>
    <published>2023-11-27T09:00:00.000+10:30</published>
    <summary type="html">
      <![CDATA[I'm setting up a new GitHub repo to demonstrate using Terraform with Azure Pipelines via the CLI and I hit a weird error right at the start. I was using Jason Johnson's Azure Pipelines Terraform Tasks extension like this: and it kept failing with the error:]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/11/terraform-init-error">
      <![CDATA[<p>I'm setting up a new GitHub repo to demonstrate using Terraform with Azure Pipelines via the CLI and I hit a weird error right at the start. I was using <a href="https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform">Jason Johnson's Azure Pipelines Terraform Tasks</a> extension like this:</p>
<pre><code>- task: TerraformCLI@2
  inputs:
    command: init
    workingDirectory: "$(System.DefaultWorkingDirectory)/v2"
    backendType: selfConfigured
    commandOptions: -no-color -input=false
    allowTelemetryCollection: false
</code></pre>
<p>and it kept failing with the error:</p>
<pre><code>/opt/hostedtoolcache/terraform/1.6.4/x64/terraform version
Terraform v1.6.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.99.0
+ provider registry.terraform.io/hashicorp/random v3.5.1
/opt/hostedtoolcache/terraform/1.6.4/x64/terraform init --input=false --no-color
Usage: terraform [global options] init [options]
...
Terraform command 'init' failed with exit code '1'.
</code></pre>
<p>I tried all sorts of things. It looked identical to other pipelines I had working (and ones I'd seen online). Was there a weird invisible character being passed on the command line? Out of desperation I copied the <code>commandOptions</code> line from another pipeline (which looked identical except that the arguments were in a different order).</p>
<p>But when I looked at the diff, not only was the argument order different, I realised that the dashes were different too! Terraform CLI arguments use a single dash, not a double dash. So the correct line is</p>
<pre><code>commandOptions: -no-color -input=false
</code></pre>
<p>In hindsight, I realised that this is the first pipeline I've written on GitHub using the Terraform CLI (e.g. not in a work context) and so I did it manually, rather than copy/pasting from an existing (working) pipeline. A pity Terraform doesn't support double dashes, but there you go.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/terraform-logo.CiRDK2M7.png"/>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/06/propertycollection</id>
    <updated>2023-06-20T22:30:00.000+09:30</updated>
    <title>Azure DevOps API PropertiesCollections</title>
    <link href="https://david.gardiner.net.au/2023/06/propertycollection" rel="alternate" type="text/html" title="Azure DevOps API PropertiesCollections"/>
    <category term="Azure DevOps"/>
    <category term="Azure Pipelines"/>
    <category term="PowerShell"/>
    <published>2023-06-20T22:30:00.000+09:30</published>
    <summary type="html">
      <![CDATA[I was looking at some of the Azure DevOps API documentation and noticed that some of the endpoints mention a properties object of type PropertiesCollection. Unfortunately, the details for that data structure are not particularly helpful, and I couldn't figure out how to use it. Some pages include examples, but none that I could find included an expanded properties object. To figure out how to use it, I created a simple .NET console application. I added references to the following NuGet packages: …]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/06/propertycollection">
      <![CDATA[<p>I was looking at some of the Azure DevOps API documentation and noticed that some of the endpoints mention a <code>properties</code> object of type <a href="https://learn.microsoft.com/rest/api/azure/devops/build/builds/list?view=azure-devops-rest-7.0&amp;WT.mc_id=DOP-MVP-5001655#propertiescollection"><code>PropertiesCollection</code></a>. Unfortunately, the details for that data structure are not particularly helpful, and I couldn't figure out how to use it. Some pages include examples, but none that I could find included an expanded <code>properties</code> object.</p>
<p>To figure out how to use it, I created a simple .NET console application. I added references to the following NuGet packages:</p>
<ul>
<li>Microsoft.TeamFoundationServer.Client</li>
<li>Microsoft.VisualStudio.Services.InteractiveClient</li>
<li>Microsoft.VisualStudio.Services.Release.Client</li>
</ul>
<pre><code>using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.ReleaseManagement.WebApi.Clients;
using Microsoft.VisualStudio.Services.WebApi;

const string collectionUri = "https://dev.azure.com/organisation";
const string projectName = "MyProject";
const string pat = "YOUR-PAT-HERE";
const int releaseId = 20;

var creds = new VssBasicCredential(string.Empty, pat);

// Connect to Azure DevOps Services
var connection = new VssConnection(new Uri(collectionUri), creds);

using var client = connection.GetClient&lt;ReleaseHttpClient&gt;();

// Get data about a specific release
var release = await client.GetReleaseAsync(projectName, releaseId);

release.Properties.Add("Thing", "hey");

// Send the updated release back to Azure DevOps Services
var result = await client.UpdateReleaseAsync(release!, projectName, releaseId);

Console.WriteLine();
</code></pre>
<p>This allowed me to create a property key and value, which I could then examine by querying the item (in this case a 'classic' release) via the GET endpoint, e.g.</p>
<pre><code>https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/{releaseId}?propertyFilters=Thing&amp;api-version=7.0
</code></pre>
<p>Note that you need to specify the <code>propertyFilters</code> parameter. Otherwise the <code>properties</code> object will not be included in the response.</p>
<p>And in doing that, we can see the JSON data structure!</p>
<pre><code>    "properties": {
        "Thing": {
            "$type": "System.String",
            "$value": "hey"
        }
    }
</code></pre>
<p>So, to add a property, you need to add a new key/value pair to the <code>properties</code> object, where the key is the name of the property, and the value is an object with two properties: <code>$type</code> and <code>$value</code>. The <code>$type</code> property is the type of the value, and the <code>$value</code> property is the value itself.</p>
<p>The documentation clarifies the types supported:</p>
<blockquote>
<p>Values of type Byte[], Int32, Double, DateType and String preserve their type, other primitives are retuned as a String. Byte[] expected as base64 encoded string.</p>
</blockquote>
<p>(I think 'DateType' is a typo, and should be 'DateTime')</p>
<p>Now that we know the shape of the data, I can jump back to PowerShell and use that to add a new property:</p>
<pre><code>$uri = "https://vsrm.dev.azure.com/$($organisation)/$($project)/_apis/release/releases/$($releaseId)?api-version=7.0&amp;propertyFilters=Extra"

$result = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers

if (-not ($result.properties.Extra)) {
    $result.properties | Add-Member -MemberType NoteProperty -Name "Extra" -Value @{
        "`$type" = "System.String"
        "`$value" = "haaaa"
    }
}
$body = $result | ConvertTo-Json -Depth 20

"Updating via PUT"

Invoke-RestMethod -Uri $uri -Method Put -Headers $headers -Body $body -ContentType "application/json"
</code></pre>
]]>
    </content>
  </entry>
  <entry>
    <id>https://david.gardiner.net.au/2023/02/list-azure-pipelines-and-yaml</id>
    <updated>2023-02-17T12:00:00.000+10:30</updated>
    <title>Get a list of Azure Pipelines and YAML files</title>
    <link href="https://david.gardiner.net.au/2023/02/list-azure-pipelines-and-yaml" rel="alternate" type="text/html" title="Get a list of Azure Pipelines and YAML files"/>
    <category term="Azure DevOps"/>
    <category term="Azure Pipelines"/>
    <category term="PowerShell"/>
    <published>2023-02-17T12:00:00.000+10:30</published>
    <summary type="html">
      <![CDATA[I wanted to document the pipelines in a particular Azure DevOps project. Rather than manually write up the name of each pipeline and the corresponding YAML file, I figured there must be a way to query that data. I've done something similar in the past using the Azure DevOps REST API, but this time I'm using the Azure CLI.]]>
    </summary>
    <content type="html" xml:base="https://david.gardiner.net.au/2023/02/list-azure-pipelines-and-yaml">
      <![CDATA[<p>I wanted to document the pipelines in a particular Azure DevOps project. Rather than manually write up the name of each pipeline and the corresponding YAML file, I figured there must be a way to query that data.</p>
<p>I've done <a href="/2021/06/azure-devops-list-pipelines">something similar in the past using the Azure DevOps REST API</a>, but this time I'm using the <a href="https://learn.microsoft.com/cli/azure/what-is-azure-cli?view=azure-cli-latest&amp;WT.mc_id=DOP-MVP-5001655">Azure CLI</a>.</p>
<p>Make sure you have the <a href="https://learn.microsoft.com/cli/azure/service-page/azure%20devops?view=azure-cli-latest&amp;WT.mc_id=DOP-MVP-5001655"><code>devops</code></a> extension installed (<code>az extension add --name azure-devops</code> if you don't have it already). The commands provided by this extension use the same REST API under the hood that we used directly last time.</p>
<p>I can get a list of pipelines for the current project with <a href="https://learn.microsoft.com/cli/azure/pipelines?view=azure-cli-latest&amp;WT.mc_id=DOP-MVP-5001655#az-pipelines-list"><code>az pipelines list</code></a>.</p>
<p>This command returns a list of objects corresponding to the <a href="https://learn.microsoft.com/rest/api/azure/devops/build/definitions/list?view=azure-devops-rest-7.1&amp;WT.mc_id=DOP-MVP-5001655#builddefinitionreference">BuildDefinitionReference</a> data structure. While it has the pipeline name, I noticed it doesn't include any information about the YAML file. To get that, you need to query an individual pipeline using:</p>
<pre><code>az pipelines show --name PipelineName
</code></pre>
<p>This produces a <a href="https://learn.microsoft.com/rest/api/azure/devops/build/definitions/get?view=azure-devops-rest-7.1&amp;WT.mc_id=DOP-MVP-5001655#builddefinition">BuildDefinition</a> object, which happens to include a <code>process</code> property. While it isn't documented in the <a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/build/definitions/get?view=azure-devops-rest-7.1&amp;WT.mc_id=DOP-MVP-5001655#buildprocess">BuildProcess</a> data structure, if you look at the actual data you'll see not only the <code>type</code> property but a <code>yamlFilename</code> property, which is just what we want.</p>
<pre><code>"process": {
    "type": 2,
    "yamlFilename": "release.yaml"
  }
</code></pre>
<p>Putting it all together, and taking advantage of the <a href="https://learn.microsoft.com/cli/azure/query-azure-cli?WT.mc_id=DOP-MVP-5001655">JMESPath query</a> to limit which fields we get back, I can produce a comma-separated list of pipeline names and their corresponding YAML files with the following:</p>
<pre><code>(az pipelines list --query "[].name" --query-order NameAsc -o tsv) | % { (az pipelines show --name $_ --query "[name, process.yamlFilename]" | ConvertFrom-Json) -join "," }
</code></pre>
<p>So for this project:</p>
<p><img src="https://david.gardiner.net.au/_astro/azure-pipelines-all.wXh0uce2_1q3KMw.webp" alt="Screenshot of Azure Pipelines in a project" /></p>
<p>You get this:</p>
<pre><code>Alternate,alternate.yml
ReleaseTest,release.yaml
</code></pre>
<p>This will be more useful in a project with many more pipelines. If the project has multiple repositories, you could include the repository name as well.</p>
<p>e.g.</p>
<pre><code>(az pipelines list --query "[].name" --query-order NameAsc -o tsv) | % { (az pipelines show --name $_ --query "[name, process.yamlFilename, repository.name]" | ConvertFrom-Json) -join "," }
</code></pre>
<p>Such a project would produce something similar to this:</p>
<pre><code>Custom Git,azure-pipelines.yml,Repro
Repro,azure-pipelines.yml,Repro
task-test,azure-pipelines.yml,task-test
</code></pre>
<p>You can include extra columns of data as needed.</p>
]]>
    </content>
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
    <media:content medium="image" xmlns:media="http://search.yahoo.com/mrss/" url="https://david.gardiner.net.au/_astro/azure-pipelines-logo.B45UakAg.png"/>
  </entry>
</feed>
