January 2021

By Nigel Meakins

Terraform and Azure DevOps Tips and Tricks

This entry is part 9 of 9 in the series Terraform on Azure

In this final post in the series I thought it’d be useful to share some of the little tips and tricks that I’ve come across when using Azure DevOps and Terraform. So without delay, and in no particular order, here are my favourites.

Capturing Terraform Values within DevOps Variables and Obfuscating Secrets

You may find yourself in a situation where you need to capture a Terraform resource configuration value to an Azure DevOps variable, typically for use in a separate downstream task outside of Terraform.

Setting DevOps pipeline variables from within Terraform can be easily achieved using PowerShell and the local-exec provisioner. The following Terraform resource will capture the configuration values for you into DevOps variables.

resource "null_resource" "terraform-to-devops-vars" {

   triggers = {
        // always execute
        uuid_trigger = "${uuid()}"        
   }

   provisioner "local-exec" {
    command = <<EOT
        Write-Host "##vso[task.setvariable variable=ServicePrincipleId]${azuread_service_principal.my_app.id}"
        Write-Host "##vso[task.setvariable variable=ServicePrinciplePassword]${azuread_service_principal_password.my_app.value}"
        EOT

    interpreter = ["PowerShell", "-Command"]
  }
}

The trigger will always fire, as it uses the uuid() function, which generates a new value on every run.
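As an aside, the timestamp() function achieves the same ‘always execute’ effect if you prefer it, since it too returns a different value on each run. A minimal alternative trigger would be:

   triggers = {
        // timestamp() changes on every run, so the provisioner always executes
        always_run = "${timestamp()}"
   }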

Somebody Call Security…

If we leave things as they are, however, we will expose our variables in the DevOps output, such as the pipeline execution log, which for values such as secrets creates a security concern.

There is a native DevOps solution to this: the issecret property on the task.setvariable logging command, as below.

Write-Host "##vso[task.setvariable variable=DatabricksSecret;IsSecret=true]${azuread_service_principal_password.databricks.value}"

This avoids any ‘leaky values’ and allows variables to capture Terraform values safely for use within the pipeline, with no unwanted exposure.
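To illustrate consuming these captured values further down the pipeline, a subsequent PowerShell task could receive them using the $(VariableName) macro syntax. The wiring below is hypothetical, but note one real constraint: secret variables are not mapped into the task environment automatically, so they must be passed in explicitly, and DevOps will mask the secret if it appears in log output.

param(
    [string] $ServicePrincipalId,       # passed as $(ServicePrincipalId)
    [string] $ServicePrincipalPassword  # passed as $(ServicePrincipalPassword); masked in logs
)

# Use the captured Terraform values in whatever downstream work is required.
Write-Host "Authenticating as service principal $ServicePrincipalId..."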

Tracking File Changes using MD5 Hashes

Terraform does a great job of determining which resources have changed and need to be updated whenever a plan or apply action is run. There are times, however, when you will want to include other files in your Terraform configurations, such as when using a JSON file to store a REST payload for use in a script. To determine whether resources that use these files need to be included in the deployment, we can check the MD5 hash of each file to see whether it has changed. To redeploy the resource when the file has changed, we use a trigger that employs the filemd5() function, as below:

resource "null_resource" "exec_some_rest_with_payload_file" {
  triggers = {
    some_payload_changed                = "${filemd5("${path.module}\\..\\Payloads\\SomePayload.json")}"
  }

  provisioner "local-exec" {
    command = <<EOT
      .'${path.module}\..\Scripts\REST\ExecuteSomeRest.ps1' `
      -ApiRootUrl "https://${var.location}.${var.some_api_root_url_suffix}" `
      -SubscriptionId "${var.subscription_id}" `
      -TenantId "${var.tenant_id}" `
      -ApplicationId "${var.client_id}" `
      -Secret "${var.client_secret}" `
      -Payload ""${path.module}\..\Payloads\SomePayload.json"
      EOT

      interpreter = ["PowerShell", "-Command"]
    }
  }

This ensures that changes to related files used within our deployment are treated in a similar manner to changes to Terraform resource definitions. Nothing too clever about this one, and it’s not really Azure DevOps-specific, just out-of-the-box native Terraform; all the same, it’s something very useful that you may not be aware of.

Substituting Resource-Specific Outputs into Non-Terraform Files

Of all these Terraform and Azure DevOps tips and tricks, this is the one I find most useful; I’ve used it a lot when API calls are involved in a deployment. There are plenty of occasions where we find ourselves using scripts for elements of our deployment, and often we will supply a script with a JSON file or similar that contains a number of Terraform resource attributes for use in the script. A classic example is the above payload for a REST request body. These values may not be available until deployment time, however, such as when they come from resource module outputs generated at creation time, like platform-specific unique ids. Hmmm, what’s a Terraformer to do?

Detokenising to the Rescue

A common technique often used with application or web .config files in the DotNet world is to use placeholder tokens in the config files and then replace these with the required configuration values that are passed in at deployment time. This ‘detokenising’ approach can be employed within Terraform as well. Here’s a simple example of a placeholder from such a file,

"some_platform_resource_id": "#{some_resource_id_as_output}#"

where we have used the ‘#{‘ and ‘}#’ character sequences to demarcate our placeholders.
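To put that in context, a payload file such as the SomePayload.json referenced elsewhere in this post might look something like the following hypothetical example, mixing static values with placeholders:

{
    "name": "some-platform-resource",
    "some_platform_resource_id": "#{some_resource_id_as_output}#",
    "secret_value": "#{some_config_from_secret}#"
}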

We can replace the placeholder tokens in the file using a simple script such as the PowerShell shown below.

param(
    [string] $BaseFilePath,
    [string] $FileFilters,
    [PSCustomObject] $TokenValues
)


Function Update-FileTokens {
    param(
        [string] $BaseFilePath,
        [string] $FileFilters,
        [PSCustomObject] $TokenValues
    )
    <#
        .SYNOPSIS
        Updates placeholder values in a group of files with their replacements.
        .DESCRIPTION
        Calls the Update-Tokens procedure for the files at the base path that match the name filters.
        .PARAMETER BaseFilePath
        The path from which to include files, including all subfolders.
        .PARAMETER FileFilters
        A CSV string of the filters to apply to file names.
        .PARAMETER TokenValues
        A hashtable of tokens and the values to replace them with.
    #>
    foreach ($filter in $FileFilters.Split(',')) {
        $fileNames = Get-ChildItem -Path $BaseFilePath -Recurse -Filter $filter | Select-Object FullName

        foreach ($fileName in $fileNames) {
            Write-Host "Started replacing tokens in $($fileName.FullName)."
            Update-Tokens -FilePath $fileName.FullName -TokenValues $TokenValues
            Write-Host "Finished replacing tokens in $($fileName.FullName)."
        }
    }
}

Function Update-Tokens {
    param(
        [string] $FilePath,
        [PSCustomObject] $TokenValues
    )
    <#
        .SYNOPSIS
        Updates placeholder token values in a single file with their replacements.
        .DESCRIPTION
        Replaces each token key found in the file content with its corresponding value.
        .PARAMETER FilePath
        The path of the file for token replacements.
        .PARAMETER TokenValues
        A hashtable of tokens and the values to replace them with.
    #>
    $content = (Get-Content -Path $FilePath)

    $TokenValues.GetEnumerator() | ForEach-Object {
        # note that -replace matches on the token key as a regex; the '#{' and '}#'
        # demarcation used here contains no active metacharacters, so it matches literally
        $content = $content -replace $_.Key, $_.Value
    }

    Set-Content -Value $content -Path $FilePath
}

Update-FileTokens -BaseFilePath $BaseFilePath -FileFilters $FileFilters -TokenValues $TokenValues

We pass in a hash table keyed on the placeholder tokens that we want to replace, such as ‘#{some_resource_id_as_output}#‘ above, with the values of the entries being the replacements we want to substitute in. The above script will update the placeholders with their values in all files that match the BaseFilePath and FileFilters. Pretty straightforward stuff.
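For instance, you can sanity-check the script locally, before wiring it into Terraform, by calling it directly with some illustrative values (the paths and token values below are purely hypothetical):

$tokens = @{
    '#{some_resource_id_as_output}#' = '/some/platform/resource/id'
    '#{some_config_from_var}#'       = 'some-storage-mount-name'
}
.\Update-FileTokens.ps1 -BaseFilePath '..\Payloads' -FileFilters '*.payload.json' -TokenValues $tokens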

In order to execute this within Terraform, with the required substitutions made at runtime, we can again use the local-exec provisioner with a PowerShell interpreter, constructing the hash table parameter from our resource attributes and variables and passing it in to the script call. Referencing the module resource attributes ensures that the replacements are triggered only after these values have become available, so we don’t need any ‘depends_on’ clauses. The following resource snippet shows an example of these placeholders in action.

resource "null_resource" "update-file-tokens-payload-json" {
    triggers = {
        // always execute
        value = "${uuid() }"
    }

   provisioner "local-exec" {
       command = <<EOT
       .'${path.module}\..\scripts\util\Update-FileTokens.ps1' `
       -BaseFilePath '${path.module}\..' `
       -FileFilters '*.payload.json' `
       -TokenValues @{ 
            '#{some_config_from_var}#' = "${var.dbr_executable_storage_mount_name}" 
            '#{some_resource_id_as_output}#' = "${azurerm_template_deployment.some-arm.some-id-as-output}"
            '#{some_config_from_secret}#' = "${var.some-secret-value}"
       }
       EOT
      
       interpreter = ["PowerShell", "-Command"]
   } 
}

Once our required file has been processed using our Update-FileTokens.ps1 script, we can use the filemd5() trigger approach shown above to determine whether any resources that use this file need to be redeployed. If the file content has been changed by the detokenising, the resource will be redeployed as required.
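Putting the two tips together, a minimal sketch, reusing the resource and file names from the examples above, might look like the following. Here the depends_on clause ensures the detokenising step runs before anything that consumes the payload within the same apply.

resource "null_resource" "exec_rest_with_detokenised_payload" {
  // ensure the placeholder replacement resource above runs first
  depends_on = [null_resource.update-file-tokens-payload-json]

  triggers = {
    // redeploy only when the payload file content has changed
    payload_changed = "${filemd5("${path.module}\\..\\Payloads\\SomePayload.json")}"
  }

  // local-exec provisioner calling the REST script, as in the earlier example
}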

Adopting this approach is very useful when using REST API calls with JSON payloads for some elements of the Terraform deployment process. We can keep the payloads in their own JSON files, with any references to Terraform resource outputs and the like as placeholders. Provided we call our Update-FileTokens.ps1 script before these JSON files are used, we are able to treat these API calls like other resource definitions.

Summing Up

Thanks for reading. Quite a long one this time, but I do hope the above Terraform and Azure DevOps tips and tricks prove to be of use to you Terraformers out there. Adding these strings to your bow may just help with those situations where Terraform doesn’t immediately offer up an obvious solution to realising your infrastructure management needs.

If you have any helpful techniques or simple tricks and tips to add or any questions on the above I’d love to hear about them in the comments below.

And That’s a Wrap

That winds up this series on Terraform on Azure. I’ve really enjoyed sharing my thoughts, opinions and experiences of this great combination of tooling that really empowers you on your Azure journeys. Over to you to stake your claim in the Cloud. May your deployments be idempotent, your Infrastructure as Code transparent and your solutions, well, just plain amazing.
