Howdy folks,
if you have recently attended one of my talks or workshops, you know that in my opinion DevOps, infrastructure as code, and automated deployments are essential for security in cloud environments. For example, you can only access an Azure Key Vault secret during a VM deployment if you deploy programmatically instead of through the Azure portal. You can choose whatever tool you want; in this post, I'm going to focus on PowerShell, ARM templates, and Terraform.
PowerShell
Some time ago, I published a blog post about how to securely deploy an Azure VM using PowerShell. During the deployment process you can access a Key Vault secret and use it as the local admin password for the virtual machine.
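The core idea looks roughly like this (a minimal sketch using the Az PowerShell module; the vault name, secret name, and VM details are placeholders):

# Read the secret from the Key Vault
$secret = Get-AzKeyVaultSecret -VaultName "myKeyVaultName" -Name "mySecretName"

# Build a credential object for the VM's local admin account
$cred = New-Object System.Management.Automation.PSCredential("labuser", $secret.SecretValue)

# Use the credential in the VM's operating system profile
$vm = New-AzVMConfig -VMName "myvm" -VMSize "Standard_B2ms"
$vm = Set-AzVMOperatingSystem -VM $vm -Windows -ComputerName "myvm" -Credential $cred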
ARM templates
With ARM templates, the process gets a bit more complicated. If you have an Azure Key Vault and a secret in it, you need a way to first read the secret and then pass it into the VM creation process. To achieve that, you have to work with linked templates: a main template accesses the Key Vault secret and passes it as a parameter to a linked template in which your infrastructure is deployed. You can find my example templates in my Azure Security GitHub repository.
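As a rough sketch, the main template hands the secret to the linked deployment through a Key Vault reference in its parameters, which Azure Resource Manager resolves at deployment time (subscription ID, resource group, vault, and secret names below are placeholders):

"adminPassword": {
  "reference": {
    "keyVault": {
      "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.KeyVault/vaults/myKeyVaultName"
    },
    "secretName": "mySecretName"
  }
}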
Terraform
Now, here’s the part I’m most enthusiastic about: Secure resource deployments with Terraform.
Terraform is an open-source toolkit for infrastructure-as-code deployments. The beauty is that it comes with some advantages over ARM templates:
- the ability to test deployments before applying changes
- the ability to destroy former resource deployments
- the ability to change existing deployments
- an easier template language
Test your configuration
With the command
terraform plan
you can let Terraform perform a difference check between what already exists in your Azure subscription and what your new configuration describes.
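If you want to make sure that exactly the reviewed changes get applied later, you can also write the plan to a file and feed that file to apply (both are standard Terraform options):

terraform plan -out=tfplan
terraform apply tfplan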
Remove old resources
When you remove a resource's definition from your template files, Terraform will remove the respective Azure resource as soon as you apply the new configuration. So it becomes quite easy to get rid of old resources that are no longer needed. With the command
terraform destroy
you can even destroy whole deployments.
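If you only need to remove a single resource instead of the whole deployment, destroy also accepts a target; the resource address below is a placeholder matching the resource group example further down:

terraform destroy -target=azurerm_resource_group.myterraformresourcegroup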
Change existing deployments
Imagine you have an existing deployment and want to change only parts of it. Do you want to destroy it just to rebuild the environment? With
terraform apply
you can not only deploy new environments but also apply changes to existing deployments.
Easier template language
Lots of administrators and operators I have talked to so far have complained about the difficult JSON syntax that ARM templates come with. This is why most of them chose PowerShell to easily deploy Azure environments. Creating an Azure resource group with an ARM template is quite an effort; in Terraform it's only this:
resource "azurerm_resource_group" "myterraformresourcegroup" {
  name     = "myResourceGroupName"
  location = "westeurope"
}
You can add more information such as tags, but the code above is all you need. Simply store it in a .tf file, run Terraform, and you're done. Well, almost.
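For example, a tagged variant of the same resource group might look like this (the tag names are, of course, just examples):

resource "azurerm_resource_group" "myterraformresourcegroup" {
  name     = "myResourceGroupName"
  location = "westeurope"

  tags = {
    environment = "demo"
    owner       = "tom"
  }
}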
How to begin with Terraform
Terraform needs to "know" how to access your Azure subscription. At the same time, it saves your Azure environment's state in a local .tfstate file by default. The disadvantage here is that passwords you use in your deployment are saved in this .tfstate file, too.
So, the first thing we need to do is prepare our local computer for Terraform. I am using a MacBook, but on a Windows machine you will have to perform similar steps. Terraform needs an Azure AD service principal, which is created using the following bash/Azure CLI commands:
ARM_SUBSCRIPTION_ID=yourSubscriptionID
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/$ARM_SUBSCRIPTION_ID"
Terraform uses this service principal to authenticate against your Azure environment. To enable Terraform to use this information, you need to copy some of the above command's output:
{
  "appId": "yourServicePrincipalID",
  "displayName": "azure-cli-2019-01-24-11-58-24",
  "name": "http://azure-cli-2019-01-24-11-58-24",
  "password": "yourServicePrincipalPassword",
  "tenant": "yourAzureADTenantID"
}
Now you can hand this information to Terraform, either by exporting the following environment variables or by configuring a Terraform provider:
echo "Setting environment variables for Terraform"
export ARM_SUBSCRIPTION_ID=$ARM_SUBSCRIPTION_ID
export ARM_CLIENT_ID=yourServicePrincipalID
export ARM_CLIENT_SECRET=yourServicePrincipalPassword
export ARM_TENANT_ID=yourAzureADTenantID
# Not needed for public, required for usgovernment, german, china
export ARM_ENVIRONMENT=public
To export the variables, run the code above in your bash shell session or store it in your ~/.bash_profile file (on macOS). Alternatively, you can configure a Terraform provider to define access to your Azure subscription. The provider section within a template file tells Terraform to use the Azure provider:
provider "azurerm" {
  subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  client_secret   = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  tenant_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
As I've mentioned above, Terraform stores environment information needed in a deployment, including passwords, in the .tfstate file. Of course, we do not want passwords stored locally on any DevOps engineer's device, so we need to put some more effort into it. As a first step, we can configure an Azure storage account as a Terraform remote backend. The advantage of a remote backend is that DevOps engineers can share a common .tfstate file for a single environment instead of having a separate one on every engineer's machine. Another advantage is that, by default, storage account content is encrypted at rest. The following bash code creates a new Azure resource group named terraformstate and a new storage account with a random name in it:
RESOURCE_GROUP_NAME=terraformstate
STORAGE_ACCOUNT_NAME=tfstate$RANDOM
CONTAINER_NAME=tfstate

# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location westeurope

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

# Get storage account key
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query [0].value -o tsv)

# Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY
Now that you have a storage account and a blob container, you need to make Terraform use this container as its remote backend. To do so, add the following code to your Terraform configuration:
terraform {
  backend "azurerm" {
    storage_account_name = "tfstatexxxxxx"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
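Note that whenever you add or change a backend configuration, Terraform has to initialize it before the next plan or apply:

terraform init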
Of course, you do not want to save your storage account key locally. I have created an Azure Key Vault secret with the storage account key as the secret's value and then added the following line to my ~/.bash_profile file:
export ARM_ACCESS_KEY=$(az keyvault secret show --name mySecretName --vault-name myKeyVaultName --query value -o tsv)
The export command creates an environment variable that lives for as long as the bash session is running. Every time I start a new terminal, the storage account key is read from Azure Key Vault and exported into the bash session. When I close my bash, the key is removed from memory.
Access Key Vault Secrets during deployments
To access a secret from an Azure Key Vault within your deployment template, you simply add a data source to the template file:
data "azurerm_key_vault_secret" "mySecret" {
  name      = "mySecretName"
  vault_uri = "https://myKeyVaultName.vault.azure.net/"
}
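Keep in mind that the service principal Terraform runs as must be allowed to read secrets from that vault. With the Azure CLI, granting this permission could look like the following (vault name and service principal ID are placeholders):

az keyvault set-policy --name myKeyVaultName --spn yourServicePrincipalID --secret-permissions get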
In the VM deployment part of the template file you can then reference this secret like this:
resource "azurerm_virtual_machine" "myAzureVM" {
  os_profile {
    computer_name  = "myvm"
    admin_username = "labuser"
    admin_password = "${data.azurerm_key_vault_secret.mySecret.value}"
  }
}
You see, it's really much easier than working with ARM templates. For further reference, please have a look at my GitHub repository where I've uploaded all the Terraform-related code I used in this article. In my next article, I will show how to deploy an entire Azure environment using Terraform.
So long, happy testing!
Cheers,
Tom
Thanks for this article! It is similar to Microsoft's walkthrough on using Terraform with Azure, but I was hoping for some remedial learning (for those of us who have never used Terraform!). Quick question: In the section on setting up Terraform to use the service principal that we set up (dumb question coming up), where or how is the following information used? Is this saved in a file and then run using Terraform, or do I need to have a "bash" utility to run code similar to how PowerShell would work? I know this is a rudimentary question, but there seems to be a gap in most write-ups on this topic that assumes the reader is some sort of bash/Terraform expert already, which is not my case. Thanks!
{
  "appId": "yourServicePrincipalID",
  "displayName": "azure-cli-2019-01-24-11-58-24",
  "name": "http://azure-cli-2019-01-24-11-58-24",
  "password": "yourServicePrincipalPassword",
  "tenant": "yourAzureADTenantID"
}
My bad, I meant this set of code… where is this run or saved to?
echo "Setting environment variables for Terraform"
export ARM_SUBSCRIPTION_ID=$ARM_SUBSCRIPTION_ID
export ARM_CLIENT_ID=yourServicePrincipalID
export ARM_CLIENT_SECRET=yourServicePrincipalPassword
export ARM_TENANT_ID=yourAzureADtenantID
# Not needed for public, required for usgovernment, german, china
export ARM_ENVIRONMENT=public
Hi there,
the following passage is an Azure CLI script to create the service principal which is used for Terraform later:
ARM_SUBSCRIPTION_ID=yourSubscriptionID
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/$ARM_SUBSCRIPTION_ID"
The section you refer to (the export commands) is saved in your ~/.bash_profile file in your user's home directory on macOS. The export command on Unix and Linux operating systems is used for storing values in environment variables in your shell session. So if you save the section in your ~/.bash_profile, these variables are exported to your shell environment every time you start a new shell session. From there, you call Terraform, which will recognise those variables and use their values for logging in to your Azure environment.
You could also manually run the section in your bash shell, but storing those values in your profile makes it even easier.
Cheers and best,
Tom
Thanks again!
This is a really interesting article, but doesn’t solve (for me, anyway) the chicken-and-egg problem of service principals and Terraform. Specifically, we want to be able to use certificate-based authentication, which the TF Provider block supports, but retrieve the certificate from the key vault (not supported by the Provider block).
Even in the above scenario, how do you provision the user who runs Terraform at that point? Our goal is to make it as least-privilege as possible, with the exception of the service principal account referenced in the provider blocks. We also want any of our developers to be able to use Terraform, but have none of the provider information available to them. Ideally, the person running the 'terraform plan' and 'terraform apply' commands wouldn't need any rights within Azure.
Hi network geek and thank you for your feedback.
What you could do is have a CI/CD pipeline tool such as Azure DevOps in place. You create a service principal for Terraform with the respective rights needed on Azure (it might be a highly privileged service principal, depending on what you deploy via Terraform) and configure Azure DevOps to use this service principal every time there is a Terraform deployment. That way, your end-user accounts are not privileged but are eligible to log on to Azure DevOps and start the deployment process from there.
I guess I’ll write another blog post about role-based access control in a DevOps world soon so I can further explain it to you guys.