Learning Kubernetes part 3 - build Keyvault

Welcome back to another installment of learning Kubernetes… I realize that "learning AKS" might have been a better name, since this is all happening against Azure, but the goal is to learn how to run things in Kubernetes in Azure… so… there is that.

In previous installments, I have been through:

Learning Kubernetes part 1 - the Terraform setup
Learning Kubernetes part 2 - building AKS

This time, I am going to build a key vault. Nope, this piece isn’t needed to learn Kubernetes, but it sure is a convenient place to store the kubeconfig for an AKS cluster, just in case. Also - down the road, I might decide to use it as a secret store for the cluster. We will see.

There are a few pieces needed to stand up Key Vault in Azure with Terraform: the vault itself, access policies, and secrets. The code for each is outlined below.

Building the key vault resource is pretty quick. At a minimum it needs a name, location, resource group, tenant ID, and SKU to get started.

resource "azurerm_key_vault" "this" {
  name                = "${module.naming.key_vault.name}-${var.location}-${var.environment}"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  # Access policy blocks go here if configured inside the Key Vault resource
}
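
As an aside, for anything beyond a lab it can be worth setting the soft delete retention and purge protection explicitly rather than taking the defaults. These are real azurerm provider arguments, but the values below are just illustrative choices for a lab:

resource "azurerm_key_vault" "this" {
  # ... same base arguments as above ...

  # Keep deleted vaults and secrets recoverable for a week
  # instead of the 90-day default, so lab teardowns age out faster
  soft_delete_retention_days = 7

  # Purge protection blocks permanent deletion until the retention
  # window passes; useful in production, but it slows down tearing
  # a lab back down, so it stays off here
  purge_protection_enabled = false
}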

Once that is built, I want to be sure to give myself access to the vault so I can get to whatever is stored there. In addition, any other managed identities or service principals that might need to get to things stored here should also get access policies.

There are two ways to handle access policies. The first is inline, inside the Key Vault resource:

  access_policy {
    object_id = "9a######-####-####-####-########92c0"
    tenant_id = data.azurerm_client_config.current.tenant_id

    secret_permissions = [
      "Get", "List", "Purge", "Delete", "Backup", "Recover", "Restore", "Set"
    ]
  }
  access_policy {
    object_id = "8c######-####-####-####-########bd1a"
    tenant_id = data.azurerm_client_config.current.tenant_id

    secret_permissions = [
      "Get", "List", "Purge", "Delete", "Backup", "Recover", "Restore", "Set"
    ]
  }

Or as separate Terraform resource blocks:

resource "azurerm_key_vault_access_policy" "one" {
  key_vault_id = azurerm_key_vault.this.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = "insert object id here"

  secret_permissions = [
    "Get", "List", "Purge", "Delete", "Backup", "Recover", "Restore", "Set"
  ]
}

resource "azurerm_key_vault_access_policy" "two" {
  key_vault_id = azurerm_key_vault.this.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = "insert object id here"

  secret_permissions = [
    "Get", "List", "Purge", "Delete", "Backup", "Recover", "Restore", "Set"
  ]
}

Currently, either way works, but they are mutually exclusive; you have to pick one. I am generally of the mindset that resources should be separated when possible, but access policies are a bit of a special beast in my opinion. In a real environment, using RBAC for Key Vault is generally preferred, since access then comes from the same type of IAM-managed roles as other resources use. Even so, I will usually configure access policies within Key Vault as a backstop in the event RBAC is disabled or not desired.
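
For reference, the RBAC route replaces access policies with standard role assignments: turn on RBAC authorization on the vault and grant a built-in role scoped to it. A rough sketch, using the current client as the principal; in azurerm 3.x the argument is named enable_rbac_authorization:

resource "azurerm_key_vault" "rbac_example" {
  # ... same base arguments as the vault above ...

  # Switch the vault's data plane from access policies to Azure RBAC
  enable_rbac_authorization = true
}

# Full secret management via a built-in role instead of an access policy
resource "azurerm_role_assignment" "kv_secrets_officer" {
  scope                = azurerm_key_vault.rbac_example.id
  role_definition_name = "Key Vault Secrets Officer"
  principal_id         = data.azurerm_client_config.current.object_id
}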

One thing to note

When deploying Access Policy as a separate resource, destroying key vault with terraform can get tricky. The reason has to do with how Terraform takes things down.

If an access policy is created within the Key Vault resource itself, it only goes away when the vault is removed. If the access policy is a separate resource, Terraform may destroy it in parallel with, or before, other items. If the access policy that grants Terraform its permissions is removed before the secrets are, the secret deletions fail with permission errors, and the destroy run gets stuck before it ever reaches the Key Vault resource itself.

Just something to consider when deciding how to handle access policy in Key Vault. Similar situations can arise with RBAC based vault access. I am hoping to find a way to sort this out to ensure clean destruction of resources. Please comment if you have solved this.
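
One mitigation I have seen is an explicit depends_on from secret resources to the access policy. Terraform destroys resources in the reverse of dependency order, so this forces secrets to be deleted while the policy still grants permissions. A sketch, not something I have fully battle-tested:

  # Inside each azurerm_key_vault_secret resource:
  # destroy order is the reverse of create order, so this ensures
  # the secret is deleted before the access policy is
  depends_on = [azurerm_key_vault_access_policy.one]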

With a vault set up and access enabled, I can consider what to put in the key vault as a secret. This is a separate resource in Terraform whose value can draw on the outputs of anything else that was created. In this case, the admin kubeconfig is the target:

resource "azurerm_key_vault_secret" "this" {
  name         = "kubeconfig"
  key_vault_id = azurerm_key_vault.this.id
  value        = azurerm_kubernetes_cluster.this.kube_config_raw
}
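
Once the secret exists, it helps to surface its identifier so it is easy to find later. One way is a Terraform output (the output name here is my own choice); the versioned secret ID is not itself sensitive, only the value is:

# Handy for locating the stored kubeconfig later, e.g. from the CLI
output "kubeconfig_secret_id" {
  description = "Versioned ID of the stored kubeconfig secret"
  value       = azurerm_key_vault_secret.this.id
}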

That’s all there is to it: a key vault for storing secrets, configs, etc. for use with Kubernetes. Remember, this is high level and does not consider any private link endpoints for resources in Azure; the idea is a lab to mess around with. As I continue on my journey with AKS, I will add things like this to the configuration and update with posts about them accordingly.

For now, it’s “Just the Facts, Ma’am” keeping things as simple and cheap as possible.

Next Time

Next time, I will be looking at creating a Virtual Network for Kubernetes and some other things to use.

Written on July 12, 2022