Terraform Methods for Ciinabox Pipelines

We’ve added 3 new Terraform methods to the ciinabox-pipelines Jenkins shared library. These will help us standardise the process of managing and deploying Terraform stacks.

Plan

terraformPlan(
  workspace: 'dev', // (required, workspace name)
  variables: [ // (optional, key/value pairs to pass in to terraform as variable overrides)
    key: 'value'
  ],
  plan: 'plan.out' // (optional, defaults to tfplan-${workspace})
)

Docs

Apply

terraformApply(
  workspace: 'dev', // (required, workspace name)
  plan: 'plan.out' // (optional, defaults to tfplan-${workspace})
)

Docs

Destroy

terraformDestroy(
  workspace: 'dev' // (required, workspace name)
)

Docs

Terraform Concepts

Before we start building the pipeline we need to understand a couple of Terraform concepts these pipeline methods rely on.

Backend

Terraform manages the state of a stack with a state file, which is stored on the local machine if no backend is provided. That doesn’t work for a shared pipeline, so Terraform provides backends to store the state in a central location; we’ll use the S3 backend for this example.

State Locking

Now that we have a central place to store our state file, what happens if multiple pipelines attempt to update the stack at the same time? To prevent this, Terraform supports state locking; with the S3 backend we can achieve this using a DynamoDB table.
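
One operational note: if a run is killed mid-apply, the lock can occasionally be left behind. Terraform prints the lock ID in its error message, and a stale lock can be released manually once you’ve confirmed no other run is in progress:

terraform force-unlock <lock-id>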

Workspaces

Now that we have a central place to store our state and can lock it to prevent concurrent updates, what about handling multiple environments with the same stack? Terraform has a solution for this too, called workspaces. A workspace is the logical equivalent of an environment. Based upon the current workspace we can switch behaviour or variable values inside our Terraform code.
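
Outside the pipeline methods, workspaces are managed with the Terraform CLI. For reference, the core commands look like this:

terraform workspace list        # list all workspaces, * marks the current one
terraform workspace select dev  # switch to the dev workspace
terraform workspace show        # print the name of the current workspace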

When we use workspaces with the S3 backend, Terraform also keeps a separate state file, and a separate DynamoDB lock, for each workspace.

terraform-bucket
└── env:
    ├── dev
    │   └── terraform
    │       └── state
    ├── test
    │   └── terraform
    │       └── state
    ├── uat
    │   └── terraform
    │       └── state
    └── prod
        └── terraform
            └── state
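
If you want to confirm this layout yourself, listing the bucket shows the per-workspace state keys (env: is the default workspace key prefix used by the S3 backend):

aws s3 ls s3://terraform-bucket/env:/ --recursive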

Workspaces also provide the mechanism for cross-account deployments, allowing the backend to live in a different account to the deployed stack. This lets us follow the same model as CloudFormation: artifacts are controlled in the ops account, and Jenkins assumes a role to deploy into the dev and prod accounts.

Building a Terraform Pipeline

Let's build a new Terraform pipeline, YAY!

Backend Setup

Create our backend bucket

aws s3api create-bucket --bucket terraform-bucket --region ap-southeast-2 --create-bucket-configuration LocationConstraint=ap-southeast-2

Enable bucket versioning in case something happens to our state files

aws s3api put-bucket-versioning --bucket terraform-bucket --region ap-southeast-2 --versioning-configuration Status=Enabled

Create our DynamoDB table

aws dynamodb create-table \
  --attribute-definitions AttributeName=LockID,AttributeType='S' \
  --table-name terraform-state \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
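
Optionally, confirm the table is ready before wiring it into the backend:

aws dynamodb describe-table --table-name terraform-state --query 'Table.TableStatus'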

Terraform

Now on to our Terraform code. First we'll add the backend configuration

terraform {
  backend "s3" {
    encrypt = true
    bucket = "terraform-bucket"
    region = "ap-southeast-2"
    key = "terraform/state"
    dynamodb_table = "terraform-state"
  }
}
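
With the backend block in place, the backend needs to be initialised and the workspaces created before the first plan. Run once, locally or as a bootstrap step, that might look like:

terraform init               # configures the s3 backend and dynamodb locking
terraform workspace new dev  # create the dev workspace
terraform workspace new prod # create the prod workspace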

Next we'll add the IAM roles for cross-account access

variable "workspace_iam_roles" {
  default = {
    dev = "arn:aws:iam::<account-id>:role/ciinabox"
    prod = "arn:aws:iam::<account-id>:role/ciinabox"
  }
}

And finally the AWS provider

provider "aws" {
  version = "~> 2.65.0"
  region = "ap-southeast-2"
  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
    session_name = "ciinaboxterraformdeployment" // only [a-zA-Z0-9] characters allowed in the session_name
  }
}

As you can see, role_arn = var.workspace_iam_roles[terraform.workspace] selects the required IAM role based upon the current workspace.
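
A quick local sanity check of the role selection, assuming your own credentials are allowed to assume the dev ciinabox role:

terraform workspace select dev
terraform plan  # the provider assumes the dev role from workspace_iam_roles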

Pipeline

Finally, the Jenkins pipeline to deploy our environments.

Create a base pipeline with our Terraform Docker agent.

@Library('ciinabox') _

pipeline {

  agent {
    docker {
      image 'base2/terraform:0.12.20'
      label 'linux'
    }
  }

  stages {
    // stages are added in the sections below
  }

}

First off we'll plan all of our environments. This prints out the changes that would be made if the plan were applied. The plans are archived as build artifacts so they can be viewed without having to trawl through the console logs. In this step we'll pass through any runtime variables we need using key/value pairs.

    stage('plan') {
      steps {
        terraformPlan(
          workspace: 'dev', 
          variables: [
            environment: 'dev'
          ]
        )
        terraformPlan(
          workspace: 'prod', 
          variables: [
            environment: 'prod'
          ]
        )
      }
    }

Now that we have our plans and we're happy with them, we can apply the changes

    stage('deploy dev') {
      input {
        message "apply terraform dev changes?"
      } 
      steps {
        terraformApply(workspace: 'dev')
      }
    }

Then we can deploy the next environment

    stage('deploy prod') {
      input {
        message "apply terraform prod changes?"
      } 
      steps {
        terraformApply(workspace: 'prod')
      }
    }

To see the full demo, visit the demo-terraform-infrastructure repo.


PR - https://github.com/base2Services/ciinabox-pipelines/pull/120
Demo - https://github.com/base2Services/demo-terraform-infrastructure