05 Oct 2015 | terraform, provisioning, aws, hashicorp, devops
In my last post I covered the basics of provisioning a single EC2 instance with terraform. This time we’re going to go further and explore the provisioners and some other features. I’m doing some pretty funky things just to show the power of terraform. As you’ll see later there are other (better) ways.
So last time we provisioned an AWS box with an Ubuntu base image. This is not really very useful on its own; I’m going to assume you’d like to at least install some applications on there. Of course we could hop onto the box and run some commands, but one of the main points of terraform is reproducibility, and by manually setting up the box we end up with a ‘snowflake’: unique, fragile, and liable to fall apart the moment you touch it, which is exactly what the box would become without configuration management. So now we introduce the concept of provisioners.
Provisioners are a pretty common concept in most HashiCorp products. As we saw in the last example a builder built an AWS instance, but now it’s time for a provisioner to install all the required software on our box. A provisioner is essentially something that runs commands or tools to configure the box: anything from a couple of shell commands to a full run of something like Puppet. In contrast to other HashiCorp applications, terraform only includes a fairly small set of provisioners (file, local-exec, remote-exec and chef at the time of writing), whereas Packer and Vagrant contain rather a lot.
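As a quick illustration of the simplest of these, a local-exec provisioner just runs a command on the machine running terraform itself, not on the new instance. This sketch isn’t from the original example; the output file name is made up:

```
resource "aws_instance" "example" {
  ami           = "ami-408c7f28"
  instance_type = "t1.micro"

  # Runs on the machine invoking terraform, not on the instance
  provisioner "local-exec" {
    command = "echo ${aws_instance.example.public_ip} >> public_ips.txt"
  }
}
```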
So we take our original terraform file:
provider "aws" {
  access_key = "mykey"
  secret_key = "imobviouslynotgoingtoputmyrealkeyhere:)"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-408c7f28"
  instance_type = "t1.micro"
  key_name      = "mykey"
}
We’ll add some steps: let’s copy a shell script to the box, run the shell script, and then provision via ansible-pull. Normally Ansible runs in push mode, i.e. changes are pushed out to a set of boxes; ansible-pull inverts this and pulls the configuration onto the box itself. So we could add the following shell script, assuming we had some things in a GitHub repo:
sudo apt-get -y update
sudo apt-get -y install ansible git
cd /tmp
# Assuming here we don't rely on any ssh keys
git clone https://github.com/someone/somerepo.git
cd somerepo
# ansible-pull clones the repo again itself, so we point it at the local checkout
sudo ansible-pull -U /tmp/somerepo -i hosts
Then we could have a simple playbook like this:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - apt: name=python-httplib2 update_cache=yes
    - name: Add core users.
      user: name={{ item }} shell=/bin/bash groups=admin state=present
      with_items:
        - bob
So this is the basis of some really simple automation. Then we just plug in the provisioners, so our complete file now looks like this:
provider "aws" {
  access_key = "myaccesskey"
  secret_key = "notgivingmyrealkeyaway:)"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-408c7f28"
  instance_type = "t1.micro"
  key_name      = "mykey"

  provisioner "file" {
    connection {
      user     = "ubuntu"
      host     = "${aws_instance.example.public_ip}"
      timeout  = "1m"
      key_file = "/path/to/ssh_key"
    }
    source      = "go.sh"
    destination = "/home/ubuntu/go.sh"
  }

  provisioner "remote-exec" {
    connection {
      user     = "ubuntu"
      host     = "${aws_instance.example.public_ip}"
      timeout  = "1m"
      key_file = "/path/to/ssh_key"
    }
    script = "go.sh"
  }
}
So there’s some really cheap dirty automation.
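One small tidy-up worth knowing (this variant is mine, not from the example above): the connection block can be declared once at the resource level rather than repeated inside every provisioner, something like:

```
resource "aws_instance" "example" {
  ami           = "ami-408c7f28"
  instance_type = "t1.micro"
  key_name      = "mykey"

  # Shared by all provisioners on this resource
  connection {
    user     = "ubuntu"
    host     = "${aws_instance.example.public_ip}"
    key_file = "/path/to/ssh_key"
  }

  provisioner "file" {
    source      = "go.sh"
    destination = "/home/ubuntu/go.sh"
  }

  provisioner "remote-exec" {
    script = "go.sh"
  }
}
```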
With anything beyond a basic single machine I wouldn’t use terraform provisioners to do the actual machine build as illustrated here. A much nicer way would be to build an AMI with Packer, giving us the following workflow:
packer build (bake an AMI with the software already installed)
terraform plan / terraform apply (roll out instances from that AMI)
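For reference, a minimal Packer template for that AMI-baking step might look something like this — a sketch under assumptions: the ami_name and the reuse of go.sh as the provisioning script are mine, not from the post:

```
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-408c7f28",
    "instance_type": "t1.micro",
    "ssh_username": "ubuntu",
    "ami_name": "baked-base-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "go.sh"
  }]
}
```

Terraform would then launch instances from the resulting AMI instead of configuring each one at boot time.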
Terraform seems good for this kind of infrastructure provisioning, but there are still some issues:
Many people like to completely trash their servers with every deployment (the phoenix server pattern). It seems like doing this in a blue/green deployment style could be fiddly with terraform. This is something I’d like to investigate further.