Alas, this is our final chapter. So, I want to do something fun, and also talk about how Ansible can be pushed further.
First, our Ansible setup could be a lot more powerful. We already learned that instead of having one big play in our playbook, we could have multiple plays. One play might set up the web servers, another play might provision the database servers, and one final play could configure all of the Redis instances. We're using one play because - in our smaller setup - all of that lives on the same host.
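Sketched out, that multi-play idea could look something like this - the group names and tasks here are hypothetical placeholders, not from our project:

```yaml
---
# One playbook, three plays - each play targets a different host group.
- hosts: webservers
  tasks:
    - debug: msg="Set up the web servers here"

- hosts: dbservers
  tasks:
    - debug: msg="Provision the database servers here"

- hosts: redis
  tasks:
    - debug: msg="Configure the Redis instances here"
```

Each play runs top-to-bottom against its own group, all in one `ansible-playbook` run.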
Also, each host group can have multiple servers below it. We could launch 10 EC2 instances and provision all of them at once.
And finally, Ansible can even be used to launch the instances themselves! A few minutes ago, we manually launched the EC2 instance through the web interface. Lame! Let's teach Ansible to do that.
How? A module of course! The ec2 module. This module is really good at interacting with EC2 instances. Actually, if you click the Cloud Modules section on the left, you'll find a ton of modules for dealing with EC2 and many other services, like IAM, RDS and S3. And of course, modules exist for all of the major cloud providers. Ansible rocks!
So far, our playbook has been executing commands on the remote hosts - like our virtual machine. But, in this case... we don't need to do that. Yea, we can run the ec2 module locally... because the purpose of this module is to talk to the AWS API. In other words, it doesn't matter what host we execute it from!
Wherever you decide to execute these tasks, you need to make sure that something called Boto is installed. It's a Python library for talking to AWS... which you might also need to install locally. So far, Python has already come pre-installed on our VM and EC2 instances.

If you're not sure whether you have Boto, just try it! If you get an error about Boto, check into installing it.
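If you want to check from Python itself whether Boto is importable, a tiny helper like this works - importlib is in the standard library, and "boto" is just the module name you'd pass in:

```python
import importlib.util

def has_module(name):
    """Return True if the named module can be imported, False otherwise."""
    return importlib.util.find_spec(name) is not None

# Example: has_module("boto") tells you whether the ec2 module's
# Python dependency is installed on this machine.
```

From the shell, `python -c 'import boto'` does the same check: no output means it's installed.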
Since these new tasks will run against a new host - localhost - we can organize them as a new play in our playbook... or create a new playbook file entirely. To keep things simple, I'll create a new playbook file - aws.yml.

Inside, you know the drill: start with the host, set to local. Below that, set gather_facts to false:

- hosts: local
  gather_facts: False
  ... lines 4 - 31
Why turn that off? Each time we run the playbook, the first task is called "Setup". That task gathers information about the host and creates some "facts"... which is cool, because we can use those facts in our tasks.

But since we're simply running against our local machine, we're not going to need these facts. Skipping that step saves time.
For the EC2 module to work, we need an AWS access key and secret key. You can find these inside of the IAM section of AWS under "Users". I already have mine prepared. Let's use them!
But wait! We probably don't want to hardcode the secret key directly in our playbook. Nope, let's use the vault!
ansible-vault edit ansible/vars/vault.yml
Type in beefpass. Then, I'll paste in 2 new variables: vault_aws_access_key and vault_aws_secret_key:
# ansible/vars/vault.yml
---
# ...
vault_aws_access_key: "AKIAJAWKEZQ6S7LM3EKQ"
vault_aws_secret_key: "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"
Save and quit!
Just like before, open vars.yml and create two new variables: aws_access_key set to vault_aws_access_key and aws_secret_key set to vault_aws_secret_key:

... lines 2 - 7
aws_access_key: "{{ vault_aws_access_key }}"
aws_secret_key: "{{ vault_aws_secret_key }}"
Finally, open up playbook.yml so we can steal the vars_files section. Paste that into the new playbook:

- hosts: local
  gather_facts: False
  vars_files:
    - ./vars/vault.yml
    - ./vars/vars.yml
  ... lines 8 - 31
To use the keys, you have two options: pass them directly as options to the ec2 module, or set them as environment variables: AWS_ACCESS_KEY and AWS_SECRET_KEY. In fact, if those environment variables are already set up on your system, you don't need to do anything! The module will just pick them up!
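The first option - passing the keys directly to the module - would look roughly like this (aws_access_key and aws_secret_key are real ec2 module parameters; the rest of the task is elided):

```yaml
# Alternative to environment variables: hand the credentials
# straight to the ec2 module as options.
- ec2:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    # ... the rest of the instance options ...
```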
Let's set the environment variables... because it's a bit more interesting. Just like before, use the environment key. Then set AWS_ACCESS_KEY to {{ aws_access_key }}. Repeat for AWS_SECRET_KEY set to {{ aws_secret_key }}:

- hosts: local
  ... lines 3 - 8
  environment:
    AWS_ACCESS_KEY: "{{ aws_access_key }}"
    AWS_SECRET_ACCESS_KEY: "{{ aws_secret_key }}"
    # or use aws_access_key/aws_secret_key parameters on ec2 module instead
  ... lines 13 - 31
Boom! We are ready to start crushing it with this module... or any of those AWS modules.
And actually, using the module is pretty simple! We're just going to give it a lot of info about the image we want, the security group to use, the region and so on.
Add a new task called "Create an Instance". Use the ec2 module and start filling in those details:

- hosts: local
  ... lines 3 - 13
  tasks:
    - name: Create an instance
      ec2:
        ... lines 17 - 31
For instance_type, use t2.micro and set image to ami-41d48e24:

- hosts: local
  ... lines 3 - 13
  tasks:
    - name: Create an instance
      ec2:
        instance_type: t1.micro
        image: ami-a15e0db6
        ... lines 19 - 31
That's the exact image we used when we launched the instance manually.
Next, set wait to yes - that's not important for us, but it tells Ansible to wait until the machine gets into a "booted" state. If you're going to do more setup afterwards, you'll need this:

- hosts: local
  ... lines 3 - 13
  tasks:
    - name: Create an instance
      ec2:
        instance_type: t1.micro
        image: ami-a15e0db6
        wait: yes
        ... lines 20 - 31
Then, set group to web_access_testing, count to 1, key_name to Ansible_AWS_tmp, region to us-east-2, and instance_tags with Name: MooTube instance:

- hosts: local
  ... lines 3 - 13
  tasks:
    - name: Create an instance
      ec2:
        instance_type: t1.micro
        image: ami-a15e0db6
        wait: yes
        group: web_access
        count: 1
        key_name: KnpU-Tutorial
        region: us-east-1
        instance_tags:
          Name: MooTube instance
        ... lines 26 - 31
Obviously, tweak whatever you need!
Just like any other module, we can register the output to a variable. I wonder what that looks like in this case? Add register: ec2 to find out:

- hosts: local
  ... lines 3 - 13
  tasks:
    - name: Create an instance
      ec2:
        ... lines 17 - 25
      # Could be useful further to get Public IP, DNS, etc.
      register: ec2
  ... lines 28 - 31
Then, debug it with debug: var=ec2:

- hosts: local
  ... lines 3 - 13
  tasks:
    - name: Create an instance
      ec2:
        ... lines 17 - 25
      # Could be useful further to get Public IP, DNS, etc.
      register: ec2
    # debug the output to see what AWS returns back
    - debug: var=ec2
Give it a try!
ansible-playbook ansible/aws.yml -i ansible/hosts.ini --ask-vault-pass
Cool, it skipped the setup task and went straight to work! If you get an error about Boto - either it doesn't exist, or it can't find the region - you may need to install or upgrade it. I did have to upgrade mine - I could use the us-east-1 region, but not us-east-2. Weird, right? Upgrading for me meant running:

easy_install -U boto
And, done! Yes! It's green! And the variable is awesome: it gives us an instance id and a lot of other great info, like the public IP address. If I refresh my EC2 console, and remove the search filter... yes! Two instances running.
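If you wanted to use that registered variable in a later task, you could read the public IP right out of it - a sketch, assuming the ec2 module's usual return shape (a list of created instances under ec2.instances):

```yaml
# Hypothetical follow-up task: pull details out of the registered variable.
- debug:
    msg: "New instance {{ ec2.instances[0].id }} has public IP {{ ec2.instances[0].public_ip }}"
```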
I can feel the power!
We now have 2 playbooks: one for booting the instances, and another for provisioning them. If you wanted Ansible to boot the instances and then provision them, that's totally possible! Ultimately, we could take this public IP address and add it as a new host under the aws group:

... lines 1 - 6
[aws]
54.205.128.194 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/KnpU-Tutorial.pem
... lines 9 - 13
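Instead of editing the inventory by hand, a follow-up task could push the new IP into the aws group in-memory with the add_host module - a sketch, assuming the ec2 module's registered return shape:

```yaml
# Hypothetical: make the fresh instance available to a later play
# in the same run, without touching hosts.ini.
- add_host:
    name: "{{ ec2.instances[0].public_ip }}"
    groups: aws
    ansible_user: ubuntu
    ansible_ssh_private_key_file: ~/.ssh/KnpU-Tutorial.pem
```

A later play targeting hosts: aws would then provision the instance Ansible just booted.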
Of course... with our current setup, the hosts.ini inventory file is static: each time we launch a new instance, we would need to manually put its IP address here:

... lines 1 - 6
[aws]
54.205.128.194 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/KnpU-Tutorial.pem
... lines 9 - 13
But, there are ways to have a dynamic hosts file. Imagine a setup where Ansible automatically looks at the servers booted in the cloud and uses them for your inventory. That's beyond the scope of this tutorial, but if you need that, go for it!
Whoa, we're done! Thanks for sticking with me to cover this huge, but super powerful tool! When you finally figure out how to get Ansible to do your laundry for you, send me your playbook. Or better, create a reusable role and share it with the world.
All right guys, seeya next time.
Hey Ricardo,
I'm happy you like it! ;)
Btw, you may be interested in another of our Ansible courses, called Ansistrano: https://knpuniversity.com/s... . Check it out if you're interested in deploying apps with Ansible.
Cheers!
Dammit, Victor, I got my hands dirty with the deploy helper and I've done deploy and rollback. Anyhow, I'll check out Ansistrano.
BTW: I use it to deploy my Laravel app
Hey Ricardo,
Haha, oh man, we used to have an old creepy deploy on KnpU using rsync, but then finally migrated to Ansistrano - now deploys are much, much easier and clearer. And now we have zero-downtime deploys!

It should fit just fine for any web app - it's very similar to the Capistrano strategy, in case you've heard of it.
Cheers!
Great tutorial! But it's missing some stuff IMO:
- Clone a private git repo
- Secure some stuff (Iptables, Fail2ban, etc...)
- Create the mysql user instead of the root one
What do you think?
Hey Numerogeek,
Thanks! About the topics you mentioned:
> Clone a git repo that's private
We covered it in the Ansistrano tutorial, see: https://knpuniversity.com/s...
> Secure some stuff (Iptables, Fail2ban, etc...)
Yeah, those topics were not covered. I think they're slightly pro level, and not all our users know what Iptables, Fail2ban, etc. are. And we didn't want to explain too much server terminology here, to avoid bloating this course, which is about Ansible, not about administrating Linux servers :) BTW, Ansible has an Iptables module, see: https://docs.ansible.com/an... - and we do explain what an Ansible module is and how to use one in your playbooks. So if you know exactly what Iptables is - you can easily handle it by yourself.
> create the mysql user instead of the root one
We mostly talk about MySQL on VirtualBox, so it's not a big deal to use the root user - it makes life easier during development, and thanks to the expanded permissions, we're able to create the DB and schema with Symfony console commands. But yeah, that's a good point to keep in mind when deploying to production.
Cheers!
Thanks Victor for your generosity. This has removed the Ansible 'cloud' from my eyes... completely. Let me head straight to Ansistrano.
So I got this error...
InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
Fixed by adding a vpc_subnet_id from the vpc of the security group.
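For anyone hitting the same thing, the fix looks roughly like this - the subnet id below is a made-up placeholder; use one from the same VPC as your security group:

```yaml
# Hypothetical: launching into a VPC requires a subnet that
# belongs to the same VPC as the security group.
- ec2:
    # ... other instance options ...
    group: web_access
    vpc_subnet_id: subnet-0123456789abcdef0
```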
// composer.json
{
"require": {
"php": ">=5.5.9",
"symfony/symfony": "3.1.*", // v3.1.4
"doctrine/orm": "^2.5", // v2.7.2
"doctrine/doctrine-bundle": "^1.6", // 1.6.4
"doctrine/doctrine-cache-bundle": "^1.2", // 1.3.0
"symfony/swiftmailer-bundle": "^2.3", // v2.3.11
"symfony/monolog-bundle": "^2.8", // 2.11.1
"symfony/polyfill-apcu": "^1.0", // v1.2.0
"sensio/distribution-bundle": "^5.0", // v5.0.12
"sensio/framework-extra-bundle": "^3.0.2", // v3.0.16
"incenteev/composer-parameter-handler": "^2.0", // v2.1.2
"doctrine/doctrine-migrations-bundle": "^1.2", // v1.2.0
"snc/redis-bundle": "^2.0", // 2.0.0
"predis/predis": "^1.1", // v1.1.1
"composer/package-versions-deprecated": "^1.11" // 1.11.99
},
"require-dev": {
"sensio/generator-bundle": "^3.0", // v3.0.8
"symfony/phpunit-bridge": "^3.0", // v3.1.4
"doctrine/data-fixtures": "^1.1", // 1.3.3
"hautelook/alice-bundle": "^1.3" // v1.4.1
}
}
A fantastic course, entertaining and really useful. Thank you very much!