Elastic Beanstalk with Docker using Terraform
Posted on April 13, 2017 (Last modified on July 11, 2024)

I've just been investigating AWS Elastic Beanstalk, and I want to use Terraform for it. This is what I've done and how I got it running. I'm basically writing this because the docs for it are either super-long (and still missing critical points) or super-short (and also missing critical points), at least the ones I've found.
This should get you up and running in very little time. You can also get all the code from a demo GitHub repository.
The Architectural Overview is a good page to read to get an idea of what you’re about to do. It’s not that long.
In short, Elastic Beanstalk runs a version of an application in an environment. So the process is: declare an application, define a couple of versions and environments, and then combine one specific version with one specific environment of the app to get an actually running deployment.
The environment is just a set of hosts configured in a special way (autoscaling & triggers, subnets, roles, etc.), whereas the application version is the info about how to deploy the containers on that environment (ports, env variables, etc.). Naturally, you think of having a DEV environment which runs “latest”, and a PROD environment which runs “stable” or so. Go crazy.
For the example here you need a couple of things & facts:

- a VPC and the subnet IDs the environment should use
- an EC2 instance profile for the Beanstalk hosts (the IamInstanceProfile setting below)
- an Elastic Beanstalk service role (see "final notes" at the bottom)
- an S3 bucket to hold the application versions
- a Docker image to deploy (I use flypenguin/test:latest here)
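The Terraform code below expects a handful of input variables; a minimal variables.tf along these lines should cover them (just a sketch - the repository's files define more parameters):

# file: variables.tf (minimal sketch - only the variables used in the snippets below)
variable "vpc_id" {
  description = "ID of the VPC to deploy into"
}

variable "subnets" {
  description = "subnet IDs for the Elastic Beanstalk hosts / ELB"
  type        = "list"
}

variable "instance_role" {
  description = "name of the EC2 instance profile for the Beanstalk hosts"
}

variable "ebs_service_role" {
  description = "name of the Elastic Beanstalk service role (see final notes)"
}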
The files in the repository have way more parameters, but this is the basic set which should get you running (I tried once, then added all that stuff). The main.tf file below will create the application and an environment associated with it.
# file: main.tf
resource "aws_elastic_beanstalk_application" "test" {
  name        = "ebs-test"
  description = "Test of beanstalk deployment"
}

resource "aws_elastic_beanstalk_environment" "test_env" {
  name         = "ebs-test-env"
  application  = "${aws_elastic_beanstalk_application.test.name}"
  cname_prefix = "mytest"

  # the next line IS NOT RANDOM, see "final notes" at the bottom
  solution_stack_name = "64bit Amazon Linux 2016.09 v2.5.2 running Docker 1.12.6"

  # There are a LOT of settings, see here for the basic list:
  # https://is.gd/vfB51g
  # This should be the minimally required set for Docker.

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = "${var.vpc_id}"
  }

  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = "${join(",", var.subnets)}"
  }

  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     = "${var.instance_role}"
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "ServiceRole"
    value     = "${var.ebs_service_role}"
  }
}
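Not strictly necessary, but handy: an output sketch like the following makes terraform apply print the environment's address (cname is the fully qualified DNS name Elastic Beanstalk assigns to the environment):

# file: outputs.tf (optional sketch)
output "ebs_cname" {
  # fully qualified DNS name of the Elastic Beanstalk environment
  value = "${aws_elastic_beanstalk_environment.test_env.cname}"
}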
If you run this, at least one host and one ELB should appear in the defined subnets. Still, this is an empty environment; there's no app running in it yet. If you ask yourself, "where's the version he talked about?" - well, it's not in there. We didn't create one yet. This is just the very basic platform you need to run a version of an app.
In my source repo you can now just use the script app_config_create_and_upload.sh, followed by deploy.sh. You should be able to figure out how to use them, and they should work out of the box. But we're here to explain, so this is what happens behind the scenes if you do this:
app_config_create_and_upload.sh creates a "Dockerrun.aws.json" file with the information about the service (Docker image, etc.) to deploy, and uploads it to S3 as the application bundle. The next script, deploy.sh, then registers that bundle as an application version and deploys it into the environment; that is exactly what the rest of this post walks through step by step.
This is the Dockerrun.aws.json file which describes our single-container test application:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "flypenguin/test:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Volumes": [],
  "Logging": "/var/log/flypenguin-test"
}
See “final notes” for the “ContainerPort” directive.
I also guess you know how to upload a file to S3, so I'll skip that. If not, look in the script. The Terraform declaration to add the version to Elastic Beanstalk looks like this (if you used my script, a file called app_version_<VERSION>.tf was created for you automatically with pretty much this content):
# define elastic beanstalk app version "latest"
resource "aws_elastic_beanstalk_application_version" "latest" {
  name        = "latest"
  application = "${aws_elastic_beanstalk_application.test.name}"
  description = "Version latest of app ${aws_elastic_beanstalk_application.test.name}"
  bucket      = "my-test-bucket-for-ebs"
  key         = "latest.zip"
}
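If you would rather let Terraform manage the bucket and the upload as well, a sketch along these lines should do it - assuming the zipped Dockerrun.aws.json lies next to your Terraform files as latest.zip (the bucket name is the one used in the version resource above; S3 bucket names are global, so pick your own):

# sketch: bucket and application bundle managed by Terraform instead of the script
resource "aws_s3_bucket" "ebs_versions" {
  bucket = "my-test-bucket-for-ebs"
}

resource "aws_s3_bucket_object" "latest_zip" {
  bucket = "${aws_s3_bucket.ebs_versions.id}"
  key    = "latest.zip"
  source = "latest.zip"
}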
Finally, deploy the version into the environment using the AWS CLI:

$ aws elasticbeanstalk update-environment \
    --application-name ebs-test \
    --version-label latest \
    --environment-name ebs-test-env
If all of that was done correctly, this should be it, and you should be able to access your app under your configured address.
My repo works, at least for me (and I hope for you as well). I did not yet figure out the autoscaling, for which I didn't have time. I will catch up in a second blog post once I have figured that out. First tests gave pretty weird results 🙂.
The reason why I did this (when I have Rancher available to me) is the auto-scaling and the host management: this way I don't need to manage any more hosts, Docker versions and Rancher deployments just to deploy a super-simple, CPU-intensive, scaling production workload which relies on very stable (even pretty conservative) components. Also, I learned something.
Finally, after reading a lot of postings and way too many AWS docs, I am surprised how easy this thing actually is. It certainly doesn't look that way when you start reading up on it. I tried to capture the essence of the whole process in this blog post.
Final notes

- I have no idea what the aws_elastic_beanstalk_configuration_template Terraform resource is for. I would like to understand it, but the documentation is rather … sparse.
- The solution_stack_name is not random: it has to be one of the exact strings you get from $ aws elasticbeanstalk list-available-solution-stacks … or from the AWS documentation. Whatever is to your liking.
- If you don't set a security group (aws:autoscaling:launchconfiguration - "SecurityGroups"), one will be created for you automatically. That might not be convenient, because it means that on "terraform destroy" this group might not be destroyed automatically (which is just a guess, I didn't test this).
- You have to create the Elastic Beanstalk service role yourself; the AWS docs describe the way to create it. For some reason on the first test run this did not seem to be necessary; on all subsequent runs it was, though. Sorry that I couldn't figure out how to do this with Terraform. Now you can use (if you chose that name) "aws-elasticbeanstalk-service-role" as your ServiceRole parameter.
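If you do want to try the service role in Terraform anyway, an untested sketch could look like the following - the trust policy and the two managed policies are assumptions based on what the AWS console wizard sets up, so double-check them before relying on this:

# untested sketch of the Elastic Beanstalk service role in Terraform
resource "aws_iam_role" "ebs_service_role" {
  name = "aws-elasticbeanstalk-service-role"

  # assumption: this is the trust policy the console wizard creates
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "elasticbeanstalk.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "elasticbeanstalk" } }
    }
  ]
}
EOF
}

# assumption: these are the two managed policies the wizard attaches
resource "aws_iam_role_policy_attachment" "ebs_enhanced_health" {
  role       = "${aws_iam_role.ebs_service_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth"
}

resource "aws_iam_role_policy_attachment" "ebs_service" {
  role       = "${aws_iam_role.ebs_service_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkService"
}

With something like that in place, "${aws_iam_role.ebs_service_role.name}" could feed the ebs_service_role variable instead of a hand-created role.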