Setting up a public containerized service in AWS in 30–45 minutes

Mike Taylor
May 31, 2022 · 9 min read

AWS makes things incredibly easy when it comes to getting something deployed. As part of our DevOps interviews, we try to make sure candidates have enough Linux/AWS experience to work through common tasks, so we came up with a “study ahead” test:

  1. Create a VPC with 3 public subnets that have internet connectivity
  2. Deploy 3 EC2 instances with your preference of linux flavor with public IP addresses and appropriate security group rules
  3. Install docker on the EC2 instances
  4. Initialize a swarm cluster with these three instances
  5. Deploy the containous/whoami image replicated across all three nodes in the swarm cluster
  6. Configure an ALB that points to these nodes
  7. Configure route53 to point a hostname of your choosing to this swarm cluster

This should not be terribly difficult if you’re familiar with AWS EC2 and Route 53 and can google a bit about Docker. Let’s walk through these 7 steps and deploy to the cloud.

Creating the VPC

Creating the VPC is probably one of the easiest steps, thanks to the AWS VPC creation wizard.

In our example we only want public subnets, spread across 3 AZs, so we’re actually simplifying the wizard’s defaults (no private subnets, no NAT gateways).

Once you select the correct settings, you should see something similar to the UI above, and can click create. You’ll be able to view your VPC after the creation completes:

That’s it for the VPC creation — it’s really that simple.
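If you’d rather script it, here’s a rough sketch of what the wizard does under the hood, using the AWS CLI. The region, AZs, and CIDR blocks are my own assumptions (the /20 carving mirrors the wizard’s defaults):

# Create the VPC (CIDRs and AZs below are illustrative)
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# An internet gateway plus a 0.0.0.0/0 route is what makes a subnet "public"
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$VPC_ID" --internet-gateway-id "$IGW_ID"

RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"

# One public subnet per AZ
AZS=(us-east-1a us-east-1b us-east-1c)
CIDRS=(10.0.0.0/20 10.0.16.0/20 10.0.32.0/20)
for i in 0 1 2; do
  SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
    --availability-zone "${AZS[$i]}" --cidr-block "${CIDRS[$i]}" \
    --query 'Subnet.SubnetId' --output text)
  aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"
done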

Deploy 3 EC2 instances in our public subnets

If you’re using the EC2 deployment wizard, you’ll need to create a key pair for your swarm instances:

Once you’ve created the key pair, you can create your instances through the launch wizard. I like Ubuntu because it’s familiar, but there are lots of great Linux flavors to choose from. The t2.micro is super cheap and will still run swarm for most simple applications, as long as you don’t get too crazy:

Rather than accept the default network settings, we’re going to edit them in the launch wizard to make sure that we DO assign a public IP to each instance on creation:

We’ll also want to open the default swarm ports:

  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • UDP port 4789 for overlay network traffic

We’re going to set the source to our VPC CIDR for simplicity, but you can and should lock these rules down further if you decide to take this more seriously. One caveat: since we’ll reuse this same security group on the load balancer later, HTTP port 80 needs to be open to the outside world as well:
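For reference, the same rules via the CLI might look like this, assuming $SG_ID is your swarm security group and 10.0.0.0/16 is your VPC CIDR:

# Swarm ports stay inside the VPC
for RULE in tcp:2377 tcp:7946 udp:7946 udp:4789; do
  aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol "${RULE%%:*}" --port "${RULE##*:}" --cidr 10.0.0.0/16
done

# HTTP is open so the internet-facing ALB we create later can accept traffic
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0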

Once you’ve created your first instance with its security group and key, you can re-use those for the next two. Just make sure you select a new subnet and auto-assign the public IP!

After a few minutes you should see three happy swarm instances:
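If you’d rather script the launches, a rough CLI equivalent is below. The AMI ID is a placeholder (look up a current Ubuntu AMI for your region), and $SG_ID and $SUBNET_ID refer to the security group and subnets created earlier:

# Create the key pair once, then launch one instance per public subnet
aws ec2 create-key-pair --key-name swarm \
  --query 'KeyMaterial' --output text > swarm.pem

aws ec2 run-instances --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.micro --key-name swarm \
  --security-group-ids "$SG_ID" --subnet-id "$SUBNET_ID" \
  --associate-public-ip-address --count 1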

Installing Docker

Thankfully, since Docker has become mainstream, you can install it with most standard package managers. We’ll need to SSH to our instances and install Docker. On Linux or macOS, you’ll first need to tighten the permissions on your SSH key:

chmod 600 swarm.pem

Then you can SSH to your instance:

ssh -i swarm.pem ubuntu@18.215.185.105

Then install docker:

sudo -i                                             # become root
curl -fsSL https://get.docker.com -o get-docker.sh  # download Docker's convenience install script
sh get-docker.sh                                    # run the installer

Rinse and repeat the SSH/install on your other two nodes, and you’re ready to set up swarm mode. For other distros, check out the Docker documentation at https://docs.docker.com/engine/install/

Setting up Docker Swarm

Don’t worry: orchestration is not as complicated to set up as it sounds. Larger and more complex DevOps teams will want the flexibility and scalability offered by more robust systems like Kubernetes, but if you’re just getting a project off the ground and don’t need those extras, swarm (which is built into the Docker engine, so there’s nothing more to install) is a great way to get started.

The setup for swarm is simple — on your manager node, initialize a swarm cluster:

root@ip-10-0-10-19:~# docker swarm init
Swarm initialized: current node (vqbk5fzvfem3xweb6peq8hekd) is now a manager.
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1zbhgqutz3ta51rky1snzlz4wh11qxemhwk8w98gygp42imms9-1hfd7t2vr9373dx6caio9mgqt 10.0.10.19:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
root@ip-10-0-10-19:~#

On your other two nodes, run the docker swarm join command that gets printed out:

root@ip-10-0-35-129:~# docker swarm join --token SWMTKN-1-1zbhgqutz3ta51rky1snzlz4wh11qxemhwk8w98gygp42imms9-1hfd7t2vr9373dx6caio9mgqt 10.0.10.19:2377
This node joined a swarm as a worker.
root@ip-10-0-35-129:~#

Once that’s complete you should be able to run docker node ls and see all three nodes:

root@ip-10-0-10-19:~# docker node ls
ID                            HOSTNAME         STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
vqbk5fzvfem3xweb6peq8hekd *   ip-10-0-10-19    Ready     Active         Leader           20.10.16
m2wiswe3i7i3nxw5di9057iy1     ip-10-0-25-83    Ready     Active                          20.10.16
jecd8405lemrevl0jimypsttr     ip-10-0-35-129   Ready     Active                          20.10.16
root@ip-10-0-10-19:~#

If you’re having trouble connecting, make sure your security group has all the correct ports opened, your instances are in the correct subnets, and public IP addresses are assigned. Verifying all of that now will be important for the next few steps to work.

Deploy the containous/whoami image

This is just a demo service. I highly recommend defining your swarm services in a compose-file format and using that for both local development and deployment (there’s a small sketch of that at the end of this post), but for today we’re just going to deploy it with a command:

The command docker service create --name echo-server --replicas 3 --publish 80:80 containous/whoami will generate an echo-server service and distribute the replicas across your three nodes. It publishes port 80 on every host via swarm’s routing mesh and forwards it to port 80 of the containers. It should look like this:

root@ip-10-0-10-19:~# docker service create --name echo-server --replicas 3 --publish 80:80 containous/whoami
ijx7de2alinjvqrktu6r2m173
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
root@ip-10-0-10-19:~#

You can also verify that the service is running appropriately with docker service ps echo-server:

root@ip-10-0-10-19:~# docker service ps echo-server
ID             NAME            IMAGE                      NODE             DESIRED STATE   CURRENT STATE           ERROR   PORTS
f293cgl3cbn8   echo-server.1   containous/whoami:latest   ip-10-0-35-129   Running         Running 2 minutes ago
3zbyzxbwd2sg   echo-server.2   containous/whoami:latest   ip-10-0-25-83    Running         Running 2 minutes ago
0abf2ao8r7pe   echo-server.3   containous/whoami:latest   ip-10-0-10-19    Running         Running 2 minutes ago
root@ip-10-0-10-19:~#
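Before wiring up the load balancer, it’s worth a quick sanity check from any of the nodes. Because published ports go through swarm’s routing mesh, port 80 on every node reaches the service:

curl http://localhost/

You should get back a plain-text dump of the container’s hostname, IP addresses, and your request headers; that’s all containous/whoami does.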

Configuring an Application Load Balancer

Application load balancing is just a fancy way of routing between servers. We’re not going to go into detail on how these work — we’re just going to get you started with one that you can use that will route to any of the three swarm nodes.

We’re going to create an application load balancer:

In the launch wizard, make sure you select Internet-facing as the Scheme, and in the network mapping, select your new VPC and the three subnets. It should look something like this:

Next, select the security group we created for our swarm cluster (remember, it needs to allow inbound HTTP on port 80, or the ALB won’t accept any traffic):

In the next section, we need to create a target group to forward to; it will contain the three EC2 instances that make up our swarm cluster.

Select the Create target group pop-out, name the group, and make sure it’s pointed at your swarm VPC — the rest of the defaults are fine for now:

Next you’ll be given the option to register targets — we’re going to select all our instances in this VPC, and then click the Include as pending below option. You should see something like this:

Once your target group is created, you should see it in the target groups:

Now we can switch back to our load balancer creation — hit the refresh button and you should see your new target group as an option:

Select the target group and then scroll to the bottom and create the load balancer. You’ll see a new load balancer provisioning for a minute or two:
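If you ever want to automate this step, a rough CLI equivalent is below. The names are mine, and the instance, subnet, and security-group IDs are placeholders for the resources created earlier:

# Target group pointing at the swarm nodes on HTTP port 80
TG_ARN=$(aws elbv2 create-target-group --name swarm-tg \
  --protocol HTTP --port 80 --vpc-id "$VPC_ID" \
  --query 'TargetGroups[0].TargetGroupArn' --output text)
aws elbv2 register-targets --target-group-arn "$TG_ARN" \
  --targets Id=i-aaaa Id=i-bbbb Id=i-cccc

# Internet-facing ALB across the three public subnets
LB_ARN=$(aws elbv2 create-load-balancer --name swarm-alb \
  --scheme internet-facing --security-groups "$SG_ID" \
  --subnets "$SUBNET_1" "$SUBNET_2" "$SUBNET_3" \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# Listener that forwards HTTP 80 to the target group
aws elbv2 create-listener --load-balancer-arn "$LB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn="$TG_ARN"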

Configure route53

Our last step is to configure Route 53. If you don’t already have a domain set up, take some time to walk through Route 53’s domain registration and find a neat domain you’d like to use; maybe you’ll get lucky and find a great one for $10 / year!

Within route53 you’ll pick your hosted zone of choice — for mine, I’m creating this in my bubtaylor.com zone:

We’re going to pick a name (I picked swarm, so it would be swarm.bubtaylor.com), choose the A record type, and flip the Alias toggle, which lets us point the record at an Application Load Balancer. Select the region this VPC was created in, then select your Application Load Balancer. It should look something like this:

After it’s created you should now see your A record in that zone:
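For the automation-minded, the CLI version of this record is an alias UPSERT. Your domain’s hosted zone ID, the record name, and the ALB DNS name below are placeholders; note that the AliasTarget hosted zone ID is not your zone’s ID but the fixed per-region value for ALBs (the one shown is for us-east-1):

aws route53 change-resource-record-sets --hosted-zone-id ZMYZONEID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "swarm.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "swarm-alb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'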

Now you should be able to go to your service and see the whoami container printing out information about itself:

Celebration Time!

Congratulations! You’ve done all the steps we said we were going to accomplish. You have:

  1. Created a VPC with 3 public subnets that have internet connectivity
  2. Deployed 3 EC2 instances with your preference of linux flavor with public IP addresses and appropriate security group rules
  3. Installed docker on the EC2 instances
  4. Initialized a swarm cluster with these three instances
  5. Deployed the containous/whoami image replicated across all three nodes in the swarm cluster
  6. Configured an ALB that points to these nodes
  7. Configured route53 to point a hostname of your choosing to this swarm cluster

Now that you’ve got all your infrastructure set up, I’d recommend learning how to add firewalls, and how to deploy docker-compose-style swarm definitions from a compose file so your local development and your deployment are identical. Swarm is not as robust as Kubernetes, but for most stateless applications (or stateful applications where you don’t have to do volume management within the swarm cluster, such as when you have an external RDS database), you’re ready to rock and roll with a containerized development and deployment environment!
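As a parting nudge in that direction, here’s a minimal sketch of the same echo-server as a stack file; the file name (docker-stack.yml) and layout are my own:

version: "3.8"
services:
  echo-server:
    image: containous/whoami
    ports:
      - "80:80"
    deploy:
      replicas: 3

Deploy it from the manager node with docker stack deploy -c docker-stack.yml echo, and you can evolve the same file for local development with docker compose.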

Enjoy!
