Deploying Your Web App To Amazon Web Services

A Complete Guide To Deploying Your Web App To Amazon Web Services

This is a comprehensive guide on how to containerize your Mongo-Express-React-Node (MERN) app with Docker and deploy it to Amazon Web Services (AWS) Elastic Container Service (ECS). I will share my research and lessons learned from deploying a MERN app, including what worked, what didn't, how I prepared the app for deployment, and how I accomplished the deployment.

There were a lot of lessons learned while figuring out how to get my app, LooseLeaf, launched. The tutorial is written from the perspective of someone with very little DevOps experience. I hope this article is a good resource for anyone trying to deploy their app to AWS.

Part 1 — Prep App for Deployment

Step 1. Optimize Build

I have an isomorphic app which leverages Webpack with code splitting.

The motivation for optimizing the production build is twofold: increase performance and decrease build time and bundle size.

When running the Webpack build script in production mode, you get the following warnings:

WARNING in asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).

This can impact web performance.

and

WARNING in entrypoint size limit: The following entrypoint(s) combined asset size exceeds the recommended limit (244 KiB). This can impact web performance.

This Google Web Fundamentals post discusses a few strategies for decreasing the size and build time of your production bundles, including using url-loader, utilizing css-loader with the minimize option, making sure you use the --production flag when building your production bundles, and a few optimizations you can include in your Webpack configuration file. These optimizations provide marginal improvements. But through my other research, it became apparent that the splitChunks optimization is a necessary one, especially if you use a lot of big node modules like React and your app is set up with code splitting.

According to the Webpack docs, code splitting has a pitfall in which common vendor code used in all your bundles is duplicated across them. This makes every bundle bigger and increases overall build time. We can remove the duplicated modules with the SplitChunksPlugin. To use this optimization, add the following to your webpack.config.js:

// webpack.config.js
module.exports = {
  // mode: …
  // entry: …
  // output: …
  optimization: {
    splitChunks: {
      cacheGroups: {
        default: false,
        vendors: false,
        // pull everything imported from node_modules into a single 'vendor' chunk
        vendor: {
          name: 'vendor',
          chunks: 'all',
          test: /node_modules/
        }
      }
    }
  }
};

Step 2. Automate Scripts

I added a few npm run scripts in order to allow different ways of spinning up the servers.

// package.json
{
  "name": "looseleaf-node",
  "private": true,
  "version": "1.0.0",
  "scripts": {
    "build:prod": "./node_modules/.bin/webpack --mode production",
    "build:dev": "./node_modules/.bin/webpack --mode development",
    "prestart": "npm run build:prod",
    "start": "node server/run-prod.js",
    "start-server": "node server/run-prod.js",
    "start-server-dev": "nodemon server/run-dev.js --watch server --watch client/iso-middleware",
    "start-client": "node server/start-client.js",
    "start-dev": "concurrently \"npm run start-server-dev\" \"npm run start-client\"",
    // ...
  },
"dependencies": {
    // …
    "concurrently": "^4.1.0",
    // ...
  }
}

Since npm prestart is always executed with npm start, we want to create a separate run script for when we just want to fire up our server without rebuilding bundles. Thus, start-server was added.

Additionally, we want a version that uses nodemon and concurrently to run our isomorphic app on one server (npm run start-server-dev) and create-react-app with hot module replacement and React Hot Loader on another (npm run start-client); that is what npm run start-dev does.

Note, start-client is defined in another file:

// start-client.js
const args = ['start'];
const opts = { stdio: 'inherit', cwd: 'client', shell: true };
require('child_process').spawn('npm', args, opts);

Step 3. Dockerize Your App

Gokce Yalcin and Jake Wright provide a great primer for Docker and ECS.

Docker is a way to manage and run containers — it is an abstraction that lets you share host resources with your application by process isolation.

The motivation for Docker is portability. Apps come with a lot of environmental configuration that works on one computer but breaks on another that does not have the right setup.

As this article puts it:

Modern DevOps practices demand the ability to quickly build servers and ship code to be run in different environments. Welcome to the world of containers: extremely lightweight, abstracted user space instances that can be easily launched on any compatible server and reliably provide a predictable experience.

For our MERN app, the environmental configuration is our Mongo database.

A lot of tutorials I found show you how to create a container for a simple app that does not depend on other images like Mongo. A Dockerfile alone was not adequate for associating the app with Mongo; the app would fail on initial launch because it could not connect to Mongo.

To dockerize our MERN app for running locally, we need to create two files: Dockerfile and docker-compose.yml.

The Dockerfile is required if you want to use Docker to containerize your app; docker-compose.yml is optional, but useful if you want to fire up the app together with Mongo in Docker locally.

Before, when I ran the server locally, I had to remember to run mongod and mongo in the command line to fire up the Mongo daemon and Mongo shell.

But with containerization, I can spin up both the app image and the mongo image it depends on, along with the port and volume mappings, in a single command.

Dockerfile:

FROM node:8.11.1
# Set image metadata
LABEL version="1.0"
LABEL description="LooseLeaf Node"
# Create app directory
WORKDIR /usr/src/app
# install dependencies
COPY package*.json ./
RUN npm cache clean --force && npm install
# copy app source to image _after_ npm install so that
# application code changes don’t bust the docker cache of 
# npm install step
COPY . .
# expose the port the app listens on inside the container (if you deploy to Elastic Beanstalk instead, it expects port 80)
EXPOSE 3001
CMD [ "npm", "run", "start" ]

docker-compose.yml

version: "2"
services:
  web:
    container_name: looseleaf-node-app
    build: .
    ports:
      - "3001:3001"
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - data-volume:/data/db
volumes:
  data-volume:

To run both our app and Mongo in containers, we run this command:

$ docker-compose up
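
A few related commands are handy while iterating locally (standard Docker Compose usage, nothing specific to this app):

$ docker-compose up -d --build   # rebuild the app image and run everything in the background
$ docker-compose logs -f web     # follow the app container's logs
$ docker-compose down            # stop and remove the containers (the named data-volume keeps the Mongo data)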

Part 2 — Deploy Docker Image to AWS ECS

The official docs from Amazon provide detailed information on how AWS ECS leverages Docker.

Amazon ECS uses Docker images in task definitions to launch containers on EC2 instances in your clusters.

Quick Primer on ECS

Amazon EC2 Container Service (Amazon ECS) is a highly-scalable, high performance container management service that supports Docker containers and allows you to run applications easily on a managed cluster of EC2 instances. The ECS service scheduler places tasks — groups of containers used for your application — onto container instances in the cluster, monitors their performance and health, and restarts failed tasks as needed. ~AWS Blog

There are four parts to ECS:

  1. Repository or Elastic Container Registry (ECR)
  2. Cluster
  3. Task Definition
  4. Service

The Repository, or Elastic Container Registry (ECR), is where you store the app image created with Docker. You can push an updated image of your app to ECR using the AWS CLI. Alternatively, if you don't want to use ECR to store your image, you could push it to Docker Hub.

A Cluster is where AWS runs containers.

The Task Definition is where you tell AWS how to create your containers. Each container is associated with an image, which is used to start the container, and can be mapped to ports and volumes.

A service is essentially a collection of containers that will run on EC2 instances (ECS container instances) and auto scale according to specified rules.

Step 1. AWS CLI and SSH Keypair

It's a good idea to be able to SSH into your EC2 instance for troubleshooting. Use the AWS CLI to create a new SSH key pair (see the docs for SSH and for connecting to your container instance).

Your private key is not stored in AWS and can only be retrieved when it is created. Therefore, we save MyKeyPair.pem locally when we create the key pair.

Generally, the correct place to put your .pem file is the .ssh folder in your user directory. The .ssh folder is hidden; to open it in Finder, open Terminal and use the open command.
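
For example, on macOS:

$ open ~/.ssh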

First we need to install AWS CLI and configure it.

Then we use the AWS CLI to create the key pair in the .ssh folder.

$ mkdir -p ~/.ssh
$ cd ~/.ssh
$ aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pem
$ chmod 400 MyKeyPair.pem

Step 2. Create ECS Cluster, Task, and Service

You could get started with AWS ECS using the official docs. I suggest following this tutorial from Node University, which walks you through (with screenshots) an actual example of deploying a Node app containerized with Mongo to ECS. This video walks you through using the Amazon ECS console to create a repository, a task definition, and a cluster. This Gist provides a step-by-step procedure for creating an ECS cluster, task definition and containers, as well as a load balancer.

In short, we need to take the following steps:

  1. Create an Elastic Container Registry (ECR).
  2. Create a repository under Amazon ECR. This is where you push your Docker images using the AWS CLI.
  3. Create a new task definition, which includes the port mappings and your two containers (Node app and Mongo). See the AWS gist on creating a task definition.
  4. Create a Cluster (specify the SSH key pair you created in the previous step). I use EC2 instance type m4.large. When prompted, create a new VPC.
  5. Create a Service.

Amazon Virtual Private Cloud (VPC) provisions a logically isolated section of AWS where you can launch AWS resources in a virtual network that you define. When you create a cluster you should create a new VPC.

An EC2 instance is automatically created by ECS and associated with the VPC. Write the VPC ID down; you are going to need it a lot later when creating the EFS, security group, and load balancer.

As a side note, for all these steps you can use Amazon's console or the AWS CLI. There's also an open source CLI called Coldbrew, downloadable from GitHub, which automates the Docker container deployment process. I couldn't figure out how its configuration file is supposed to look if I wanted to fire up the app container with Mongo. Also, Coldbrew seems to have a lot of "magic": when my deployed app failed to launch, I couldn't figure out how to troubleshoot it.

Once you've created the ECR repository, use the Docker CLI to build the image:

$ docker build -t <image-name> .

Don’t forget the “dot” at the end of the command.

Then tag the built image and use the Docker CLI to push it to our AWS ECR repository. These are two commands, which you can copy-paste into your terminal from the AWS ECS console.


Before you push, make sure to authenticate Docker to your Amazon ECR registry with get-login.

$ $(aws ecr get-login --no-include-email --region us-east-1)

Note, the $(command) expression is called command substitution, which is a shortcut that essentially tells bash to execute the standard output of command. If we simply execute the aws ecr get-login --no-include-email --region us-east-1 command, the stdout is docker login -u AWS -p <really really long hash string>. We’d have to copy-paste the whole string into the command line to login. Executing the $(aws ecr get-login --no-include-email --region us-east-1) command saves us from that extra step.

For our example, the complete sequence of commands for first and subsequent deploys is:

$ $(aws ecr get-login --no-include-email --region us-east-1)
$ docker build -t looseleaf-node .
$ docker tag looseleaf-node:latest 767822753727.dkr.ecr.us-east-1.amazonaws.com/looseleaf-node:latest
$ docker push 767822753727.dkr.ecr.us-east-1.amazonaws.com/looseleaf-node:latest

You could put all these commands in a shell script and automate the whole deployment process.

$ touch deploy.sh
$ # put all the scripts above into deploy.sh
$ chmod +x deploy.sh
$ ./deploy.sh # run this every time you want to deploy
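
For reference, a minimal deploy.sh based on the commands above might look like this (the registry URL is the example one from this guide; substitute your own region, account ID, and repository name):

#!/bin/bash
# deploy.sh: log in to ECR, rebuild the image, tag it, and push it
set -e
$(aws ecr get-login --no-include-email --region us-east-1)
docker build -t looseleaf-node .
docker tag looseleaf-node:latest 767822753727.dkr.ecr.us-east-1.amazonaws.com/looseleaf-node:latest
docker push 767822753727.dkr.ecr.us-east-1.amazonaws.com/looseleaf-node:latest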

To make sure the image is running in our container instance, go to the EC2 console and click on the container instance that was fired up by your ECS.

Let's SSH into this container instance: go to the EC2 console ➡️ select your instance ➡️ click Connect. You'll see a command like this:

ssh -i "MyKeyPair.pem" root@ec2-18-386-245-264.compute-1.amazonaws.com

If you try to run the above command, you’ll get an error:

Please login as the user “ec2-user” rather than the user “root”.

So we try this instead:

$ cd ~/.ssh
$ ssh -i "MyKeyPair.pem" ec2-user@ec2-18-386-245-264.compute-1.amazonaws.com

And it should work! You can use this to connect to the EC2 instance.

You can also check that the container instance is actually running our Docker image.
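
For example, after SSHing into the instance, list the running containers (you should see your app container, plus mongo if your task definition includes it):

[ec2-user ~]$ docker ps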

Step 3. Add SSL

The AWS Tutorial guides you through setting up your AWS Certificate Manager.

SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks.

SSL/TLS certificates provisioned through AWS Certificate Manager are free.

The AWS tutorial linked above is actually a little outdated.

Here are the steps I took:

  1. Log in to your AWS Console and go to Services ➡️ Certificate Manager.
  2. Get started under Provision certificates.
  3. Request a public certificate.
  4. For the domain names, add your domain (e.g., looseleafapp.com and *.looseleafapp.com). Click next.
  5. Validation method: DNS.
  6. On the review page, click “Confirm and request” once you are satisfied.
  7. On the validation page, use Amazon Route 53 to create the validation CNAME records for you. If everything goes well, you’ll get this message:

The DNS record was written to your Route 53 hosted zone. It can take 30 minutes or longer for the changes to propagate and for AWS to validate the domain and issue the certificate.

It didn't take 30 minutes for me; I refreshed the page and the validation status changed to Success.

Next, go to the EC2 dashboard. For the container instance, click on its Security Group and add an inbound rule with type HTTPS and port 443.
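
If you prefer the CLI, the equivalent rule looks like this (replace the group ID with your instance's security group):

$ aws ec2 authorize-security-group-ingress --group-id sg-dc025fa2 --protocol tcp --port 443 --cidr 0.0.0.0/0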

Step 4. Create Load Balancer

The Network Load Balancer is the best option for managing secure traffic as it provides support for TCP traffic pass through, without decrypting and then re-encrypting the traffic. ~AWS Compute Blog

I followed this video tutorial to create a classic load balancer.

You can also check out the AWS official tutorial on how to create a Classic Load Balancer with an HTTPS listener.

Make sure your load balancer settings are as follows (a rough CLI equivalent is sketched after the list):

  • VPC ID: the VPC ID for your ECS container
  • Scheme: internet-facing
  • Listeners: (1) load balancer port HTTPS 443 → instance port HTTP 80 and (2) load balancer port HTTP 80 → instance port HTTP 80
  • Health Check: Ping Target HTTP:80/<filename> where <filename> is a file that your website serves from the root. For my site, it’s index.css.
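
If you'd rather script the same setup, this is roughly what it looks like with the Classic Load Balancer CLI (the load balancer name, subnet, security group, and certificate ARN are placeholders; the console wizard is easier the first time):

$ aws elb create-load-balancer --load-balancer-name looseleaf-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
                "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=<acm-certificate-arn>" \
    --subnets <subnet-id> --security-groups <security-group-id>
$ aws elb configure-health-check --load-balancer-name looseleaf-lb \
    --health-check Target=HTTP:80/index.css,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2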

Step 5. Link to Domain Name

Use AWS Route 53 to associate your container instance with a domain name.

I purchased my domain name from Google Domains. I got on chat with a customer service representative from Google Domains who walked me through how to set up the records for AWS Route 53 with:

  • CNAME (canonical name)
  • MX (mail exchange if you use G Suite)
  • NS (Name server)
  • SOA (Start of authority)

After you've done all that, create a record, select Type A - IPv4 address, select Yes for Alias, and select the load balancer as the Alias Target. This gives you https://yourdomain.com and forwards http://yourdomain.com to https://yourdomain.com.

Optional: You may be able to create another Alias for www.yourdomain.com to forward to https://yourdomain.com (I don’t know how to do that yet).

Part 3 — Persisting Data

Following the tutorial from Node University helped me get the app up and running but there was a big problem with the setup: whenever the server goes down for whatever reason, all the data is lost.

By default, the Amazon ECS-optimized AMI ships with 30 GiB of total storage. You can modify this value at launch time to increase or decrease the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata.

The docker info command also provides information about how much data space it is using, and how much data space is available.

SSH into your instance and run this command:

[ec2-user ~]$ docker info | grep "Data Space"

Output for a m4.large:

Data Space Used: 3.326GB
Data Space Total: 23.33GB
Data Space Available: 20GB

This might seem like plenty of storage, but it is ephemeral, i.e., temporary in nature.

If you run MongoDB in a container, the data lives on the instance's ephemeral disk. This means the data stored in the container is gone the moment the container is deleted or the task is restarted.

Per the AWS Blog:

Using task definitions, you can define the properties of the containers you want to run together and configure containers to store data on the underlying ECS container instance that your task is running on. Because tasks can be restarted on any ECS container instance in the cluster, you need to consider whether the data is temporary or needs to persist.

If your container needs access to the original data each time it starts, you require a file system that your containers can connect to regardless of which instance they’re running on. That’s where Elastic File System (EFS) comes in.

AWS EFS is a storage service that can be used to persist data to disk or share it among multiple containers; for example, when you are running MongoDB in a Docker container, capturing application logs, or simply using it as temporary scratch space to process data.

EFS allows you to persist data onto a durable shared file system that all of the ECS container instances in the ECS cluster can use.

EFS Primer

EFS is one of three main cloud storage solutions offered by AWS and is a relatively new service compared to S3 and EBS. Like S3, EFS grows with your storage needs; you don't have to provision the storage up front. Like EBS, EFS can be attached to an EC2 instance, but EFS can be attached to multiple EC2 instances at once while EBS can only be attached to one. Amazon provides a nice comparison table for the three services and a pretty good summary of what each option is good for.

See this article for a discussion on when to use what.

In general, EFS is ideal if your web app is set up as microservices deployed to ECS in Docker containers. EFS is a fully managed file storage solution that can provide persistent shared access to data that all containers in a cluster can use. EFS is container friendly. If you need a network filesystem solution that can allow multiple EC2 instances to access the same data at the same time, use EFS.

What we want is containers that get access to the original data each time they start. The original data comes from EFS.

It’s worth noting some important constraints of EFS:

  • Only available for Linux EC2. Fargate or Windows EC2 not supported.
  • The EC2 Instance must be in the same subnet as the EFS Mount Target.

Set Up EFS With Your Containers

I followed the gist in combination with this AWS Compute blog post and the AWS official doc for ECS + EFS to do the following:

Step 1. Create a KMS Encryption Key

Step 2. Create a security group for the EFS filesystem

Make sure the security group allows 2049 inbound with the source being the EC2’s security group.

Name the security group EFS-access-for-sg-dc025fa2, replacing "sg-dc025fa2" with the security group ID of your EC2 instance.
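
A CLI sketch of the same thing (the VPC ID and the returned group ID are placeholders; note that the source of the NFS rule is the EC2 instance's security group, not a CIDR):

$ aws ec2 create-security-group --group-name EFS-access-for-sg-dc025fa2 \
    --description "NFS access to EFS from ECS container instances" --vpc-id <vpc-id>
$ aws ec2 authorize-security-group-ingress --group-id <new-efs-sg-id> \
    --protocol tcp --port 2049 --source-group sg-dc025fa2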

Step 3. Create a new EFS filesystem

  • Enable encryption
  • Use the VPC ID of your ECS cluster.
  • Use the security group ID created in the previous step for the mount target security group.

Step 4. Update the ECS cluster's CloudFormation template

Follow the instructions in the gist.

Step 5. Update the Task Definition:

  • Add a volume to the task definition:
  • volume name: efs
  • source path: /efs/<yourDatabase>

Update the mongo container’s mount point to include:

  • container path: /data/db
  • source volume: efs

Both mappings are needed because we want the container to access the EFS file system that is mounted on the host instance.

Step 6. Update service in ECS to use the updated Task Definition.

Step 7. Scale instances down to 0, then to 1 again.

This terminates the existing EC2 instance and then spins up a new EC2 instance using the new Task Definition and CloudFormation script. EFS mounting should be automatically done by the start script in CloudFormation. The Task Definition volumes mapping ensures the EFS mount target is hooked up to the mongo container.
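
You can do this from the ECS console, or with the CLI against the Auto Scaling group that the cluster's CloudFormation stack created (the group name is a placeholder):

$ aws autoscaling set-desired-capacity --auto-scaling-group-name <your-ecs-asg> --desired-capacity 0
$ # wait for the old instance to terminate, then bring a new one up
$ aws autoscaling set-desired-capacity --auto-scaling-group-name <your-ecs-asg> --desired-capacity 1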

Step 8. Verify EFS is properly mounted

Before you restart your instance and rely on the automated script in the CloudFormation template, it's a good idea to SSH into your instance and make sure you can do everything manually.
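
A manual check looks roughly like this (the file system ID and region are placeholders; the mount options are the ones AWS documents for EFS over NFSv4.1):

[ec2-user ~]$ sudo mkdir -p /efs
[ec2-user ~]$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    <file-system-id>.efs.us-east-1.amazonaws.com:/ /efs
[ec2-user ~]$ df -T | grep nfs4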

After you restart the EC2 instance with the updated task definition, SSH into the instance and make sure the file system volume is mounted and mapped to the mongo container's volume.

Run the following command.

[ec2-user ~]$ cat /etc/fstab

If the last line of the output shows your EFS file system (an entry of the form file-system-id.efs.aws-region.amazonaws.com:/ mounted at /efs), that's a good sign.

Then run this command:

[ec2-user ~]$ cd /efs/<yourDatabase> && ls

The output of the above command should be a bunch of .wt files and some other MongoDB files.

Next, verify that the volume is mapped correctly to the mongo container’s volume:

[ec2-user ~]$ docker ps
[ec2-user ~]$ docker exec -it <containerID> /bin/bash
root@mongo:/$ cd data/db
root@mongo:/data/db$ ls

The output should be the same files you see in the /efs/<yourDatabase> directory.

Try writing to the directory:

root@mongo:/data/db$ echo 'hello' >> test
root@mongo:/data/db$ cat test
hello
root@mongo:/data/db$ exit
[ec2-user ~]$ cd /efs/<yourDatabase> && cat test
hello

This proves that the two volumes are syncing.

To verify the mount status of the Amazon EFS file system on the host, execute the following command:

[ec2-user ~]$ df -T

The output should include a line whose filesystem is the EFS DNS name (efs-dns), which has the following form:

file-system-id.efs.aws-region.amazonaws.com:/

If you see efs-dns in the output, that shows the file system is properly mounted.

Alternatively, you can check mount status using the following commands:

[ec2-user ~]$ df -h | egrep '^Filesystem|efs'
[ec2-user ~]$ mount | grep efs

Update Your App

To update your application, you need to update the task definition and then update the service to use it. AWS recommends registering a new task definition revision and pointing the service at it. The easiest way to do this is as follows (a CLI alternative is sketched after the list):

  1. Use the deploy script discussed earlier in this guide to push an updated image to ECR.
  2. Navigate to Task Definitions, select the latest task definition, and choose Create new revision.
  3. Update the Service to use the latest task revision.
  4. Scale your cluster up to 2*n instances. You will see that a new running task is created using the latest revision.
  5. Wait for the service to restart on the new revision.
  6. Scale your cluster back down to n instances.
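
Roughly the same flow with the AWS CLI, if you prefer to script it (the cluster, service, and task family names are placeholders; the revision number comes from the output of register-task-definition):

$ aws ecs register-task-definition --cli-input-json file://task-definition.json
$ aws ecs update-service --cluster <your-cluster> --service <your-service> --task-definition <task-family>:<new-revision>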

If you need to change your instance type:

  1. Click on the Stack corresponding to your ECS-Cluster.
  2. Click “Update Stack”
  3. Use current template, Next
  4. Change EcsInstanceType to your preferred instance type
  5. Next, Next, Update
  6. Scale up your cluster to 2*n instances
  7. Wait for the n new instances of the new type to be created
  8. Scale down your cluster to n

Thanks for reading!

This concludes the complete tutorial on how to deploy a MERN app to AWS. This article was originally published on my blog, which includes an additional Gotcha section.

If you enjoyed this post and want to see more like it, please check out my other posts on web development.
