Building a NodeJS App with MongoDB Atlas and AWS Elastic Container Service – Part 2 (Sponsored)

It’s that time of year again! This post is part of our Road to AWS re:Invent 2017 blog series. In the weeks leading up to AWS re:Invent in Las Vegas this November, we’ll be posting about a number of topics related to running MongoDB in the public cloud. See all posts here.

In my last post, we started preparing an application built on Node.js and MongoDB Atlas for simple CRUD operations. We’ve completed the initial configuration of the code and are now ready to launch this into production.

As mentioned in part one, I want to minimize the long-term maintenance of this app’s hosting environment. Much like we used MongoDB Atlas to offload many of the operational responsibilities for our database, we can make use of Amazon EC2 Container Service to deploy our Docker apps on AWS. By reducing the amount of patching, systems maintenance, and long term security concerns for both our database and our application front-end, we’re able to dedicate more time to application development.

Readying Docker and coldbrew-cli for deployment

Docker and coldbrew-cli each use a simple configuration file; we'll review and write both in our repo's root directory.

Docker

Let’s take a look at the Dockerfile:

FROM node:boron
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start"]

This file tells Docker to use the "boron" LTS release of Node.js as the base image, meaning we won't need to install Node.js manually to work with our application. After establishing the appropriate version of Node.js, we make a working directory on the container where the app will live, copy the app into that directory ("/usr/src/app"), expose port 3000, and finally start the app with the npm start command defined in our package.json file.

Place the contents of the Dockerfile in the root of the code repository and save it.

echo 'FROM node:boron

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY . /usr/src/app

EXPOSE 3000

CMD ["npm", "start"]' > Dockerfile
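The CMD ["npm", "start"] instruction assumes package.json defines a start script. In a typical Express app it looks something like the fragment below; the actual entry-point file in the mern-crud repo may be named differently:

```json
{
  "scripts": {
    "start": "node server.js"
  }
}
```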

Next, we’ll start working with coldbrew-cli. Much like Docker, we’ll create a plain text config file that will contain basic instructions on how to configure our infrastructure for our app.

coldbrew-cli

Let’s create a file called coldbrew.conf in the root of our repository directory and then store the following contents in it:

touch coldbrew.conf

name: mern-demo
cluster: mern-demo
port: 3000
units: 2
cpu: 1.0
memory: 500m
load_balancer:
  enabled: true
  health_check:
    path: /
    status: 200

For a full breakdown of all the terms in this file, have a look at the coldbrew-cli docs. This simple configuration file runs two units of our app, each with 1.0 CPU and 500 MB of memory, behind Elastic Load Balancing (ELB). We can even configure a simple health check for our ELB to ensure our instances are online.

We can now create our environment with coldbrew-cli. For the purposes of this walkthrough, let’s say we want to deploy our ECS cluster in the us-east-2 region and that we don’t need to access our nodes via SSH; we can launch using the following command:

$ coldbrew --aws-region="us-east-2" cluster-create mern-demo --disable-keypair

Once we execute this, we'll be shown a list of resources that will be created:

Determining AWS resources to create...
  ECS Cluster: coldbrew-mern-demo
  IAM Role for ECS Services: coldbrew-mern-demo-ecs-service-role
  EC2 Launch Configuration for ECS Container Instances: coldbrew-mern-demo-lc
  EC2 Auto Scaling Group for ECS Container Instances: coldbrew-mern-demo-asg
  IAM Instance Profile for ECS Container Instances: coldbrew-mern-demo-instance-profile
  EC2 Security Group for ECS Container Instances: coldbrew-mern-demo-instance-sg
Do you want to create these resources? [y/N]

Answer “Yes”, and your resources will start building in the background.

[+] Creating IAM Instance Profile [coldbrew-mern-demo-instance-profile]
[+] Creating EC2 Security Group [coldbrew-mern-demo-instance-sg]
[*] Adding inbound rule [tcp:22:0.0.0.0/0] to EC2 Security Group [coldbrew-mern-demo-instance-sg]
[+] Creating EC2 Launch Configuration [coldbrew-mern-demo-lc] (this may take long)
[+] Creating EC2 Auto Scaling Group [coldbrew-mern-demo-asg] (this may take long)
[+] Creating ECS Cluster [coldbrew-mern-demo]
[+] Creating IAM Role [coldbrew-mern-demo-ecs-service-role]

We can query the status with the following command:

$ coldbrew --aws-region="us-east-2" cluster-status mern-demo

The output will look similar to this:

Cluster Name: mern-demo
AWS Region: us-east-2
VPC: vpc-7935db10
Subnets: subnet-70f0df3a subnet-58e01531 subnet-49d4db31
ECS
  ECS Cluster: coldbrew-mern-demo
  IAM Role for ECS Services: coldbrew-mern-demo-ecs-service-role
  ECS Services: 0
  ECS Tasks (running/pending): 0/0
  ECS Container Instances: 1
Auto Scaling
  EC2 Launch Configuration: coldbrew-mern-demo-lc
  IAM Instance Profile: coldbrew-mern-demo-instance-profile
  Instance Type: t2.micro
  Image ID: ami-bd3e64d8
  Key Pair:
  Security Groups: coldbrew-mern-demo-instance-sg
  EC2 Auto Scaling Group: coldbrew-mern-demo-asg
  Instances (current/desired/min/max): 1/1/0/1
ECS Container Instance
  ID: a56d40d1-7095-45a0-af81-5f309bbbd728
  Status: ACTIVE
  Tasks (running/pending): 0/0
  CPU (remaining/registered): 1.00/1.00
  Memory (remaining/registered): 995M/995M
  EC2 Instance ID: i-03fdc038f3d1c71c8
  Private IP: 172.31.44.140
  Public IP: 18.221.72.130

Wow, that’s a lot of saved work. Everything from our VPC all the way to the ELB was created for us. See the public IP for our compute instance? Let’s make sure it’s whitelisted in our Atlas cluster so our data can be saved.

If you were to replace the M0 free cluster with one of Atlas's dedicated clusters, you'd have access to our VPC peering module for AWS, letting you whitelist the entire range of host servers with a single security group entry.
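Once the IP is whitelisted, the app running on ECS can reach Atlas with its standard connection string. A minimal sketch, assuming the string is supplied via a MONGODB_URI environment variable (the variable name and helper are illustrative; the mern-crud app may read its config differently):

```javascript
// Read and sanity-check the Atlas connection string from the environment,
// so credentials never end up baked into the Docker image.
function getMongoUri() {
  const uri = process.env.MONGODB_URI;
  if (!uri) {
    throw new Error('MONGODB_URI is not set');
  }
  // Atlas connection strings use the mongodb:// or mongodb+srv:// scheme.
  if (!/^mongodb(\+srv)?:\/\//.test(uri)) {
    throw new Error('MONGODB_URI must start with mongodb:// or mongodb+srv://');
  }
  return uri;
}
```

The app would then connect with something like `mongoose.connect(getMongoUri())` at startup.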

Now it’s time to build our Docker image with coldbrew-cli and deploy our app.

First, a Docker image is built:

$ coldbrew --aws-region="us-east-2" deploy
[*] Checking cluster availability [mern-demo]
[+] Creating ECR Repository [coldbrew/mern-demo]
[*] Building Docker image [722245653955.dkr.ecr.us-east-2.amazonaws.com/coldbrew/mern-demo:latest] (this may take long)
  > docker build -t 722245653955.dkr.ecr.us-east-2.amazonaws.com/coldbrew/mern-demo:latest -f /Users/jaygordon/work/mern-crud/Dockerfile /Users/jaygordon/work/mern-crud

Then the image is pushed to the ECR repository, and the remaining load balancer and service resources are created:

[*] Pushing Docker image [722245653955.dkr.ecr.us-east-2.amazonaws.com/coldbrew/mern-demo:latest] (this may take long)
  > docker push 722245653955.dkr.ecr.us-east-2.amazonaws.com/coldbrew/mern-demo:latest
The push refers to a repository [722245653955.dkr.ecr.us-east-2.amazonaws.com/coldbrew/mern-demo]
1685d46cb: Pushed
e71eccb6eee4: Pushed
b7f1d9d858aa: Pushed
246ae56dbdbd: Pushed
e271ac6d0c18: Pushed
682e7cee9d37: Pushed
d359ab38b013: Pushed
latest: digest: sha256:c58153d1fe62dacb1644966ffe4acca6b76cb383aee1f76e0efd97ceaa1a306e size: 2425
[*] Creating ECS Task Definition [mern-demo]
[+] Creating ELB Target Group [mern-demo-elb-tg]
[+] Creating EC2 Security Group [mern-demo-elb-sg]
[*] Adding inbound rule [tcp:80:0.0.0.0/0] to EC2 Security Group [mern-demo-elb-sg]
[*] Adding inbound rule [tcp:0:sg-f8ab7190] to EC2 Security Group [coldbrew-mern-demo-instance-sg]
[+] Creating ELB Load Balancer [mern-demo-elb]
[+] Adding listener (HTTP) for ELB Load Balancer [mern-demo-elb]
[+] Creating ECS Service [mern-demo]
Application deployment completed.

Our app is now deployed. Let’s get the ELB and verify:

$ coldbrew --aws-region="us-east-2" status | egrep elb
ELB Target Group: mern-demo-elb-tg
ELB Load Balancer: mern-demo-elb
Endpoint: http://mern-demo-elb-2131866240.us-east-2.elb.amazonaws.com:80

The ELB now provides us with an HTTP endpoint for accessing our app. There's no need to use nginx or any other HTTP server as a reverse proxy for the Node.js port; the coldbrew-cli deploy process sets up port forwarding based on the information in the coldbrew.conf file. Cluster creation and deployment take about five minutes in total. To confirm we're online and running, simply go to the URL provided by coldbrew and add a record to our app (the first load may take a minute).

Congratulations, we now have all the tools to build and configure our own ECS cluster backed by MongoDB Atlas! Destroying the app is simple as well; just run these two commands to terminate all the resources associated with ECS.

$ coldbrew --aws-region="us-east-2" delete
Determining AWS resources that need to be deleted...
  ECS Service: mern-demo
  ECR Repository: coldbrew/mern-demo
  ELB Target Group: mern-demo-elb-tg
  ELB Load Balancer: mern-demo-elb
  EC2 Security Group for ELB Load Balancer: mern-demo-elb-sg
Do you want to delete these resources? [y/N]

After we answer “yes”, the created resources will begin terminating:

[*] Updating ECS Service to stop all tasks [mern-demo]
[-] Deleting ELB Load Balancer [mern-demo-elb]
[-] Deleting ELB Target Group [mern-demo-elb-tg] (this may take long)
[-] Removing inbound rule [tcp:0:sg-f8ab7190] from EC2 Security Group [coldbrew-mern-demo-instance-sg]
[-] Deleting EC2 Security Group for ELB Load Balancer [mern-demo-elb-sg] (this may take long)
[-] Deleting ECR Repository [coldbrew/mern-demo]
[-] Deleting (and draining) ECS Service [mern-demo] (this may take long)

And now delete the remaining cluster elements:

$ coldbrew --aws-region="us-east-2" cluster-delete mern-demo
Determining AWS resources that need to be deleted...
  ECS Cluster: coldbrew-mern-demo
  IAM Role for ECS Services: coldbrew-mern-demo-ecs-service-role
  EC2 Launch Configuration for ECS Container Instances: coldbrew-mern-demo-lc
  EC2 Auto Scaling Group for ECS Container Instances: coldbrew-mern-demo-asg
  IAM Instance Profile for ECS Container Instances: coldbrew-mern-demo-instance-profile
  EC2 Security Group for ECS Container Instances: coldbrew-mern-demo-instance-sg
Do you want to delete these resources? [y/N]

Answer “yes” and we’ll see the remaining compute cluster elements terminated:

[*] Terminating instances in EC2 Auto Scaling Group [coldbrew-mern-demo-asg] (this may take long)
[-] Deleting EC2 Auto Scaling Group [coldbrew-mern-demo-asg] (this may take long)
[-] Deleting EC2 Launch Configuration [coldbrew-mern-demo-lc]
[-] Deleting IAM Instance Profile [coldbrew-mern-demo-instance-profile]
[-] Deleting EC2 Security Group [coldbrew-mern-demo-instance-sg]
[-] Deleting ECS Cluster [coldbrew-mern-demo]
[-] Deleting IAM Role [coldbrew-mern-demo-ecs-service-role]

What’s next?

Try this process out with your own application or even with a dedicated MongoDB Atlas cluster to enable VPC peering. You can get a full tutorial on how to configure VPC Peering in MongoDB Atlas by viewing the “Peering your MongoDB Atlas Cluster to AWS” video. You’ll be able to use a completely service-based deployment of your application that requires no operating systems to be managed, no kernels to update, and less overall manual work.

If you’d like to sign up for a free MongoDB Atlas cluster, check out our signup page here!
