System Design: Create a URL shortening service (Part 4): Deploy to AWS

Abhinav Dhasmana
5 min read · Mar 29, 2018


This is part of a blog series where we design, develop, optimize, deploy and test a URL shortener service from scratch.

Overall architecture

Our tiny service is now ready to be deployed. The overall architecture looks like this:

Overall design of the tinyURL service

We’ll go for horizontal scaling instead of vertical scaling. For this small app, we will not use read replicas or a master-slave configuration.

We’ll use AWS to set up our boxes: a t2.medium instance running Ubuntu for the Node servers, and t2.micro instances for Postgres and Redis.

Installation steps

Redis: Start an EC2 instance with an Ubuntu image. SSH into the box and install Redis.

sudo apt-get update
sudo apt-get install redis-server

Now the Redis server is running but does not accept incoming connections from other IPs. Let's edit the conf file with sudo vi /etc/redis/redis.conf, change the bind setting to bind 0.0.0.0, and restart the Redis server with sudo /etc/init.d/redis-server restart.
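In short (a minimal sketch, assuming the default Ubuntu package layout):

# /etc/redis/redis.conf: accept connections on all interfaces
bind 0.0.0.0

# restart Redis so the change takes effect
sudo /etc/init.d/redis-server restart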

Postgres: Start an EC2 instance with an Ubuntu image. SSH into the box and install Postgres.

sudo apt-get update
sudo apt-get install postgresql postgresql-contrib

Create a role in Postgres that our app will use to connect from the Node server. Make sure to change config/config.json in the Node code so that the app can connect to the database.
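For example, the role could be created like this (a sketch; the role name and password are the ones I use in config/config.json below, and CREATEDB is needed because we run sequelize db:create later):

# open psql as the postgres superuser
sudo -u postgres psql

-- inside psql: create a login role matching config/config.json
CREATE ROLE abhinavdhasmana WITH LOGIN PASSWORD 'test' CREATEDB;
\q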

Like Redis, change the config to accept incoming traffic. Edit sudo vi /etc/postgresql/9.5/main/postgresql.conf, change listen_addresses to listen_addresses = '*', and restart.
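Concretely, something like this (a sketch; besides listen_addresses, Postgres usually also needs a pg_hba.conf entry before it accepts password logins from other hosts):

# /etc/postgresql/9.5/main/postgresql.conf: listen on all interfaces
listen_addresses = '*'

# /etc/postgresql/9.5/main/pg_hba.conf: allow password auth from other hosts
# (ideally restrict 0.0.0.0/0 to the subnet of the Node servers)
host    all    all    0.0.0.0/0    md5

# restart Postgres so the changes take effect
sudo /etc/init.d/postgresql restart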

P.S.: This is not the best practice, as we are opening our servers to connections from any IP. Ideally only the subset of IPs on which our Node app runs should be allowed.

Node server: Start an EC2 instance with an Ubuntu image. SSH into the box and follow the steps below.

  1. git clone https://github.com/abhinavdhasmana/tinyUrl.git
  2. Ensure that config/config.json matches the host, username, and password of the Postgres server. For example, my config looks like this:
{
  "development": {
    "username": "abhinavdhasmana",
    "password": "test",
    "database": "tinyurl_development",
    "host": "172.31.17.70",
    "dialect": "postgres",
    "pool": {
      "max": 20
    }
  },

I am using the private IP of the box. The public IP would work as well, but prefer private over public, as restarts change the public IP but not the private one.

3. Change the Redis config in src/server.js to point to the right IP. Again, this is the private IP:

const redisClient = redis.createClient({host: '172.31.17.28'});

We could also create a redis.yml and define different connection settings for different environments.

4. Next, let's set up our database with the help of Sequelize and seed the data:

./node_modules/.bin/sequelize db:create

./node_modules/.bin/sequelize db:seed:all

If the above commands work, we know our database connection is working fine.

5. Run npm start and our server is up and running.
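As a quick sanity check (assuming the server listens on port 8080, as in the load balancer setup below, and that /ping is the health-check route):

# hit the health-check route on the box itself
curl http://localhost:8080/ping
# expected response: pong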

Create load balancer

Create load balancer on AWS

Select all the subnets. In the above image I have left out us-east-2c, so make sure your instances are launched in us-east-2a or us-east-2b only.

Next step: the security group. All we need is port 80 open from anywhere.

Security group configuration for load balancer

Next step: create a new target group. We will come back to this when we set up auto scaling. The Node server is running on port 8080, hence it's 8080 here. We also have a /ping route for our health check, which I have entered.

Routing configuration for the load balancer

Once we have a target group, any EC2 instance launched into that group will start receiving requests from the load balancer as soon as it passes the healthy threshold.
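For reference, roughly the same target group and load balancer can be created with the AWS CLI (a sketch; the VPC, subnet, and security group IDs are placeholders):

# target group pointing at port 8080 with /ping as the health check
aws elbv2 create-target-group \
  --name tinyUrl-targets \
  --protocol HTTP --port 8080 \
  --vpc-id vpc-xxxxxxxx \
  --health-check-path /ping

# internet-facing load balancer in the selected subnets
# (a listener on port 80 forwarding to the target group is also needed: aws elbv2 create-listener)
aws elbv2 create-load-balancer \
  --name tinyUrlLoadBalancer \
  --subnets subnet-aaaaaaaa subnet-bbbbbbbb \
  --security-groups sg-xxxxxxxx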

Create auto scaling groups

Launch an EC2 instance and, in step 3 of the configuration, click Launch into Auto Scaling Group.

Setting up auto scaling group in AWS

In the Create Launch Configuration screen, fill in the name and add the script below inside User data. This will get our server up and running as part of the boot process. Note that this script runs only once per instance, not on every restart. If you want to run it on every restart, delete config_scripts_user from the location below:

sudo rm /var/lib/cloud/instances/*/sem/config_scripts_user

Installing the source code as the ubuntu user via user data

We need to install nvm and the app as the ubuntu user, so the above script creates a file and runs it as the ubuntu user.
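The exact script is in the image above; a minimal sketch of the idea (the nvm/Node versions and paths here are assumptions) looks like this:

#!/bin/bash
# user data runs as root, so write a setup script and run it as the ubuntu user
cat > /home/ubuntu/setup.sh << 'EOF'
#!/bin/bash
# install nvm and a Node version (version assumed)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"
nvm install 8

# fetch the app, install dependencies, and start the server
git clone https://github.com/abhinavdhasmana/tinyUrl.git /home/ubuntu/tinyUrl
cd /home/ubuntu/tinyUrl
npm install
npm start &
EOF

chown ubuntu:ubuntu /home/ubuntu/setup.sh
chmod +x /home/ubuntu/setup.sh
sudo -i -u ubuntu /home/ubuntu/setup.sh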

Go through the flow and complete the configuration. Once done, it will redirect to the creation of the auto scaling group.

Fill in the details as below. Make sure to fill in the Target Group under Advanced Details.

AWS load balancer settings

Next, we need to decide when EC2 instances should be added and removed. This is pretty simple in AWS. In the image below, we want the system to add a new machine when average CPU utilization goes beyond 70%.

Scale policy for AWS EC2 instance
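The screenshots use the console; roughly the same idea can be expressed with the AWS CLI as a target tracking policy (a sketch; the group and policy names are placeholders):

# keep average CPU utilization of the group around 70%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name tinyUrl-asg \
  --policy-name cpu-70 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
    "TargetValue": 70.0
  }'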

Go through the flow to complete this and we have created an auto scaling group.

Give it some time and a new instance will be up and running, which should be visible in the EC2 console.

That's it. Now all we need is the DNS name of the load balancer and we are good to go. In my case it's tinuUrlLoadBalancer-269014099.us-east-2.elb.amazonaws.com, so I hit http://tinuurlloadbalancer-269014099.us-east-2.elb.amazonaws.com/ping and get pong as a response.
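Or from the command line:

curl http://tinuurlloadbalancer-269014099.us-east-2.elb.amazonaws.com/ping
# pong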

If you found this story interesting or useful, please support it by clapping 👏.
