How to get your server up and running on HTTPS (using Nginx) in 10 minutes

Abhishek Raj
10 min read · Aug 8, 2020

Nginx is an open-source, high-performance web server that can also be used as a reverse proxy, and today it serves a large share of the world’s busiest websites. In this article, I will talk about how to set up Nginx on an Ubuntu machine, how to edit its configuration file to use the reverse-proxy feature (and serve your APIs or website), and lastly how to enable SSL with a certificate obtained (for free!) from a public certificate authority. By the end of this article, you will be able to host your API services or websites securely using Nginx.

Before diving into the setup, let me first explain what we actually mean by terms like a web server and reverse proxy, and how Nginx is different from Apache at its core. (If you are already aware of these and the basics of Nginx, you can skip to the section where I talk about building Nginx and enabling SSL).

A web server is simply software (running on hardware) that follows a protocol, most commonly HTTP, to serve requests made by clients on the world wide web. Apache and Nginx are two of the most popular web servers.

Apache is traditionally configured to work in a prefork mode: it keeps a preconfigured number of processes, and each process handles one request at a time, in a synchronous, blocking fashion, irrespective of the type of request. Once all of those processes are busy, Apache cannot accept any more incoming requests until one of them frees up.

Every process in an Apache server handles one request at a time.

In contrast to this, Nginx works asynchronously, meaning a single Nginx process can handle many concurrent requests. Because of this asynchronous design, Nginx can’t embed server-side programming languages (the way Apache embeds PHP) into its own processes. Any request for dynamic content therefore has to be served by a completely separate, independently running process (for example a Node or PHP server) and then reverse-proxied through Nginx.

Not running an embedded interpreter is actually a huge advantage, and one of the reasons Nginx consumes so few resources. With Apache, every process carries an embedded PHP interpreter, so even while serving static content the server pays that overhead, which leads to high resource consumption. With Nginx, if a static file is requested, it can be served directly without involving any server-side language. Nginx is also faster than Apache because of its asynchronous nature (for example, a time-consuming request will not block another request for a static file, which can be served immediately) and can therefore serve a larger number of clients.

A single Nginx process can handle multiple requests concurrently. Dynamic content is served through independently running processes and then reverse-proxied back to the client through Nginx in an asynchronous manner.
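This event-driven model shows up directly in Nginx’s configuration: rather than sizing a pool of per-request processes, you set a handful of worker processes and the number of connections each worker may keep open. The directives below are standard Nginx configuration, but the values are purely illustrative:

worker_processes auto;

events {
    worker_connections 1024;
}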

Now that we know the basics of Nginx, let us see how we can set it up on a Linux machine.

Building Nginx from source

For the demonstration, I’ll be using an EC2 machine (with Ubuntu 18.04 installed) on AWS. I have a public IP attached to the instance, and currently, if I try to access it from any web browser, it shows a “Site can’t be reached” error. This is expected as we don’t have a web server running on the machine, so there is no way to handle a request sent by the browser (client).

With no web server, our instance cannot handle requests made by our browser.

First, let us download the latest source code of Nginx from its official website and uncompress it.

wget https://nginx.org/download/nginx-1.19.0.tar.gz
tar -xzf nginx-1.19.0.tar.gz

Next, we need to install some libraries.

sudo apt-get update
sudo apt-get install libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev

Next, we will step into the Nginx folder and build the source code. We will first run the configure script with some common custom flags (Check https://nginx.org/en/docs/configure.html for the exhaustive list) and some custom modules (Check the Module References section on https://nginx.org/en/docs/).

sudo apt-get install gcc
cd nginx-1.19.0
./configure --sbin-path=/usr/bin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-pcre --pid-path=/var/run/nginx.pid --with-http_ssl_module

Now, we will compile the source code using the make command. If make is not found, we need to install it first.

sudo apt install make
make

This might take a while. Once that’s done, we need to install the compiled source.

sudo make install

After the installation completes, we can run nginx -v to check if everything was completed successfully. Now, we just need to start Nginx.

sudo nginx

When we go to the browser and reload the page, we should see Nginx’s default page being served.

The default “Welcome to Nginx” page being served on port 80 of the machine.

To stop the server, we can use:

sudo nginx -s stop
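Besides stopping the server, the nginx binary accepts a few other handy invocations for testing and reloading the configuration (these are standard Nginx command-line options):

sudo nginx -t          # test the configuration file for syntax errors
sudo nginx -s reload   # re-read the configuration without dropping connections
sudo nginx -s quit     # graceful shutdown, waits for workers to finish in-flight requests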

Running Nginx as a service

Running Nginx as a systemd service helps us start, stop, restart and reload the server in a more standard way. It also helps us ensure that the server is automatically started on system boot. We will use an example from the official Nginx documentation to set up the required script (Check out https://www.nginx.com/resources/wiki/start/topics/examples/systemd/)

Let’s start by creating the file where we will add the script.

sudo touch /lib/systemd/system/nginx.service

Next, we need to edit this file and add the following:

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/bin/nginx -t
ExecStart=/usr/bin/nginx
ExecReload=/usr/bin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
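After saving the unit file, it is worth asking systemd to pick it up (on many setups this happens automatically, but running it explicitly doesn’t hurt):

sudo systemctl daemon-reload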

That’s all there is for configuring the service. Let us now start Nginx as a systemd process.

sudo systemctl start nginx

We can also check the status of the running process.

sudo systemctl status nginx

It should show us that the server is in an active state, with a master and a worker process running. If we check back in the browser and reload, we should see the default landing page.
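If you want to double-check the processes themselves, a quick ps should show one “nginx: master process” line and one or more “nginx: worker process” lines:

ps aux | grep nginx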

To enable starting Nginx automatically on system reboot:

sudo systemctl enable nginx

Finally, to stop the server:

sudo systemctl stop nginx

Enabling HTTPS (SSL)

At the moment, if we look at the address bar of our browser, we will see that our connection is not secure because we are using HTTP. We will now enable HTTPS and redirect all HTTP traffic (port 80) to secure HTTPS (port 443).

We will be using Let’s Encrypt (https://letsencrypt.org/) with Nginx for this. Let’s Encrypt is a provider of free automated SSL certificates. In order to generate certificates and automate their renewal, we will use Certbot (https://certbot.eff.org/).

An important thing to note here is that Let’s Encrypt doesn’t issue certificates for bare IP addresses; it needs a valid domain name. I have configured a domain for the server I’m using in this example (if you’re using AWS, you can do this by creating an A-type record in your hosted zone in Route 53).
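If you prefer the AWS CLI over the console, the same A record can be created with a single command. This is only a sketch with placeholder values; <your-hosted-zone-id> and <your-ec2-public-ip> are hypothetical and need to be replaced with your own:

aws route53 change-resource-record-sets \
    --hosted-zone-id <your-hosted-zone-id> \
    --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {"Name": "yourDomainName.com", "Type": "A", "TTL": 300, "ResourceRecords": [{"Value": "<your-ec2-public-ip>"}]}}]}'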

Accessing the domain over HTTP

Let’s dive into the nginx.conf file, remove all the existing contents of the file and add this:

events {}

http {
    server {
        listen 80;

        location / {
            return 200 "Nginx is running";
        }
    }
}

We have created a simple server block inside the http context, listening on port 80 for now and returning a static text on the root path (‘/’). If we save this and reload Nginx, we should be able to see the message in our browser.
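Reloading here means validating the configuration and asking the running server to re-read it; with the systemd setup from the previous section that looks like this:

sudo nginx -t
sudo systemctl reload nginx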

The static message from nginx.conf is served on the domain’s root path.

If we try to access the domain on HTTPS, we will get an error. So let’s generate some SSL certificates and get it working with the help of Let’s Encrypt and Certbot.

We will start by installing Certbot. We need to navigate to https://certbot.eff.org/ and select the software and the system on which our HTTP server is running to get the detailed set of installation instructions (For my case this is Nginx and Ubuntu 18.04).

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update

Next, we will install Certbot. Make sure you opt for the “keep your currently-installed version” option (the default) for any configuration file changes suggested during the installation.

sudo apt-get install certbot python3-certbot-nginx

Now, we will use Certbot to install certificates. Certbot automatically reads the nginx.conf file to find the server_name directive, which it then uses in the process of issuing certificates. So let’s first add that to our nginx.conf file.

events {}

http {
    server {
        listen 80;
        server_name yourDomainName.com;

        location / {
            return 200 "Nginx is running";
        }
    }
}

As a rule of thumb, after any modification to the conf file, we should always reload Nginx. After reloading, we run the certbot command to generate certificates.

sudo certbot --nginx

We will be prompted with some questions during the process. Once we complete those, certificates will be generated for our domain.

Generating SSL certificates for your domain using Certbot and Let’s Encrypt

Certbot will automatically make some changes to the nginx.conf file to enable SSL for our domain. Now, if we go to the browser and access https://<yourdomain>, it will load just fine and we should see our static text message served over a secure HTTPS connection.

Renewing certificates

Let’s Encrypt’s SSL certificates are valid for 90 days only. We can use Certbot to manually renew the certificates.

sudo certbot renew

As we have just installed our certificates, we will get a message saying “Cert not yet due for renewal”. We can test renewal using the command:

sudo certbot renew --dry-run

If the test renewal runs fine, the actual renewal should also run successfully. The certificates don’t need to have already expired before they can be renewed (which is good, as we wouldn’t want expired certificates sitting on the server before we renew them); we can trigger a renewal when the certificates are close to expiry. To automate the renewal process, we can run a cron job on the server that executes daily and triggers the renewal. Renewals that aren’t needed yet are skipped without any errors, but when the certificates are about to expire (say, on the 88th day), the cron job will renew them automatically and we won’t have to worry about expired certificates.

crontab -e

We can then select a suitable editor, add the following command to the file and save it.

@daily sudo certbot renew

That’s it! This cron job will now run daily at midnight and renew our certificates for us!
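To see which certificates Certbot currently manages and when each one expires, it can also list them:

sudo certbot certificates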

Redirecting HTTP requests to HTTPS

Right now, we can access our domain on HTTP as well. We would ideally want to redirect all our traffic on HTTP (port 80) to HTTPS (port 443). To do that, we need to make some changes to our nginx.conf file. Right now, it should look something like this:

events {}

http {
    server {
        listen 80;
        server_name <yourDomain>;

        location / {
            return 200 "Nginx is running";
        }

        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/<yourDomain>/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/<yourDomain>/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
}

We need to break this into two server blocks: one that serves requests on port 443, and another that just listens on port 80 and redirects traffic to 443.

events {}

http {
    server {
        listen 80;
        server_name <yourDomainName>;
        return 301 https://$host$request_uri;
    }

    server {
        server_name <yourDomainName>;

        location / {
            return 200 "Nginx is running";
        }

        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/<yourDomainName>/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/<yourDomainName>/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
}

Now, if we reload Nginx and try accessing our domain on HTTP, we will be automatically redirected to HTTPS!
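A quick way to verify the redirect without a browser is to inspect the response headers of a plain HTTP request; with the configuration above, the output should look roughly like this:

curl -I http://<yourDomainName>/
# HTTP/1.1 301 Moved Permanently
# Location: https://<yourDomainName>/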

Serving static and dynamic content

As I mentioned at the beginning of this article, Nginx can be used to serve static content (text, images, files etc.) as well as dynamic content from any server-side application running on the machine, by acting as a reverse proxy.

Suppose, for example, we have a Nodejs application running on port 5000 on the machine. Let’s say we have a GET endpoint /getMessage on this application. If we run a cURL command on the server:

> curl -XGET http://localhost:5000/getMessage
Hello from Nodejs server!
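The article assumes such an app already exists. Purely for illustration, a minimal version of it could look like the sketch below, using only Node’s built-in http module; nothing else is implied about the real application:

// server.js: a hypothetical, minimal Nodejs app used only for illustration
const http = require('http');

const server = http.createServer((req, res) => {
    if (req.method === 'GET' && req.url === '/getMessage') {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello from Nodejs server!');
    } else {
        res.writeHead(404);
        res.end('Not found');
    }
});

// Listen only on localhost; Nginx will proxy public traffic to this port
server.listen(5000, 'localhost');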

We want to expose the APIs of this Nodejs app and accept client requests over HTTPS. That means a GET cURL request like:

curl -XGET https://<yourdomain>/api/getMessage

should be served by the Nodejs app and return ‘Hello from Nodejs server!’. This is how Nginx is used as a reverse proxy: it accepts incoming requests on a particular location (here /api), passes them on to another address on the local machine, and relays the response back to the client once the request has been processed.

To get this running, let us make a few changes to the nginx.conf file.

events {}

http {
    server {
        listen 80;
        server_name <yourDomainName>;
        return 301 https://$host$request_uri;
    }

    server {
        server_name <yourDomainName>;

        location /api/ {
            # the trailing slashes make Nginx map /api/getMessage to /getMessage upstream
            proxy_pass http://localhost:5000/;
        }

        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/<yourDomainName>/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/<yourDomainName>/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
}

And we’re done! We can now access all the API endpoints of the Nodejs server under https://<yourDomain>/api/.

Conclusion

In this article, we saw an overview of Nginx’s architecture and its advantages over an Apache web server. We then built Nginx from source code, ran it as a service on an Ubuntu machine and enabled SSL for our server. Lastly, we successfully served our Nodejs application APIs over a secure HTTPS connection using Nginx as a reverse proxy. Throughout the article, we used relatively basic Nginx features. There are many other features as well (like load balancing, rate limiting, and limiting the file upload size on the server) which we can utilize as per our use case.

Thank you for reading! I hope you enjoyed the article.
