Moving to DigitalOcean

Last weekend I moved my website and blog to DigitalOcean. When I built this website in 2015 I chose Heroku as the hosting platform, because I didn't want to deal with server provisioning and maintenance.

Heroku is probably the easiest way to ship an application to production šŸš€. In my case, the GitHub repository hosting my website's code was connected to Heroku, and I magically shipped the application through continuous deployment with Travis CI and GitHub.

However, I knew I would switch to an IaaS at some point, since I would need more control over my application's infrastructure.

The problem

My website is a Node.js application built with Express, Pug and SCSS. The blog runs on a self-hosted Ghost.

Since I started using Heroku, I wanted to keep things simple with a single dyno. But every dyno is attached to a single process, so I managed to start Ghost as a module from the Express application. This workaround wasn't the best solution, but it worked for more than a year and a half.

Recently, Ghost came out of beta and released 1.0.0 with a ton of breaking changes. Since then, it has been nearly impossible to keep using Heroku for my needs.

Switching to DigitalOcean

I decided to make the move, seeing it as an opportunity to learn and improve my infrastructure.

If you don't have a DigitalOcean account, use this link to register and get $10 for free!

The new setup:
  • Single droplet.
  • Basic security.
  • SSL certificate using LetsEncrypt & Certbot with auto-renew.
  • Independent blog and web applications through Nginx.

Basic security

After spinning up a $5 droplet, the first thing I did was disable root login and password authentication, so the server can only be accessed over SSH with a key.
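Those two settings live in the SSH daemon's configuration — a minimal sketch, assuming the stock `/etc/ssh/sshd_config` location:

```
# /etc/ssh/sshd_config
PermitRootLogin no          # refuse direct root logins
PasswordAuthentication no   # keys only, no password prompts
```

Apply the change with `sudo systemctl restart sshd`, and keep your current session open until you've confirmed key-based login still works.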


$ sudo ufw allow 'Nginx Full' && sudo ufw allow OpenSSH
$ sudo ufw enable && sudo ufw status
To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)


When I was on Heroku, I used Cloudflare to obtain SSL šŸ”’. But LetsEncrypt is way better, because you get end-to-end encryption instead of encryption only between the visitor and Cloudflare's edge.

To get your SSL certificate, first install Certbot.

$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update && sudo apt-get install python-certbot-nginx

Then, open your Nginx configuration file, find the server_name directive and set your domain.
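Assuming the default site file at `/etc/nginx/sites-available/default` (a common location — adjust to your setup), the relevant directive looks like this, with example.com as a placeholder for your own domain:

```nginx
server {
    ...
    server_name example.com www.example.com;
    ...
}
```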


Verify the configuration and, if there are no errors, reload it.

$ sudo nginx -t
$ sudo systemctl reload nginx

Now it's time to obtain the SSL certificates for the domain specified in the Nginx config file. Here example.com stands in for your own domain.

$ sudo certbot --nginx -d example.com -d www.example.com

āš ļø Before obtaining the certs, you'll have to point the domain to your DigitalOcean IP. That's the way Certbot verifies that you control the domain you're requesting a certificate for.

If that's successful, Certbot will ask how you'd like to configure HTTPS. Finally, the certificates are downloaded, installed, and loaded.

Auto renewal

LetsEncrypt certificates are only valid for ninety days. However, the Certbot CLI includes a command to renew SSL certificates, and we can automate the process with a crontab.

$ sudo crontab -e

Add a new line to the crontab file and save it. You're asking your server to run the certbot renew --quiet command every day at 04:00.

0 4 * * * /usr/bin/certbot renew --quiet
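Before trusting the cron job, it's worth simulating a renewal. Certbot's --dry-run flag exercises the full renewal flow against LetsEncrypt's staging environment without touching your real certificates:

```shell
$ sudo certbot renew --dry-run
```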

Apps management

Both applications are started as daemons on the server, so even if the server restarts, both apps come back up automatically. The website uses PM2 – a production process manager for Node.js. The blog uses ghost-cli to update and run Ghost.
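A sketch of the PM2 side (the process name website and the entry file index.js are placeholders, not necessarily the real setup): start the app, save the process list, and install PM2's boot hook so the list is restored after a reboot.

```shell
# run the Express app as a daemon under a readable name
$ pm2 start index.js --name website

# persist the current process list
$ pm2 save

# generate the init script that resurrects saved processes at boot
$ pm2 startup
```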


I use Nginx as a reverse proxy in front of the applications running on localhost.

The first server block of my configuration file redirects all requests to HTTPS.

server {
  listen         80;
  listen    [::]:80;
  return         301 https://$server_name$request_uri;
}

After enforcing HTTPS, we use another server block to set our locations. Those locations define how Nginx should handle requests to specific resources.

As an example, when a request comes in, Nginx matches the / location and proxy_passes the request to http://localhost:PORT, where the Node.js application is listening.

Also, we're enabling HTTP2 and SSL for our server, providing the certificates and keys needed.

server {
  listen 443 ssl http2 default_server;
  listen [::]:443 ssl http2 default_server;
  ssl_certificate ...;
  ssl_certificate_key ...;
  ssl_dhparam ...;

  location / {
    proxy_pass http://localhost:PORT;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header X-Real-IP $remote_addr;
  }

  location ^~ /blog {
     # Same as / with another port
  }
}

Enjoyed the article? šŸ˜

Subscribe šŸ‘Øā€šŸ’»