In this post, I document my step-by-step install of strapi on an Ubuntu 18.04 virtual machine, using MySQL, Nginx, Amazon S3 file uploads, S3 database backups, and touch-free automated deployments.
Background
Setting up strapi is a first step in migrating my wife’s site & blog from WordPress to a headless CMS, and re-designing it using a static site generator. If you’re interested in hearing about my upcoming adventures in doing the re-design, let me know in the comments, and I’ll write a blog post or two about that experience as well.
When referring to her website from now on, I’ll just refer to it as her-site.
Ok, let’s get on with the strapi install. I’m going to write this post in real time as I go through the steps myself.
Picking an IaaS provider
For those who are new to IaaS, the acronym stands for Infrastructure as a Service. It’s basically a vendor that provides the infrastructure for you (virtual machines, private networks, disk storage, firewalls, etc.), often with extra benefits that make managing an infrastructure a more or less pain-free experience.
The first choice I have to make is to pick a provider for the virtual machine itself. I’ve used many providers over the years, and one I really like is Vultr. It’s also where I host my wife’s WordPress site right now, and I’ve had zero issues with them and their infrastructure in the past. Other IaaS vendors I would recommend include DigitalOcean, UpCloud, Linode, and of course AWS. If you’re going to follow along, use whatever platform you’d like. I personally almost always prefer to go with vendors that are less mainstream, but that’s a subject for another post.
I use a variety of operating systems on my dev machines, so I’ve already registered a number of SSH keys in my Vultr account. If you are doing this for the first time, I suggest you register your public ssh key in your IaaS vendor’s control panel of choice (in Vultr, it’s under Account > SSH Keys > Add SSH Key). It’s one less step to configure after you provision a server. If you are on Windows and not familiar with ssh, jump to the SSH section of this post, where I make a few comments about setting up ssh on a Windows machine.
So I’ve now logged into Vultr, and I’m provisioning a VM using the following options:
- Server: Cloud Compute
- Server location: New York (NJ)
- Server type: 64-bit, Ubuntu 18.04
- Server size: $20/mo (80GB SSD, 2 CPU, 4096MB Memory)
- Auto backups: enabled (these are full server snapshots)
- Block storage: enabled
- SSH Keys: (picked a few of the keys I foresee using in the future)
- Firewall: (picked a firewall that allows all traffic on ports 80/443, and only allows traffic on port 22 for specific IPs I use personally from home and my office on a daily basis)
- Server hostname & label: her-strapi-server
Once I hit the Deploy Now button, I wait a minute or two, and the server should be ready, IP and all.
SSH and necessary first steps
From here on out, I’ll use 45.45.45.45 as the server’s IP.
To SSH into the server, I jump into my command prompt/terminal and type:
ssh root@45.45.45.45
Windows Users
If you’re on Windows like I am as I write this post, and you’re not sure about this ssh thing, an easy way to get going is to use a tool called chocolatey. In your cmd prompt, install git and gow, which give you some unix tools, including ssh.
@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
# install git and gow
choco install git
choco install gow
refreshenv
You should then create a key-pair to authenticate to your server(s) without a password. To create the necessary keys, in the command prompt type:
ssh-keygen -t rsa
Keep the defaults, which are to save the keys in your ~/.ssh directory (on Windows, this will be under your user’s directory, for example, C:\Users\mattjcowan\.ssh), in a private key file id_rsa and a public key file id_rsa.pub. The contents of the id_rsa.pub file are what you will input into the IaaS vendor’s control panel when registering your SSH key with them.
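To grab the public key for pasting into the control panel, you can print it straight to the console (this assumes the default key location from above):
type %USERPROFILE%\.ssh\id_rsa.pub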
Updating the server
So I’m now on the server, and I’m going to update the server prior to doing anything else:
sudo apt update -y
sudo apt upgrade -y
Creating a new user
Allowing root login to a server is an attack vector that’s simply unnecessary to worry about. So I’m going to create a new user with sudo privileges, and disallow the root login altogether. I’ll call my new user easy-going.
sudo adduser easy-going
It prompts me for a password; I’ll pick something like Cr$AZzyyYu*Knw2WHat_Ime#An!!.
Now I can give my user super powers.
usermod -aG sudo easy-going
If, like me, you’re going to be annoyed at having to type the new user’s password every time you first use the sudo command as that user, there’s a way to disable that. Obviously, jump forward in the post if this is all old news to you. When you use the sudo command, the server looks at the /etc/sudoers file to see if the current user is allowed to use the command, which it is, because we added the easy-going user to the sudo group in the previous step, and that group is given permission to use sudo in the /etc/sudoers file with the line %sudo ALL=(ALL:ALL) ALL.
To create a more specific rule for the easy-going account, we can add a line to the file using sudo nano /etc/sudoers, or with the following one-liner:
sudo sh -c 'echo "easy-going ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
From now on, this user will no longer have to enter a password every time it uses sudo.
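Appending to /etc/sudoers directly is a little risky, since a syntax error in that file can lock you out of sudo entirely, so it’s worth validating the file afterwards:
sudo visudo -c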
Since this is a new user and we’re on a new virtual machine, I’ll just give my new user the same public keys as my root user, so I’ll write those steps next.
First, I’ll switch over to the easy-going user:
# switch to the new user's account (while logged in as root)
sudo su - easy-going
As the easy-going user, I’ll copy over the root keys and ensure permissions are set correctly:
mkdir -p ~/.ssh/
sudo cp /root/.ssh/authorized_keys ~/.ssh/
chmod 700 ~/.ssh
sudo chown easy-going ~/.ssh/authorized_keys
sudo chgrp easy-going ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# switch back to the root user
exit
Let’s now log out of the server (just type exit once more), and log in as the new user.
ssh easy-going@45.45.45.45
I’m in, without typing a password, awesome! I can now disable the root login completely.
To edit the ssh config, I’ll use nano, a great little editor for enthusiasts like myself:
sudo nano /etc/ssh/sshd_config
Once in there, I change the PermitRootLogin line to PermitRootLogin no, then I save and exit the file (Ctrl+X, Y, and Enter). To apply the changes, I restart the ssh process, using the following command:
sudo service ssh restart
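If you’d rather script that edit than open nano, a sed one-liner along these lines should do the same thing (a sketch; sshd -t validates the config before restarting):
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sshd -t && sudo service ssh restart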
I can now no longer use the root user to log in to the server, awesome! Every little step in security helps. That means I want to do 2 more things that are easy to do and might just be enough to dissuade some hackers from trying to get on my server, opting for easier targets instead. Those 2 things are a neat little tool called fail2ban and the very handy ufw local server firewall.
Installing fail2ban
fail2ban will monitor ssh logins and ban IP addresses based on a set of rules and failed login attempts. After we install the service, we will create a fail2ban.local file in case we want to override the default settings for the service. We’ll also create a jail.local file to set up rules governing what happens when login failures occur.
sudo apt install fail2ban -y
sudo cp /etc/fail2ban/fail2ban.conf /etc/fail2ban/fail2ban.local
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
# let's make sure it's started
sudo fail2ban-client start
You can read more about fail2ban here.
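For reference, a jail.local override for the ssh jail might look something like this (my own sketch, not something from the strapi docs; the out-of-the-box settings are often fine):
[sshd]
enabled = true
maxretry = 5
findtime = 600
bantime = 3600

After a restart (sudo service fail2ban restart), you can inspect the jail with sudo fail2ban-client status sshd.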
Installing ufw
Now, I want to install a firewall on the server, where I can configure rules to control traffic to and from the server. We only need 3 ports open on the server for outside access for my purposes right now:
- Port 22: the SSH port, which allows us to login to the server and configure it
- Port 443: the https:// port, and
- Port 80: the http:// port, which we will setup later in this article to automatically redirect to port 443
Because I’m using the Vultr firewall in front of the server as well, I’m going to leave port 22 open completely, which ensures that I will always be able to log in to the server, and I can set up more specific rules (for specific IP ranges and such) in the Vultr firewall, which is accessible through their control panel. Setting up ufw then is a secondary safeguard that will help me sleep a little better at night.
On Ubuntu, there’s a great and simple firewall called ufw, which stands for Uncomplicated Firewall. You can learn about it here.
I’ll install it now, even though it’s possible it’s already installed on the system. IaaS vendors use a variety of base images when they provision a virtual machine, and often include a number of popular packages by default in those distributions.
sudo apt install ufw -y
Sure enough, it looks like it was already installed. I’ll now allow the 3 ports above and deny all other incoming traffic:
sudo ufw disable
sudo ufw --force reset
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
# enable ufw
sudo ufw --force enable
# inspect the entries
sudo ufw status verbose
sudo ufw reload
Patching and maintaining the server
The last thing I’d like to do is keep the server on auto-pilot. I love logging into my servers and seeing:
0 packages can be updated.
0 updates are security updates.
To update and patch the server, in its simplest form, I could periodically run the following:
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt-get dist-upgrade -y
sudo apt-get autoremove -y
# only reboot when it makes sense
sudo reboot
Let’s turn on unattended upgrades for security related features, which will run daily by default (you can read more about this here):
sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure --priority=low unattended-upgrades
To allow the server to reboot automatically when a reboot is required, I’ll turn that feature on by editing the following file:
sudo nano /etc/apt/apt.conf.d/50unattended-upgrades
I’ll set the Automatic-Reboot flag to “true”, and set an appropriate reboot time as well, after checking the server clock timezone with the command timedatectl (which is usually UTC, but not always).
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "07:00";
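To sanity-check the unattended-upgrades setup without waiting for the next daily run, there’s a dry-run mode:
sudo unattended-upgrade --dry-run --debug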
That’s enough of the prerequisite stuff. Let’s move on to installing a database.
MySQL setup
Strapi supports a number of databases. I love relational databases, and I’m picking MySQL for this instance, as it’s easy to patch and maintain, but PostgreSQL or MariaDB would also be fine choices.
First I’ll pick a nice password for the root user, and export it to a variable (obviously, this isn’t the actual password I’m going to use).
# single quotes keep the shell from expanding special characters like ! in the password
export MYSQL_PASSWORD='JKownY-yky1!puSfPGhPL0Y-FTSPIRtG3P5sfbI'
Then it’s just a matter of running a few commands.
# force of habit
sudo apt update -y
sudo apt upgrade -y
# check for the latest download url at:
# https://dev.mysql.com/downloads/repo/apt/
curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.15-1_all.deb
sudo dpkg -i mysql-apt-config*
sudo apt update -y
# INTERACTIVE install, accept defaults and recommendations, and use a strong password
# sudo apt install mysql-server -y
# MINIMALLY-INTERACTIVE install
sudo debconf-set-selections <<< "mysql-community-server mysql-community-server/root-pass password $MYSQL_PASSWORD"
sudo debconf-set-selections <<< "mysql-community-server mysql-community-server/re-root-pass password $MYSQL_PASSWORD"
sudo DEBIAN_FRONTEND=noninteractive apt install mysql-server -y
# install shared client libraries
sudo apt install libmysqlclient21 -y
Next, I’ll check the MySQL status:
sudo systemctl status mysql.service
# start and stop the service
# sudo systemctl start mysql.service
# sudo systemctl stop mysql.service
Great, that’s working. To make the MySQL install even more secure, I’ll run the following recommended tool, which takes care of a number of vectors that could potentially be exploited. While I’ve already set up a firewall that blocks outside traffic to MySQL ports, running this protects me from future me and other mistakes I could make inadvertently.
sudo mysql_secure_installation
Patching MySQL
When future releases of MySQL come out, I will run the following commands to keep my version of MySQL up to date.
sudo apt update -y
sudo apt upgrade -y
# get the latest download url at:
# https://dev.mysql.com/downloads/repo/apt/
curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.15-1_all.deb
sudo dpkg -i mysql-apt-config*
# remove the old installation of MySQL by running:
sudo dpkg -P mysql
# install mysql from the updated package repository
sudo apt update -y
sudo apt install mysql-server -y
# upgrade client libraries
sudo apt install libmysqlclient21 -y
Authentication
Lots of software doesn’t yet support the new authentication scheme introduced in MySQL 8. In order to be able to leverage phpMyAdmin (which I’ll spare you from covering as part of this post), and to avoid issues with strapi, which from what I can tell might object to the new scheme, I’ll set the default authentication to the original MySQL v5 authentication scheme, mysql_native_password. To do that, I’ll edit the following file:
sudo nano /etc/mysql/my.cnf
At the bottom of the file, I’m adding the following 2 lines:
[mysqld]
default_authentication_plugin=mysql_native_password
I then restart MySQL and check that it comes back up without issue:
sudo service mysql restart
sudo service mysql status
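To confirm the setting took effect, I can ask MySQL directly (it will prompt for the root password from earlier):
mysql -u root -p -e "SHOW VARIABLES LIKE 'default_authentication_plugin';"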
Creating a MySQL user for the strapi application
At this point I can create the database and a MySQL user that’s specifically designed to be used by the strapi application, with permissions on that database. To do that, I need to log in to MySQL as root, and create the user. Here’s how I log in:
mysql -u root -p
Here’s how I create the database, the user, and the grants (feel free NOT to use this password 😊):
create database strapi_db;
create user 'strapi_db_user'@'localhost' identified with mysql_native_password BY 'G$RblyG00kPghawd';
grant SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX, ALTER,
CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW,
CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER
on strapi_db.* to 'strapi_db_user'@'localhost';
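Back at the shell (after typing exit at the mysql prompt), it’s worth a quick sanity check that the new user can log in and sees the right permissions:
mysql -u strapi_db_user -p strapi_db -e "SHOW GRANTS;"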
Ok, we’re done with the database. Moving on to strapi itself.
Strapi setup
NodeJS & Git
Strapi is a NodeJS application, so I need to install NodeJS on the server.
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt install nodejs -y
sudo apt install build-essential -y
# also install git
sudo apt install git -y
I then make sure I’m running the right version of NodeJS:
node -v
Awesome, v12.16.3 is what I wanted.
Strapi User
At this point, to ensure that the strapi app can’t do something undesired to the virtual machine, I’ll create a strapi-specific user that will be responsible for serving the app.
sudo adduser strapi
Then I’ll create a directory to host the app in. I read somewhere that srv is a great root directory for this sort of thing; I won’t question it, ’cause I just like the sound of it.
sudo mkdir -p /srv/strapi
With that out of the way, I’ll set strapi as the owner of the directory and set some permissions on the directory.
sudo chown strapi:strapi /srv/strapi
sudo chmod 755 /srv/strapi
The strapi documentation also recommends making a few changes to better support npm.
First I’m going to switch over to the strapi user, because if it’s strapi specific, I want strapi to be in charge.
su - strapi
As the strapi user, I create a .npm-global directory and set it as the npm prefix for globally installed node_modules:
cd ~
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
I then edit the ~/.profile file with nano ~/.profile, and add the following at the end of the file:
export PATH=~/.npm-global/bin:$PATH
At this point, I chose to exit from the strapi user with the command exit and reboot the server (using sudo reboot). I could have also just sourced the environment: source ~/.profile.
Strapi install
Back on my local Windows machine, I created a local install of strapi. I do this so that I can develop locally and set up automatic deployments to the server on git push to a GitHub master branch. I did that as follows:
cd c:\code\
npx create-strapi-app strapi-app --quickstart
Now I can configure the production database to use MySQL. I even use nano on my Windows machine, what’s wrong with me?
cd c:\code\strapi-app
nano config\environments\production\database.json
Then I just switch the "client": "sqlite" setting to "client": "mysql".
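For reference, my config/environments/production/database.json ends up looking roughly like this (a sketch from memory of the beta-era config format, with connection settings pulled from environment variables; double-check the strapi docs for your exact version):
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "bookshelf",
      "settings": {
        "client": "mysql",
        "host": "${process.env.DATABASE_HOST || '127.0.0.1'}",
        "port": "${process.env.DATABASE_PORT || 3306}",
        "database": "${process.env.DATABASE_NAME || 'strapi'}",
        "username": "${process.env.DATABASE_USERNAME || ''}",
        "password": "${process.env.DATABASE_PASSWORD || ''}"
      },
      "options": {}
    }
  }
}
The environment variables here are the same ones I set in the pm2 ecosystem file later on.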
Because I want to store images on Amazon S3, I’ll go ahead and install that plugin now, and configure it later.
npm i strapi-provider-upload-aws-s3@beta
Now I can push my local dev environment to a GitHub repo.
cd c:\code\strapi-app
git init
git add .
git commit -m "First commit"
git remote add origin https://github.com/her-account/strapi-app.git
git push -u origin master
Back on the server, as the strapi user, I navigate to the strapi directory:
su - strapi
cd /srv/strapi
Then I clone the repo down:
git clone https://github.com/her-account/strapi-app.git
At this point, I’ll install the node modules and build the app.
cd /srv/strapi/strapi-app
npm install
NODE_ENV=production npm run build
At this point, I’m double-checking the strapi documentation for tips and pointers, and it seems like it’s time to install PM2.
PM2 setup
While still logged in as the strapi user, I can install PM2 without sudo privileges. PM2 will serve as the process manager for the strapi nodejs process:
npm install pm2@latest -g
I’ll go ahead now and create a pm2 ecosystem file, which is where I can set up environment variables for each pm2 app I want to install and run.
cd ~
pm2 init
nano ecosystem.config.js
My ecosystem.config.js file looks as follows:
module.exports = {
apps: [
{
name: 'strapi',
cwd: '/srv/strapi/strapi-app',
script: 'npm',
args: 'start',
env: {
NODE_ENV: 'production',
DATABASE_HOST: 'localhost',
DATABASE_PORT: '3306',
DATABASE_NAME: 'strapi_db',
DATABASE_USERNAME: 'strapi_db_user',
DATABASE_PASSWORD: 'G$RblyG00kPghawd'
},
},
]
};
With that in place, I can start the app:
cd ~/
pm2 start ecosystem.config.js
I see App [strapi] launched (1 instances), and the status of the app is set to online.
By default, strapi runs on port 1337, but I can easily change that in the server.json file in strapi’s config directory tree.
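If memory serves, that file (config/environments/production/server.json) looks something like the following, with port being the setting to change (a sketch; verify against your generated project):
{
  "host": "localhost",
  "port": 1337,
  "production": true
}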
A few more things to do now before I can access the app over the web. I need to make sure that pm2 starts up when the server reboots, and I need to set up a proxy to map incoming requests over ports 80/443 to the strapi app.
To make sure pm2 starts up on reboot, I’ll hook it up with systemd, the native Ubuntu service manager.
While still logged in as the strapi user:
cd ~
pm2 startup systemd
I’m given instructions on how to elevate privileges for the command and have the command run as the strapi user. I’ll go ahead and copy the output to the clipboard, and exit back out to easy-going, since the strapi user doesn’t have sudo powers.
exit
I paste the following:
sudo env PATH=$PATH:/usr/bin /home/strapi/.npm-global/lib/node_modules/pm2/bin/pm2 startup systemd -u strapi --hp /home/strapi
I then switch back to the strapi user with su - strapi, and save my pm2 process list:
pm2 save
I’m ready to reboot and check to see if it worked.
exit
sudo reboot
I log back into the server, and run:
su - strapi
pm2 list
Sure enough, my app is showing as online, so I’m good to go. Now it’s time to set up a proxy so that the app can be made available on the internet.
Nginx setup
Installing and configuring nginx is pretty simple. I just love nginx, it’s awesome!
I make sure I’m logged in as easy-going:
whoami
Then, I install nginx and a few other tools:
sudo apt install nginx openssl dnsutils -y
Force of habit, I’ll open a browser to the url of my server at http://45.45.45.45, and make sure I can see the default nginx website. Cool, I’m good.
I may want to install other sites on this server, so I’ll make sure to map the appropriate domain hostname to the strapi install.
First, I’ll get rid of the default nginx site and reload nginx, which makes the url above unavailable.
sudo rm /etc/nginx/sites-enabled/default
sudo service nginx reload
I want to host the site on port 443, which means I need a certificate. One way to do this is to use Let’s Encrypt. But I don’t want to broadcast the server’s IP to the world, I’d like the flexibility to easily relocate the server with minimal downtime if needed, and I want to take advantage of the suite of capabilities built into the free Cloudflare service. Cloudflare will handle the formal certificate, and I’ll still be able to ensure traffic between Cloudflare and the server is over SSL. For those reasons, I’m good with just using self-signed certificates.
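# comment out any RANDFILE line first (a known workaround for "Can't load .rnd into RNG" errors with OpenSSL 1.1.x on Ubuntu 18.04)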
[[ -f /etc/ssl/openssl.cnf ]] && sudo sed -i 's/^RANDFILE/#&/' /etc/ssl/openssl.cnf
if [[ ! -f /etc/ssl/private/nginx-selfsigned.key || ! -f /etc/ssl/certs/nginx-selfsigned.crt ]]; then
sudo openssl req -x509 -nodes -days 2000 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt -subj /C=US/ST=Illinois/L=Chicago/O=Startup/CN=strapi
fi
if [[ ! -f /etc/ssl/certs/dhparam.pem ]]; then
sudo openssl dhparam -dsaparam -out /etc/ssl/certs/dhparam.pem 2048 > /dev/null 2>&1
fi
Then I create an nginx snippet that uses the self-signed certs above:
if [[ ! -f /etc/nginx/snippets/ssl-params.conf ]]; then
sudo bash -c 'cat >/etc/nginx/snippets/ssl-params.conf' <<EOL
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
resolver 8.8.8.8 8.8.4.4 valid=30;
resolver_timeout 5s;
EOL
fi
Then I create a common header snippet file:
if [[ ! -f /etc/nginx/snippets/common-add-headers.conf ]]; then
sudo bash -c 'cat >/etc/nginx/snippets/common-add-headers.conf' <<EOL
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
EOL
fi
Then I create a common proxy header snippet file:
if [[ ! -f /etc/nginx/snippets/common-proxy-headers.conf ]]; then
sudo bash -c 'cat >/etc/nginx/snippets/common-proxy-headers.conf' <<EOL
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header Host \$host;
proxy_set_header X-Forwarded-Host \$host;
proxy_set_header X-Forwarded-Port \$server_port;
EOL
fi
Next, I create a new site using:
sudo nano /etc/nginx/sites-available/strapi
And I configure it as follows (where app.her-site.com is the hostname for the strapi app):
server {
# force https
listen 80;
listen [::]:80;
server_name app.her-site.com;
return 301 https://$host$request_uri;
}
server {
client_max_body_size 500M;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_tokens off;
server_name app.her-site.com;
include snippets/ssl-params.conf;
include snippets/common-add-headers.conf;
location = /favicon.ico {
log_not_found off;
access_log off;
}
location / {
proxy_pass http://localhost:1337;
proxy_ssl_verify off;
include snippets/common-proxy-headers.conf;
}
}
With that in place, I enable the site using:
sudo ln -s /etc/nginx/sites-available/strapi /etc/nginx/sites-enabled/strapi
Before I reload nginx, I’ll check to make sure my updates above are good:
sudo nginx -t
It says the test is successful, so I’m ready to reload nginx.
sudo service nginx reload
Now I’m ready to point Cloudflare to my server. I log in to https://cloudflare.com and add an A record pointing app.her-site.com at the server’s IP (45.45.45.45).
In Cloudflare, under the SSL/TLS tab, I set the encryption mode to Full, which guarantees that Cloudflare will use port 443 on the server (Cloudflare will ignore the fact that the certificates are self-signed).
Still in Cloudflare, under the Edge Certificates tab, I tell Cloudflare to always use https, and I set the minimum TLS version to 1.2. Those are probably the most important settings, but if you’re following along, feel free to experiment with others.
Time to see strapi in action. I fire up the url https://app.her-site.com for the FIRST time!
I guess that’s good, but now I’m a little curious: what do I do now? Turns out, after many minutes of being lost (being completely new to strapi), I realize I just need to add /admin to the url to get the login screen and set up the first user. All is good, whew!!
At this point though, I realize there is a major drawback to strapi as it stands: I can’t build content types while in production mode, because strapi generates code and needs to recycle the nodejs process to hydrate model updates and migrate the database effectively (at least that’s my guess, without looking deeper at the code). This means I will need to develop content types locally on a dev machine for this site and push content type updates via git.
That means I really, really need a way to automatically deploy updates from GitHub to the server, ’cause there’s no way I’m going to log in to the server every time I make updates; that would just be a pain. The server needs to be on auto-pilot as much as possible.
Automated deployments
An easy way to set this up is to poll the git directory as the strapi user on the server. If the local repo is behind, it’s just a matter of pulling the latest code down to the server and restarting the pm2 process.
To do that, I’ll just create a script, update-strapi.sh:
su - strapi
cd ~
nano update-strapi.sh
And I’ll use the following script:
#!/bin/bash
cd /srv/strapi/strapi-app
changed=0
git remote update && git status -uno | grep -q 'Your branch is behind' && changed=1
if [ $changed = 1 ]; then
git reset --hard
git pull
# make sure to capture any changes in package.json
npm install
cd ~
pm2 startOrRestart ecosystem.config.js
echo "Update successfully";
else
echo "Up to date";
fi
I’ll make it executable:
chmod +x update-strapi.sh
I can now run it to make sure it works. One consideration here is that the git pull command cannot prompt for a username and/or password as a background process, so what I did is create an ssh key for the strapi user (using ssh-keygen) and add the public key to the GitHub repo as a deploy key. The script is ready to run.
./update-strapi.sh
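In case it helps, here’s roughly what that deploy-key setup looked like (a sketch, run as the strapi user; note that since I originally cloned over https, the remote has to be switched to the ssh url for the key to be used):
# generate a key pair (leave the passphrase empty for unattended pulls)
ssh-keygen -t rsa
# print the public key, then add it under the repo's Settings > Deploy keys on GitHub
cat ~/.ssh/id_rsa.pub
# switch the remote from https to ssh so git pull uses the key
cd /srv/strapi/strapi-app
git remote set-url origin git@github.com:her-account/strapi-app.git
# verify the connection
ssh -T git@github.com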
Now it’s just a matter of scheduling the script to run at a given interval.
Still logged in as the strapi user, I’ll set it up to run in cron:
crontab -e
I’ll add the following line, which will execute the script above every 5 minutes. A git pull is fairly lightweight, so I’m not too worried about this taxing the server, even though some might consider it a bit less elegant than a webhook.
*/5 * * * * /bin/sh /home/strapi/update-strapi.sh > /home/strapi/update-strapi.log 2>&1
If you want to run a webhook instead, the strapi documentation has an example, which you can find here.
I then made an update locally on my Windows dev machine, by adding a content type, pushed it to GitHub on the master branch, and a few minutes later, the changes appeared in production. Great!
Hosting images on Amazon S3
One thing I want to do is put all the images my wife uploads into an Amazon S3 bucket. She uploads lots of images and I don’t want to have to worry about storage. Also, if I decide to use Cloudinary or Imgix down the road, having the images in Amazon S3 will make everything easier for me.
Looks like strapi has some documentation for doing this. I won’t copy that documentation here, as it’s pretty straightforward. I’m creating a user in AWS IAM, giving it S3 access rights, and creating a bucket.
Once the bucket is created, I’m ready to link it into the strapi install. At this point I’m looking at the documentation section on configuring the Strapi Provider AWS S3 Plugin.
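For the access rights, an IAM policy scoped to the bucket is the sort of thing I have in mind, along these lines (a sketch; the bucket name is a placeholder, and the plugin mainly needs to put, read, and delete objects):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::her-site-uploads/*"
    }
  ]
}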
I’ve already installed the S3 plugin, using:
npm i strapi-provider-upload-aws-s3@beta
Because I want my development environment to use the local provider, and production to use S3, it looks like I need to use a .js file instead of a .json file for that, so I’m going to create a ./extensions/upload/config/settings.js file, and hopefully that’ll work.
if (process.env.NODE_ENV === "production") {
module.exports = {
provider: "aws-s3",
providerOptions: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: process.env.AWS_REGION,
params: {
Bucket: process.env.AWS_BUCKET,
},
},
};
} else {
// to use the default local provider, return an empty configuration
module.exports = {};
}
Before I commit this, I want to modify the ecosystem.config.js file on the server as the strapi user, since the auto-deploy method we put together will already recycle PM2 for me, which’ll save me a step.
su - strapi
cd ~
nano ecosystem.config.js
I’ll add the 4 needed environment variables from my AWS S3 setup, as shown here (with placeholder values of course):
module.exports = {
apps: [
{
name: 'strapi',
cwd: '/srv/strapi/strapi-app',
script: 'npm',
args: 'start',
env: {
NODE_ENV: 'production',
DATABASE_HOST: 'localhost',
DATABASE_PORT: '3306',
DATABASE_NAME: 'strapi_db',
DATABASE_USERNAME: 'strapi_db_user',
DATABASE_PASSWORD: 'G$RblyG00kPghawd',
AWS_ACCESS_KEY_ID: 'PUT_YOUR_ACCESS_KEY_ID_HERE',
AWS_SECRET_ACCESS_KEY: 'PUT_YOUR_SECRET_ACCESS_KEY_HERE',
AWS_REGION: 'PUT_YOUR_REGION_HERE',
AWS_BUCKET: 'PUT_YOUR_BUCKET_HERE'
},
},
]
};
I’m ready to commit and push my local changes, and the server should automatically pick up the new file upload destination.
After a few minutes, I’m trying a file upload on production.
Looks like the file was uploaded to S3, all good! Sweet!
Automated MySQL backups
Now that file uploads are going right to Amazon S3, I’ll feel much better if I have some MySQL database backups going to Amazon S3 as well.
Because I might want to use this backup strategy for other apps besides strapi on this server, I’ll go ahead and run the backups as the easy-going user.
First I’m going to create a bucket in Amazon S3 to save backups to, but this time, I’ll keep the bucket locked down from the public. Once that’s done, I’m ready to install the AWS S3 command line tools.
I need the unzip utility:
sudo apt install unzip -y
I’ll install the aws cli:
cd ~
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
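A quick check that the install landed:
aws --version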
Now I can write a generic backup script:
touch backup-database-to-s3.sh
chmod +x backup-database-to-s3.sh
nano backup-database-to-s3.sh
These are the script contents, which, as you can see, rely on environment variables:
#!/bin/bash
TIMESTAMP=$(date +"%Y-%m-%d")
BACKUP_DIR=/tmp/backups
BACKUP_FILE="${MYSQL_DB}.$TIMESTAMP"
mkdir -p $BACKUP_DIR
# delete files older than 10 days
find $BACKUP_DIR -mtime +10 -type f -delete
mysqldump -u "${MYSQL_USER}" -p"${MYSQL_PASS}" "${MYSQL_DB}" > $BACKUP_DIR/$BACKUP_FILE.sql
cat $BACKUP_DIR/$BACKUP_FILE.sql | gzip > $BACKUP_DIR/$BACKUP_FILE.sql.gz
aws s3 cp $BACKUP_DIR/$BACKUP_FILE.sql.gz s3://${S3_BUCKET}/${S3_BUCKET_DIR}/
rm $BACKUP_DIR/$BACKUP_FILE.sql
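For completeness, restoring one of these backups is roughly the reverse (a sketch; the file name is a placeholder that follows the script’s naming pattern):
aws s3 cp s3://strapi-backups/strapi/strapi_db.2020-05-01.sql.gz .
gunzip strapi_db.2020-05-01.sql.gz
mysql -u root -p strapi_db < strapi_db.2020-05-01.sql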
Now I’m going to create a script that I can extend over time to include any additional databases I want to backup in the future.
touch backup-databases-to-s3.sh
chmod +x backup-databases-to-s3.sh
nano backup-databases-to-s3.sh
With the following contents (adapt as needed):
#!/bin/bash
cdir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
export AWS_ACCESS_KEY_ID="PUT_YOUR_ACCESS_KEY_ID_HERE"
export AWS_SECRET_ACCESS_KEY="PUT_YOUR_SECRET_ACCESS_KEY_HERE"
export AWS_DEFAULT_REGION="PUT_YOUR_REGION_HERE"
# backup database #1 (single-quote the password so bash doesn't expand the $)
S3_BUCKET=strapi-backups \
S3_BUCKET_DIR=strapi \
MYSQL_USER=strapi_db_user \
MYSQL_PASS='G$RblyG00kPghawd' \
MYSQL_DB=strapi_db \
$cdir/backup-database-to-s3.sh
# backup database #2
# backup database #3
I’ll just run the command to see if it works.
./backup-databases-to-s3.sh
Sweet! I can see my first backup on S3.
Time to hook this up to run on a nightly or weekly basis.
crontab -e
I’m going to add the following line, which executes the script at 7am UTC every morning:
0 7 * * * /home/easy-going/backup-databases-to-s3.sh >/dev/null 2>&1
That’s it, I’m done! It’s quite a lot of steps, but in the end, it doesn’t seem like it will be a very hard setup to manage.
I could have gone the route of using Docker. That might have been slightly simpler, but in the end, I would still need multiple containers (one for the app, one for nginx, one for MySQL with a data volume), and I’d still need to come up with a backup strategy, an automatic update strategy, etc.