It can be hard to keep up with side projects, and when you do get around to working on them, the last thing you want to worry about is meta-work like: “how did I deploy this again? scp the folder somewhere? ssh into the server and … run what command again?”
What we want is Continuous Delivery, but without having to read those 500-page, buzzword-filled enterprise books.
Before we start
Let’s quickly get on the same page with what we’re trying to accomplish.
- We have some sort of application, which we version-control using Git.
- We’d like to deploy it to a server, to which we have shell access.
- We want to avoid copying files or (re)starting our application manually.
Git Hooks
This is a nice and simple technique I learned a while back from @metalmatze in this blog post.
In essence, you set up your server as a Git remote, which simply involves creating a folder and giving it the expected structure using git init --bare.
You then set up a post-receive hook, which is a shell script that gets invoked whenever the folder receives a push. We can use this script to run all necessary tasks, such as copying files into our web root or restarting our application.
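Here’s a minimal sketch of what that looks like on the server; the repository path, web root, and restart command are placeholders for whatever your application needs (a systemd service here, but it could just as well be docker-compose):

# on the server: create a bare repository to push to
mkdir -p ~/repos/myapp.git
cd ~/repos/myapp.git
git init --bare

# hooks/post-receive runs after every push to this repository
cat > hooks/post-receive <<'EOF'
#!/bin/sh
# check out the pushed code into the web root and restart the app
mkdir -p /var/www/myapp
GIT_WORK_TREE=/var/www/myapp git checkout -f master
systemctl restart myapp
EOF
chmod +x hooks/post-receive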
To deploy, we then only need to run git push, and the rest should happen automatically.
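On your machine, the server then just becomes another remote; the user, host, and path below are placeholders:

# add the server as a remote (path is relative to the user's home directory on the server)
git remote add production user@example.com:repos/myapp.git
# every push to this remote now triggers the post-receive hook
git push production master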
I like this approach because it does not involve yet another running application that we’re depending on, and it’s fairly easy to set up.
Two pushes is one too many
However, chances are you’re already hosting your Git repository elsewhere, for example on GitHub or GitLab. In that case, we’re now pushing twice after every change. We can do better. Both GitHub and GitLab offer their own solution to the “do something after I push code” problem, and both operate in a similar fashion. Let’s take a quick look at each.
GitHub Actions
On GitHub, we have GitHub Actions. If you look at the feature page, you see a lot of “actions” to run, many of them pre-configured for third party systems and providers. But we can also make our own, and it’s not too cumbersome to do so.
So when GitHub wants to run an action, it needs a “runner”. This is simply a server that takes a given task and executes it. When using GitHub’s provided actions, these run on GitHub’s own servers, but we want to receive and run the code on our own machine. To do this, we’re going to add a self-hosted runner, which involves the following steps:
- Click through the GitHub UI (Repository -> Settings -> Actions -> Self-hosted runners) to receive a token for your runner
- Download the runner software on your server
- Configure it with your token and start it (preferably as a service in the background)
More details can be found in the official documentation.
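Roughly, the server-side setup boils down to the following; the exact download URL, version, and token come from the GitHub UI, and OWNER/REPO is a placeholder:

# unpack the runner software into its own folder (the UI shows the exact download command)
mkdir actions-runner && cd actions-runner
tar xzf ./actions-runner-linux-x64-<version>.tar.gz

# register the runner against your repository using the token from the UI
./config.sh --url https://github.com/OWNER/REPO --token YOUR_TOKEN

# either run it once in the foreground, or install it as a background service
./run.sh
sudo ./svc.sh install && sudo ./svc.sh start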
Afterwards, you can set up your action through GitHub’s GUI, or by creating a .yml file at $repository/.github/workflows/. All that’s left to do is setting runs-on: self-hosted and writing our deployment code in the run section.
Here’s a short example for a Docker-based deployment:
name: My little action
on: [push]

jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v1
      - name: Copy files and deploy containers
        run: |
          path="/your-path-of-choice"
          rsync -ar --delete . $path/.
          docker-compose --file $path/docker-compose.yml up -d --force-recreate --build
You should now see your actions running under your repository’s “Actions” tab after every push.
Sidenotes
- The runner script complains if run with sudo privileges (and rightfully so!), so add a new, dedicated user to start the runner with
- Self-hosted runners are currently configured on a per-repository basis, so if you deploy multiple projects, you need to set up the server as a new runner each time. I’ve tried it, and it seems to be sufficient to make a separate folder for each runner, and then configure and run a service for each repository (see the sketch below).
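For what it’s worth, that setup looks roughly like this; the user and folder names are arbitrary:

# one dedicated, unprivileged user that owns all runners
sudo useradd --create-home github-runner

# one folder (and one configured runner) per repository
sudo -u github-runner mkdir -p /home/github-runner/runner-repo-one
sudo -u github-runner mkdir -p /home/github-runner/runner-repo-two

# in each folder: unpack the runner, run ./config.sh with that repository's token,
# then install it as its own background service for the dedicated user
cd /home/github-runner/runner-repo-one
sudo ./svc.sh install github-runner && sudo ./svc.sh start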
GitLab CI/CD
GitLab’s Continuous Integration/Delivery might look more enterprise-y, but it works very similarly.
- We obtain a token via the GUI
- We configure our server as a “runner”
- We set up our actions in a .yml file
Digital Ocean also has a nice guide, going through almost every step in detail.
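For reference, registering the runner boils down to something like this; the URL and token come from the GitLab UI, and the exact flow may differ slightly depending on your GitLab version:

# register this machine as a runner for your project, using the token from the GUI
sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token YOUR_TOKEN \
  --executor shell \
  --description "my deployment server"

# check that the runner service is actually up
sudo gitlab-runner status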
Here’s my example .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  only:
    - master
  script:
    - pwd
    - path="/your-path-of-choice"
    - rsync -ar --delete . $path/.
    - sudo docker-compose --file $path/docker-compose.yml up -d --force-recreate --build
Sidenotes
- I had to configure my runner to “Run untagged jobs” under Project -> CI/CD -> Runners -> [edit button next to runner] for it to work
Final notes
Both GitLab and GitHub provide ways to deploy automatically, and both work pretty well for me so far.
Of course we’re using hosted services, so there are theoretical usage limits (currently 2000 CI minutes/month on both platforms, so roughly an hour a day).
If you’re running into usage limits or (understandably!) prefer decentralized solutions, a popular self-hosted CI alternative is drone. I’ve not played around with it so far, but I’ll let you know once I do :)