Running GitHub Actions Runners on Gigahatch Managed Kubernetes

In this guide we will show you how to run self-hosted GitHub Actions runners cheaply and with minimal maintenance using Gigahatch Managed Kubernetes, starting at just 4.87€ per month.

Prerequisites

To follow along, you will need the following:

  1. A Gigahatch account with access to Managed Kubernetes.
  2. A GitHub organisation (or personal account) in which you can create a GitHub App.
  3. The kubectl CLI installed on your machine.

Creating the cluster

To start with, we need to create a cluster to run our runners. If you already have one, you can skip this step.

  1. Go to your Gigahatch Managed Kubernetes organisation.
  2. Click Create cluster.
  3. Give the cluster a name, like GitHub Actions Runners, and choose a node size. For this example we will use one replica of the smallest SharedI1 node with 2 CPUs and 4GB RAM. If you need more power for your runners, or want to run several in parallel, choose a larger node size.
  4. Click Create Cluster and wait a few minutes for your cluster to be created.

Setting up kubectl

After your cluster has finished creating, click the Get Kubeconfig button. This downloads a YAML file that you need in order to connect to your cluster. Save it somewhere secure and open your terminal. Now you need to tell the kubectl CLI where to find this file. The easiest way is to set it as the KUBECONFIG environment variable.

In bash this looks like so:

export KUBECONFIG='PATH_TO_YAML'

In powershell this looks like so:

$env:KUBECONFIG = "PATH_TO_YAML"
# Or if it's in the current directory:
$env:KUBECONFIG = "$(Get-Location)\<NAME_OF_YAML>.yaml"
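
If you prefer a setup that survives new shell sessions, you can also merge the downloaded file into your default kubeconfig. This is just a sketch in bash, assuming the file was saved as gigahatch.yaml in the current directory and that ~/.kube/config already exists:

# Back up the existing kubeconfig before touching it
cp ~/.kube/config ~/.kube/config.bak

# Merge both files and replace the default kubeconfig with the flattened result
KUBECONFIG=~/.kube/config:./gigahatch.yaml kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig ~/.kube/config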

If you would rather not use the environment variable, you can pass it as the --kubeconfig flag to the kubectl CLI. For example:

kubectl --kubeconfig 'PATH_TO_YAML' get nodes

Using kubectl

To check if you can access the cluster, run:

kubectl get nodes

You should now get a list of nodes like so:

NAME                                                       STATUS   ROLES                       AGE     VERSION
41488847-47e9-4f03-90df-7fb5f7f8e15f-l2ljs-qx8mf           Ready    <none>                      37s     v1.31.0+k3s1
9b43501b-f5d8-4cce-bdab-b5fdcb777325-control-plane-n67gv   Ready    control-plane,etcd,master   3m56s   v1.31.0+k3s1

If this doesn't work, double-check that KUBECONFIG points to the downloaded file, or pass the --kubeconfig flag explicitly as shown above.
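
The following commands are a quick troubleshooting sketch in bash; they show which configuration kubectl is actually using:

# Check which file the environment variable points to
echo $KUBECONFIG

# Show the configuration currently in use (server address, current context, user)
kubectl config view --minify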

Preparing GitHub

In this guide we will set up the runners for our whole organisation. If you want to restrict them to a single repository, please consult the official documentation about the differences.

To authenticate the runners with our GitHub organisation, we will create a GitHub App.

  1. Go to your GitHub homepage and click on your profile icon in the top-right.
  2. Click Your organizations. (If you want to create the app in your personal account, click Settings instead and skip the next step.)
  3. Click Settings on the organisation in the list you want to use.
  4. Scroll down and click Developer settings -> GitHub Apps in the left menu bar.
  5. Click New GitHub App.
  6. Enter a name; we chose Gigahatch ARC.
  7. Enter https://github.com/actions/actions-runner-controller as the Homepage URL.
  8. Scroll down to Webhook and deactivate the Active checkbox.
  9. Under Permissions, open Repository permissions and select Metadata: Read-only.
  10. Open Organization permissions and select Self-hosted runners: Read and write.
  11. Click Create GitHub App.

Now we need to install the app and note down the connection settings for our runners.

  1. On the GitHub App page, note the value of App ID. We will need it a bit later.
  2. Under Private keys, click Generate a private key and save the .pem file somewhere safe.
  3. In the menu on the left, click Install app and click Install next to your organisation.
  4. After you confirm the installation, note the installation id. It can be found in the url like so: https://github.com/organizations/ORGANIZATION/settings/installations/INSTALLATION_ID. We will need this value later too.

Creating the runners

Now we can create the runners in the cluster. First, we need to install the Helm charts for the Actions Runner Controller (https://github.com/actions/actions-runner-controller). To do this, create the following two files:

arc.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: arc-system
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: arc
  namespace: kube-system
spec:
  chart: oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
  targetNamespace: arc-system

and

arc-runners.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: arc-runners
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: arc-runner-set
  namespace: kube-system
spec:
  chart: oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
  targetNamespace: arc-runners
  valuesContent: |-
    githubConfigUrl: https://github.com/<ORGANISATION> # <- Replace this with your organisation url
    githubConfigSecret: pre-defined-secret
    maxRunners: 5
    minRunners: 1
    template:
      spec:
        initContainers:
        - name: init-dind-externals
          image: ghcr.io/actions/actions-runner:latest
          command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
          volumeMounts:
            - name: dind-externals
              mountPath: /home/runner/tmpDir
        containers:
        - name: runner
          image: ghcr.io/actions/actions-runner:latest
          command: ["/home/runner/run.sh"]
          env:
            - name: DOCKER_HOST
              value: unix:///var/run/docker.sock
          volumeMounts:
            - name: work
              mountPath: /home/runner/_work
            - name: dind-sock
              mountPath: /var/run
        - name: dind
          image: docker:dind
          args:
            - dockerd
            - --host=unix:///var/run/docker.sock
            - --group=$(DOCKER_GROUP_GID)
            - --mtu=1450
            - --default-network-opt=bridge=com.docker.network.driver.mtu=1450
          env:
            - name: DOCKER_GROUP_GID
              value: "123"
          securityContext:
            privileged: true
          volumeMounts:
            - name: work
              mountPath: /home/runner/_work
            - name: dind-sock
              mountPath: /var/run
            - name: dind-externals
              mountPath: /home/runner/externals
        volumes:
        - name: work
          emptyDir: {}
        - name: dind-sock
          emptyDir: {}
        - name: dind-externals
          emptyDir: {}

Make sure to replace the githubConfigUrl with the URL of your organisation. You can also adjust maxRunners and minRunners to your liking: maxRunners limits how many runners can run at the same time, and minRunners is the number of idle runners that are kept around even when there are no jobs.

Then run the following command to deploy the helm charts:

kubectl apply -f .

Make sure to pass the --kubeconfig flag if not using the environment variable.
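
The HelmChart resources above are handled by the cluster's built-in Helm controller (the apiVersion helm.cattle.io/v1 and the k3s version in the node list hint at this). As a quick sanity check, not required for the setup, you can confirm that the charts were registered and watch the install jobs:

# List the HelmChart resources created above
kubectl get helmcharts -n kube-system

# The Helm controller runs helm-install jobs that deploy the charts
kubectl get jobs -n kube-system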

Now we have to create the secret that the runners will use to connect to GitHub:

kubectl create secret generic pre-defined-secret \
  --namespace=arc-runners \
  --from-literal=github_app_id=<APP_ID_HERE> \
  --from-literal=github_app_installation_id=<INSTALLATION_ID_HERE> \
  --from-file=github_app_private_key=<PATH_TO_PEM_FILE_HERE>
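
The private key has to be stored under the key github_app_private_key, together with github_app_id and github_app_installation_id. To double-check that the secret contains these three keys (describe only prints key names and sizes, not the key material), run:

# Should list github_app_id, github_app_installation_id and github_app_private_key
kubectl describe secret pre-defined-secret -n arc-runners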

Wait a few moments, then check whether the pods are up by running the following commands:

kubectl get pods -n arc-system
kubectl get pods -n arc-runners

You should get outputs like the following:

NAME                                     READY   STATUS    RESTARTS   AGE
arc-gha-rs-controller-78d9bbf976-jvr25   1/1     Running   0          43m
arc-runner-set-754b578d-listener         1/1     Running   0          5m53s

NAME                                     READY   STATUS    RESTARTS   AGE
arc-runner-set-nq9kv-runner-zthk5        2/2     Running   0          14s

If the status isn't Running yet, wait a few minutes and try listing the pods again.
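
If the pods stay in a non-Running state, the usual suspects are a typo in the secret or in githubConfigUrl. The following commands are a troubleshooting sketch (the deployment name is taken from the example output above and may differ slightly in your cluster); they show the pod events and the controller logs, where authentication errors against GitHub show up:

# Show events for the runner pods (image pulls, missing secret, scheduling issues)
kubectl describe pods -n arc-runners

# Logs of the controller deployment
kubectl logs -n arc-system deployment/arc-gha-rs-controller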

We have now successfully set up the runners, so let's run our workflows on them.

Configuring the workflow

To run a workflow on the new runners, just change the runs-on property in the workflow's YAML to arc-runner-set, which matches the name of the HelmChart we created for the runner scale set. An example workflow looks like this:

name: Actions Runner Controller Demo
on:
  workflow_dispatch:

jobs:
  Explore-GitHub-Actions:
    # This must match the name of the runner scale set, arc-runner-set in our case
    runs-on: arc-runner-set
    steps:
      - run: echo "🎉 This job uses runner scale set runners!"

Now try running your workflow on your new runners.
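
If you have the GitHub CLI installed, you can also trigger the workflow_dispatch example above straight from the terminal and follow the run. This assumes the workflow file is committed to a repository in your organisation and that gh is authenticated:

# Trigger the example workflow in your repository (replace the placeholders)
gh workflow run "Actions Runner Controller Demo" --repo <ORGANISATION>/<REPOSITORY>

# Follow the run and see it being picked up by the new runners
gh run watch --repo <ORGANISATION>/<REPOSITORY>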

Conclusion

You have now successfully set up your GitHub Actions runners on Gigahatch Managed Kubernetes. If you have any questions or get stuck somewhere, please write to us at info@gigahatch.ch. We look forward to hearing from you.
