Jakub Oboza
Software Engineer, Founder and Enthusiast
Aug 2, 2020 14 min read

Rails on Kubernetes

Let's start

Deploying Rails on Kubernetes takes a few steps to master, but it doesn't differ that much from a normal Capistrano deployment. You still need some sort of frontend server, like Nginx or Apache, with Rails servers behind it as the backend. That doesn't change, but there are a lot of details to grasp :). Many tutorials on the web deploy Rails without any way to serve static files, or with RAILS_SERVE_STATIC_FILES=true, which makes the Rails application host static files through the Rails stack. This is a bad idea in terms of performance: it can significantly reduce the rate at which you can serve clients. What I suggest instead is a complete example (in minikube) of how to prepare your images, do a local test, and run in production.

Contents

  1. How Rails was usually deployed
  2. Better fit for container
  3. How we will do it
  4. Rails app life cycle in production
  5. How you should run it in k8s
  6. Sidecar approach
  7. Split image approach
  8. Example setup in minikube
  9. Summary

How Rails was usually deployed

For years, the bog-standard Rails deployment was simply a set of VMs (virtual machines) hosting the Rails application server via Passenger, Puma, or Unicorn, with Nginx on the same box serving the static files. This worked, and we will more or less do the same in Kubernetes with our application.

Having both the frontend server (Nginx) and the backend server (Rails) on the same box was useful, as the two could communicate via a socket. We can't do this in Kubernetes, but we can use ports, so all is not lost :).

One word about Passenger and Puma

Puma seems to be the current "way to go" for serving Rails apps. A few years ago, Rails apps were most commonly deployed using Passenger, which was often compiled into Nginx as a module. That doesn't fit Kubernetes well, as it would force a complex setup with multiple processes running inside a single container. Containers are designed around the idea of single responsibility, and that is a key strength we don't want to lose.

Better fit for container

Kubernetes lets you deploy your applications into a self-healing, observable environment. This alone is the biggest benefit of choosing the platform. For years, we developers have tried to bolt monitoring and self-healing capabilities onto deployments to make them more resilient. With k8s we get all of this for "free".

Rails wasn't designed with Kubernetes in mind; it is a much older project. But it is a great fit for containers.
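To make the "self-healing" part concrete: in practice you wire it up with liveness and readiness probes on the container, so Kubernetes restarts a hung Rails process and only routes traffic to pods that answer. A sketch for the Rails container (the probe path and port here are my assumptions, not part of the example deployments later in this post):

```yaml
# Sketch: probes for the Rails container (path "/" and port 3000 assumed)
livenessProbe:
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /
    port: 3000
  periodSeconds: 10
```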

How we will do it

I will present two approaches, each with its advantages and disadvantages. The first simply uses a sidecar container: a single pod runs two containers, one with Nginx and one with Rails, connected via ports and a shared volume. The second option builds, as part of deployment, an Nginx image with the entire public directory copied into it, so that Nginx serves all static content and the application server is entirely separate.

Pros and Cons

Sidecar approach:

This approach requires the fewest images to be built: just a thin Nginx image with our config and a bigger Rails image. In Kubernetes, both containers share a volume; the application server copies its public directory there on startup, and Nginx serves static content from it.

The minus of this approach is that if either container crashes, the entire pod is restarted, which can give a false impression of what is wrong. Each pod contains a frontend and a backend server, so for 20 pods you end up with 20 Nginx and 20 app servers, which isn't an ideal mapping.

Pre-built images approach:

With this approach, we separate the images: during the build process, assets are generated and copied into the Nginx image. This means we have to build two images each time we want to deploy. It also means we have two deployments, one monitoring each group of pods. A definite plus here is the separation of frontend and backend: we can have 2 frontend servers and 40 backend servers.

Both approaches have their pros and cons, and the difference is not huge. We might want to scale the frontend and backend to different sizes, which favors the second option, or we might need to share more data between containers, which favors the first. The good thing is that Kubernetes lets us swap between both.

Rails app life cycle in production

During deployment, Capistrano does several things; the most notable to us are:

  1. Copy code
  2. Install gems
  3. Precompile assets
  4. Start server

We will replicate the same steps. The twist is that we will have to split them into two parts. One is docker image building and pushing. The second is rolling it out to Kubernetes.

Steps 1, 2 and 3 (copying the code, installing gems, and precompiling assets) are done before deployment, at the image-building stage. Only the last step, starting the server, is part of the runtime we trigger in Kubernetes.

How you should run it in k8s

You shouldn't have to sacrifice performance for the ability to deploy in containers; you should expect equal or better performance! Every tutorial I saw on the topic suggested killing much of that performance with RAILS_SERVE_STATIC_FILES=true. In my opinion, that approach works well only if you are building a pure JSON/XML API in Rails, without any assets.

What the RAILS_SERVE_STATIC_FILES=true flag does is force Rails to serve static files through the Rails stack. It might be useful in local testing and development, but in production it doesn't make sense. That is why it is off by default in the production config.
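For reference, this is the line the flag toggles in the Rails-generated production config (from a stock config/environments/production.rb):

```ruby
# config/environments/production.rb (as generated by `rails new`)
# Static file serving is opt-in via the env var; leave it unset so that
# Nginx, not the Rails middleware stack, serves everything under public/.
config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
```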

Sidecar approach

The first example I will highlight is the sidecar approach. We will build a simple Nginx image with a basic config, a more complex Rails app image, and the Kubernetes config that glues it all together.

The idea here is that we mount a shared volume at the path /app_shared; during boot, the Rails container copies the entire public directory to the shared volume, and Nginx serves static files from it.

Nginx image

The Nginx image for this setup is very small.

FROM nginx:1.17.8-alpine

LABEL maintainer="Jakub Oboza <jakub.oboza@gmail.com>"

COPY ./nginx_conf/nginx.conf /etc/nginx/nginx.conf
COPY ./nginx_conf/default-small.conf /etc/nginx/conf.d/default.conf

WORKDIR /usr/share/nginx/html

All it does is build on the Alpine Nginx image and copy in the config files. While nginx.conf isn't interesting, we need to talk about default-small.conf:

upstream app {
    # This needs customizing or reading from ENV, consider envsubst
    server localhost:3000;
}

server {
    listen       80;
    server_name  localhost;

    root   /app_shared/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    error_page  404              /404.html;
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root  /app_shared/public;
    }
}

Containers inside a pod share the same network namespace, so the Nginx sidecar can hit localhost:3000 and reach the Rails container just fine; we rely on that here. The second important bit is where the root of the server config points: /app_shared/public.

Inside of the Kubernetes config, we will set this up like this:

...
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - image: jakuboboza/rails-on-k8s:latest
        name: backend
        volumeMounts:
        - name: shared-data
          mountPath: /app_shared
        ports:
        - containerPort: 3000
      - image: jakuboboza/rails-on-k8s:ngx-small-latest
        name: frontend
        volumeMounts:
        - name: shared-data
          mountPath: /app_shared
        ports:
        - containerPort: 80

...

In the deployment specification for Kubernetes, we establish a volume that is mounted at /app_shared in both containers.

The final bit of this setup is the startup script; in my example, I called it ./bin/run_sidecar:

#!/bin/bash

# Copy the public directory to the shared volume so Nginx can serve it
cp -R /app/public /app_shared/public

echo "Creating Database"
bundle exec rake db:create

echo "Migrating Rails!"
bundle exec rake db:migrate

echo "Starting Rails!"
# exec so Rails replaces the shell and receives SIGTERM directly from Kubernetes
exec bundle exec rails server -b 0.0.0.0

The first step is to copy the public directory to the mounted volume. Next we run db:create (which is safe, as it only creates the database if it doesn't already exist), run the migrations, and start the server.

Rails image!

This part is a bit more involved, as we need to explain a few things.

We are building the image with the production environment set. This means we need to set up a secret for the app, and it is important to keep it secure.

First, you might need to set up your editor:

export EDITOR=vim

To edit the secrets file run:

rails credentials:edit --environment production

and set the secret key base like this:

secret_key_base: "0f8c4aee0a06d64a9ead3da8de2ba24ff55b80eb2c00585c464e4578020d879ac28e969b395a0fff46f855ec75be4e95404f737bcf84d4f7eefc52c0a24e29a0"

The above is just an example; you should for SURE not use this string. Once the edit is done, it will update the file config/credentials.yml.enc.

You can generate a good candidate with the rake command:

bundle exec rake secret
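If you'd rather not shell out to rake, the same kind of key can be produced directly with Ruby's SecureRandom, which is what the secret task uses under the hood:

```ruby
# Generate a 128-character hex string suitable as a secret_key_base
# (64 random bytes, hex-encoded).
require 'securerandom'
puts SecureRandom.hex(64)
```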

Now we need to build the Docker image that has the entire app ready and assets precompiled.

FROM ruby:2.5.5-slim

RUN mkdir -p /usr/share/man/man7
RUN mkdir -p /usr/share/man/man1


RUN apt-get update -qq && apt-get install -y curl build-essential apt-transport-https ca-certificates postgresql-client
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y yarn git libpq-dev nodejs

ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME

RUN gem install bundler:2.1.2
ADD Gemfile* $APP_HOME/
RUN bundle install --without=development,test --deployment

ADD . $APP_HOME
RUN yarn install --check-files

RUN RAILS_ENV=production bundle exec rake assets:precompile

CMD ["./bin/run"]

This is a fairly bog-standard image: we install dependencies, copy the Gemfile, and install gems, the same way Capistrano would, i.e. without the development and test groups. The final two steps precompile assets with the production env forced and set the default entry point command to ./bin/run.

K8s deployment config

The key piece that ties this strategy together is the deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: railsk8s-sidecar-deploy
  name: railsk8s-sidecar-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: railsk8s-sidecar
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: railsk8s-sidecar
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - image: jakuboboza/rails-on-k8s:latest
        imagePullPolicy: Always
        name: backend
        command: ["./bin/run_sidecar"]
        env:
        - name: DATABASE_HOST
          value: example-postgres
        - name: DATABASE_PASSWORD
          value: postgres123
        - name: DATABASE_USERNAME
          value: postgres
        - name: RAILS_ENV
          value: production
        volumeMounts:
        - name: shared-data
          mountPath: /app_shared
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "128Mi"
            cpu: "125m"
      - image: jakuboboza/rails-on-k8s:ngx-small-latest
        imagePullPolicy: Always
        name: frontend
        volumeMounts:
        - name: shared-data
          mountPath: /app_shared
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "32Mi"
            cpu: "125m"

This is more or less how the deployment file for a Rails application in sidecar tandem looks. You should move the database env values to a ConfigMap or Secret, but I am deliberately keeping this example simple.
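The Secret mentioned above could look like the sketch below (the Secret name and the use of `stringData` are my choices, not something from the repo):

```yaml
# Hypothetical Secret for the DB settings; stringData takes plain text
# and Kubernetes base64-encodes it into data on creation.
apiVersion: v1
kind: Secret
metadata:
  name: railsk8s-db-secret
type: Opaque
stringData:
  DATABASE_HOST: example-postgres
  DATABASE_USERNAME: postgres
  DATABASE_PASSWORD: postgres123
```

The backend container could then replace its literal `env:` entries with `envFrom: [{secretRef: {name: railsk8s-db-secret}}]`, which injects all three keys as environment variables.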

Image building and naming

I have a Makefile that builds all the images and pushes them to a single repo on Docker Hub. That is why the image names might look a bit off.

The Makefile for this example looks like this:

GIT_COMMIT := $(shell git rev-parse HEAD)

docker:
  @echo LATEST COMMIT IS $(GIT_COMMIT)
  docker build -t jakuboboza/rails-on-k8s:latest -t jakuboboza/rails-on-k8s:$(GIT_COMMIT) .
  docker push jakuboboza/rails-on-k8s:latest
  docker push jakuboboza/rails-on-k8s:$(GIT_COMMIT)
  docker build -t jakuboboza/rails-on-k8s:ngx_latest -t jakuboboza/rails-on-k8s:ngx_$(GIT_COMMIT) -f Dockerfile.nginx .
  docker push jakuboboza/rails-on-k8s:ngx_latest
  docker push jakuboboza/rails-on-k8s:ngx_$(GIT_COMMIT)
  docker build -t jakuboboza/rails-on-k8s:ngx-small-latest -t jakuboboza/rails-on-k8s:ngx-small_$(GIT_COMMIT) -f Dockerfile.nginx-small .
  docker push jakuboboza/rails-on-k8s:ngx-small-latest
  docker push jakuboboza/rails-on-k8s:ngx-small_$(GIT_COMMIT)

It builds three images: the Rails app image used in both examples, and two different Nginx images, ngx-small-latest for the sidecar example and ngx_latest for the second example.

Why do we run migrations as part of the startup script?

In production, you might elect to have a separate Job that runs only the migrations and triggers a notification, but I chose to do it differently.

  ...
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  ...

I went with a maximum unavailable of 1 and a maximum surge of 1 during rolling updates. This way I can be pretty sure only one pod will be updated at a time. Rails schema migrations also run in a transaction (on PostgreSQL, where DDL is transactional), which gives us a second layer of safety.

Ideally, you would go with the Job approach and remove the migration line from the app start script.
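If you do go the Job route, a minimal migration Job could look like this (a sketch: the Job name is made up, while the image and env values mirror the deployment above):

```yaml
# Hypothetical one-shot migration Job, run once per deploy before
# rolling out the app (which would no longer migrate on startup).
apiVersion: batch/v1
kind: Job
metadata:
  name: railsk8s-migrate
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: jakuboboza/rails-on-k8s:latest
        command: ["bundle", "exec", "rake", "db:migrate"]
        env:
        - name: DATABASE_HOST
          value: example-postgres
        - name: DATABASE_PASSWORD
          value: postgres123
        - name: DATABASE_USERNAME
          value: postgres
        - name: RAILS_ENV
          value: production
```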

Split image approach

In this approach, we bake the public assets into the Nginx image during the build, so there are only a few variations from the previous setup. Let's start!

We are using the same Rails image as above, but we will run the app with the ./bin/run file:

#!/bin/bash

echo "Creating Database"
bundle exec rake db:create

echo "Migrating Rails!"
bundle exec rake db:migrate

echo "Starting Rails!"
# exec so Rails replaces the shell and receives SIGTERM directly from Kubernetes
exec bundle exec rails server -b 0.0.0.0

Plain and simple: create, migrate, and start the server. We don't copy anything to mounts, as there are no mounts; all we have to do is start the image. With that out of the way, let's jump to the Nginx image build.

The Nginx image build is a multi-stage build. First, we build the assets, then we copy them into a plain Nginx image, discarding the big build image and leaving only the essentials we need.

FROM ruby:2.5.5-slim as build

RUN mkdir -p /usr/share/man/man7
RUN mkdir -p /usr/share/man/man1


RUN apt-get update -qq && apt-get install -y curl build-essential apt-transport-https ca-certificates postgresql-client
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y yarn git libpq-dev nodejs

ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME

RUN gem install bundler:2.1.2
ADD Gemfile* $APP_HOME/
RUN bundle install --without=development,test --deployment

ADD . $APP_HOME
RUN yarn install --check-files

RUN RAILS_ENV=production bundle exec rake assets:precompile

FROM nginx:1.17.8-alpine

LABEL maintainer="Jakub Oboza <jakub.oboza@gmail.com>"

COPY ./nginx_conf/nginx.conf /etc/nginx/nginx.conf
COPY ./nginx_conf/default.conf /etc/nginx/conf.d/default.conf

RUN mkdir /usr/share/nginx/app

COPY --from=build /app/public /usr/share/nginx/app

WORKDIR /usr/share/nginx/app

This is exactly as described above: first we repeat the same steps as in the Rails image build, then we copy the needed files into our Nginx image and discard the build stage. It takes some time, but if we order the image builds well, most of the layers will not need to be rebuilt.

After that, we need to look at the Nginx default.conf; this part is super important.

upstream app {
    # This needs customizing or reading from ENV, consider envsubst
    server k8s-rails-svc:3000;
}

server {
    listen       80;
    server_name  localhost;

    root   /usr/share/nginx/app;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    error_page  404              /404.html;
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/app;
    }
}

Here I deliberately changed the name of the backend to k8s-rails-svc, to highlight that we have to remember it when writing the config files for k8s. We need a service with this name on port 3000 that acts as our backend Rails app.

Kubernetes service config:

apiVersion: v1
kind: Service
metadata:
  name: k8s-rails-svc
spec:
  selector:
    app: railsk8s-app
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000

This creates our k8s-rails-svc, which satisfies our config's needs.

Now, we could put all of this into env vars, but if everything is an env var, debugging can start to get tricky. So I recommend slowly moving from concrete values toward env vars. Also, you might not want to maintain 10+ ConfigMaps for each of your Rails apps.

Now let’s see the deployment file! Or should I say two deployment files. One for App and one for Nginx.

App deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: railsk8s-app
  name: railsk8s-app-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: railsk8s-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: railsk8s-app
    spec:
      containers:
      - image: jakuboboza/rails-on-k8s:latest
        imagePullPolicy: Always
        name: app
        env:
        - name: DATABASE_HOST
          value: example-postgres
        - name: DATABASE_PASSWORD
          value: postgres123
        - name: DATABASE_USERNAME
          value: postgres
        - name: RAILS_ENV
          value: production
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "128Mi"
            cpu: "125m"

Nothing special here: we pass in the DB config and other needed env values and expose port 3000.

and for Nginx:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: railsk8s-nginx-deploy
  name: railsk8s-nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: railsk8s-ngx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: railsk8s-ngx
    spec:
      containers:
      - image: jakuboboza/rails-on-k8s:ngx_latest
        imagePullPolicy: Always
        name: frontend
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "32Mi"
            cpu: "125m"

In this example, we have only 2 Nginx replicas and 2 backend pods, but in real life you might want many more backend pods than frontend ones in the deployment.

Try it out!

Repository with code: https://github.com/JakubOboza/RailsOnKubernetes

To run the examples just:

  1. Install and start minikube
  2. Clone the repository
  3. Run kubectl apply -f k8s in the repository

This will create all the pods, an example Postgres database, and all the services, and start the images. If you are using Linux, you can check them out on a NodePort, e.g.:

curl $(minikube ip):30080
curl $(minikube ip):30081

I suggest opening up in the browser to check if assets load as they should.

For OSX folks: since minikube 1.12, minikube ip always returns 127.0.0.1, which isn't useful for checking things out.

You will have to run

minikube service --url railsk8s-ngx-svc

and in a second terminal

minikube service --url railsk8s-sidecar-svc

to get the IP:PORT to open in the browser.

Real life production deployment

In a real-life production deployment, you will use an Ingress to bring traffic into your application. Every cloud provider or company on-prem setup does this in a slightly different way, but all the services and deployments are production-ready now!
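To make the Ingress step concrete, a minimal example for the split-image setup could look like this sketch (the host is made up and the ingress controller setup is cluster-specific; railsk8s-ngx-svc is the Nginx service name used in the minikube example):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: railsk8s-ingress
spec:
  rules:
  - host: rails.example.com        # assumption: replace with your domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: railsk8s-ngx-svc # the Nginx service from this example
            port:
              number: 80
```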

In real life, you might go with the sidecar approach or bake the assets into the Nginx image. Both are equally viable, but in my opinion the split-image approach simply fits better.

If you need a volume shared across all Rails and Nginx containers, you can still do it via Persistent Volumes and Persistent Volume Claims.

The End

I hope this short post helped a bit on your journey with Kubernetes, and that you will not have to serve your static files via the Rails stack anymore ;).

I think Kubernetes is the best thing that has happened to the app deployment life cycle since 2000.