Rails in production with CoreOS

I've always wanted a minimalist OS for my production server, where I could know pretty much everything there is to know about the system. It's not a matter of taste for me; I just have this thing where whenever I touch something, it turns to shit. So the more things that can break, the more they will.

So, having an OS that is very lean helps me get things done. Back when I started deploying apps, I thought ArchLinux was a good candidate for my needs, and it was. But recent troubles with system updates made me look for alternatives. CoreOS looked like what I always wanted, even though it was -- and still is -- the new kid on the block. The OS is light and it makes it really hard to install anything directly on it. The only thing it wants to manage is containers.

All the complexity lives in containers. On a single host, you can have multiple containers running and interacting with each other.

Usually, my projects are pretty much monolithic. I have NGINX serving rails connected to a postgresql database, and maybe a worker doing off-request work through ActiveJob. In a CoreOS environment, that turns my system into a 4-container environment, all connected through the same configuration mechanics you've come to expect from rails. I don't want to change my way of doing things because of a different architecture.

Enough talking, let's get down to business.

Rails in Docker

First, rails needs to be built inside a container. I'm assuming you already have Docker installed on your system.

FROM ruby:2.2
RUN apt-get update -qq
RUN apt-get install -y libpq-dev libxml2-dev libxslt1-dev nodejs
COPY . /srv/my_project
WORKDIR /srv/my_project
ENV RAILS_ENV production
RUN bundle install --deployment
EXPOSE 3000

First, I chose a recent version of Ruby to base my image on. Then, I install all the default requirements for rails to build. There probably exist images that cover those two steps, so you could replace the first three lines with FROM rails:4.2 and it should work.

The things you will need to change are the COPY and WORKDIR directives to reflect your project. The COPY directive tells Docker to copy all of the content of the directory where your Dockerfile lives to the new image it will create. In this scenario, I want to copy the content of my project over to the image.

Then, I set the environment to production, install the dependencies via bundler, and expose port 3000 so NGINX can serve requests to rails.

Running docker build . should now build your project into a docker image ready to be used. If you run into issues, you can read about debugging a docker container, something I wrote a while back that is very useful when you have trouble building images.

Starting a CoreOS cluster

Fleetctl is the component that really made me like this way of doing things. You take all your docker containers and wrap them in services for fleetctl to manage, the same way you would with systemd services. Sounds complicated!

In a normal docker scenario, you would have to build, launch and stop containers. You would have to monitor them in case of failures and you would have to create a structure for you to access your logs and rotate them when needed. Fleetctl does all that, and more.

CoreOS on Digital Ocean

This part of the post is going to be specific to Digital Ocean. I think you can adapt it to other providers, but that will be for you to discover!

Get a discovery token

The first step is to get a new token that will be bound to your new cluster. This is provided for free by CoreOS. To get a new token, open your browser at https://discovery.etcd.io/new?size=1. The size parameter in the URL specifies the number of etcd instances this new cluster requires to start.

Every time you want to start a new cluster, you will need a new token.
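If you prefer the terminal, you can fetch the same token with curl. This is a sketch; the token-extraction step at the end is my own addition, not something the discovery service requires:

```shell
# Request a fresh discovery URL; size=1 means one etcd instance is expected.
DISCOVERY_URL=$(curl -fsS "https://discovery.etcd.io/new?size=1")
echo "$DISCOVERY_URL"

# The token itself is the last path segment of the returned URL.
TOKEN="${DISCOVERY_URL##*/}"
echo "token: $TOKEN"
```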

Create a new droplet

For the droplet to work as expected, you will need to enable Private Networking as well as provide User Data.

user data
#cloud-config
coreos:
  etcd2:
    # URL you received from the previous step.
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
Digital Ocean panel example

SSH into the machine once the droplet is created and verify that fleetctl was properly configured.

fleetctl list-machines

MACHINE         IP              METADATA
6d0cd39c...     -

You should see a single machine in your cluster (yourself).

CoreOS folder structure

I usually create 4 different folders where I will put different things for different purposes.

$ ls
configs/  sources/  data/  services/
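Nothing special is needed to set this up; the folder names are just my convention:

```shell
# Create the four top-level folders on the CoreOS host.
mkdir -p ~/configs ~/sources ~/data ~/services

# Confirm they all exist.
ls -d ~/configs ~/sources ~/data ~/services
```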

The configs folder is where I store things like nginx virtual hosts, rails' secrets.yml and database.yml.

The sources folder is where I will store all the source repositories I need.

The data folder is where I will store all the assets that rails compiles. It will also host all the postgresql data.

Everything that I want persisted will be stored on the host OS, not the containers.

The reason for this is that if I was to store persisted data in the containers, I would lose the data as soon as I would update the image. That means migrating from postgresql 9.4 to 9.5 would wipe my whole database. I would also have to recompile all my assets every time I commit an update to my project. Not very useful.

So, some people use data containers to achieve persistence. I personally prefer to store all the data on CoreOS directly and expose those folders to the right container.
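A quick way to convince yourself that host-mounted folders survive their containers. This is a sketch using the stock busybox image and a throwaway folder, both of which are my choices for the demonstration:

```shell
# Write a file through a bind mount, let the container die (--rm removes it
# on exit), and check that the file survived on the host.
mkdir -p /home/core/data/demo
docker run --rm -v /home/core/data/demo:/data busybox \
  sh -c 'echo "still here" > /data/proof'
cat /home/core/data/demo/proof
```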

The services folder is where I will store all the services that I need to configure for my rails instance to run smoothly. By default, that means 3 services:

  • Postgresql
  • Nginx
  • Rails

Postgresql on CoreOS

The docker community already provides images ready to use. So, for my use case, the only thing required is to create a service that tells Docker a few things about my architecture.

[Unit]
Description=Postgres instance
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill postgres.my_project
ExecStart=/usr/bin/docker run --rm --name postgres.my_project \
  -e POSTGRES_PASSWORD=A_VERY_COMPLICATED_PASSWORD \
  -e POSTGRES_USER=project_user_name \
  -v /home/core/data/postgres:/var/lib/postgresql/data \
  postgres

The [Service] part might be confusing for people who are not used to Docker, so let me take a few moments to explain it in detail.

The first thing the section does is to kill any running postgresql instance because the system expects to only have 1 instance running.

Then, the service starts a new instance of postgres named postgres.my_project.

The --rm option tells Docker to remove the container once it stops. This is a personal preference: I'd rather start with a new container every time so I'm sure I'm always starting with a clean slate. Moreover, the data is hosted outside the container on CoreOS. Nothing in that container is important once it's turned off.

I also specify a user and a password inline. These will be the credentials I use when connecting Rails to the database.

The last argument is probably the most interesting one. It maps the CoreOS data folder to the data folder postgresql is going to use internally to store the database. If you ever mounted a folder through NFS, the same logic is applied here. Another thing to notice is that you don't have to create the folder /home/core/data/postgres. Docker will create it if it doesn't exist.

Once you save this, you can start your postgresql instance.

fleetctl start ~/services/postgres.service

It should take a moment to load, but at the end, you should have a working postgresql database. To test that you indeed have it, you can access it using docker.

docker exec -it postgres.my_project /bin/bash
su postgres
psql

Now that postgresql is running, let's move on to rails.

Rails service

Rails requires a little more work to get running. I covered the first step earlier in this post. Now it's time to adapt what we did earlier to a CoreOS multi-service environment. In order, the steps are:

  • git clone your project to the ~/sources folder;
  • Build the docker image;
  • Set database.yml & secrets.yml files in ~/configs on the host CoreOS;
  • Initialize your database;
  • Configure a new service for fleetctl using this docker image;

I usually commit my Dockerfile to the git repository. So this step is easier if you did that. Based on the example at the beginning of this article, you can do:

cd ~/sources
git clone project_git_url
cd my_project
docker build --tag='rails' .

This will build a docker image named rails. Now that the image is built, I need to configure this project to connect to the database and have its own secrets.

Configuration for this environment

This step creates new secrets.yml & database.yml files that docker will use to replace the internal files. This is good practice, as you don't want to commit secrets and database information to git.

This way, you will always be sure that your configuration for your production environment is safe from modification. It's also one pillar of the 12 factor application. I'm also creating sub-folders with the name of the image I'm storing config for but you are free to organize your folder the way you want. Just keep in mind that if you change the layout, you will have to adjust on the next few steps.

# secrets.yml
production:
  secret_key_base: "!ijqwlkjLKEQLKWJj1327YU!@YI!HOUH!@OUY*!@&Y#IhkJQHWJKHIQU!Y"
# database.yml
production:
  adapter: postgresql
  encoding: unicode
  host: postgres
  pool: 5
  database: project_db_name_in_prod
  username: user_specified_in_postgres.service
  password: password_specified_in_postgres.service
  timeout: 5000

One thing to note is the host key in the database file. The value specified there is not something you are probably familiar with. When I create the rails service further down, I will link the postgresql container to the rails one, and Docker will create an /etc/hosts entry named postgres with the postgresql container's IP. If you don't fully grasp the meaning of all this, don't worry. It will make more sense soon.
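You can see the link mechanism for yourself once the postgres container is running. This is a sketch using the container and image names from this setup:

```shell
# --link postgres.my_project:postgres makes Docker write a "postgres" line
# into the rails container's /etc/hosts; that line is the host that
# database.yml refers to.
docker run --rm --link postgres.my_project:postgres rails \
  grep postgres /etc/hosts
```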

With these two files created, I'm now ready to initialize the database via rails' rake db:setup.

Initializing the database

This part is where I see if everything I've done so far works. This will also initialize the database, something that I have to do manually anyway.

The postgresql container should already be running by now. What I want to do is to run the rails container, get a bash session into it and run the rake command from there.

# I know, this command is long and painful to read.
docker run \
  -v /home/core/configs/my_project/secrets.yml:/srv/my_project/config/secrets.yml:ro \
  -v /home/core/configs/my_project/database.yml:/srv/my_project/config/database.yml:ro \
  --link postgres.my_project:postgres \
  -it rails /bin/bash

# The folder here is the one specified in your Dockerfile
cd /srv/my_project

# If everything went well, this command should connect to postgresql
# and initialize the database
bundle exec rake db:setup
exit

I want to take a second to explain the first command that I run as it's painfully long and it doesn't mean much at first sight.

The command should be split in 3 sections:

  1. Map the secrets.yml and database.yml files I created to the container with readonly access;
  2. Link and name the postgres.my_project container as postgres to use internally. This is where the host comes from within database.yml;
  3. Run the container in interactive mode by running /bin/bash.

So, when the rake command returns, it means that Rails successfully loaded the gems with bundler, connected to the database with the credentials provided, initialized, migrated and seeded the database. The application is good to go!

The last thing I need to do is to create a service so I can manage my rails instance easily with fleetctl.

Rails with fleetctl

I want this service to do 3 things every time it starts:

  1. Compile assets;
  2. Migrate the database if necessary;
  3. Start the webserver (thin, unicorn, puma, etc).
[Unit]
Description=Rails instance
Requires=etcd.service
Requires=docker.service
Requires=postgres.service
After=etcd.service
After=docker.service
After=postgres.service

[Service]
ExecStartPre=-/usr/bin/docker kill rails.my_project
ExecStartPre=/usr/bin/docker run --rm \
  -v /home/core/configs/rails/database.yml:/srv/my_project/config/database.yml:ro \
  -v /home/core/configs/rails/secrets.yml:/srv/my_project/config/secrets.yml:ro \
  --link postgres.my_project:postgres \
  rails bundle exec rake db:migrate
ExecStartPre=/usr/bin/docker run --rm \
  -v /home/core/configs/rails/database.yml:/srv/my_project/config/database.yml:ro \
  -v /home/core/configs/rails/secrets.yml:/srv/my_project/config/secrets.yml:ro \
  -v /home/core/data/rails/assets/:/srv/my_project/public/assets \
  rails bundle exec rake assets:precompile
ExecStart=/usr/bin/docker run --rm --name rails.my_project -p 3000:3000 \
  -v /home/core/configs/rails/database.yml:/srv/my_project/config/database.yml:ro \
  -v /home/core/configs/rails/secrets.yml:/srv/my_project/config/secrets.yml:ro \
  -v /home/core/data/rails/assets/:/srv/my_project/public/assets:ro \
  --link postgres.my_project:postgres \
  rails bundle exec thin start -R /srv/my_project/config.ru

First, the service will try to kill any rails instance that was already running on the machine. This is because I don't want to have problems launching my rails instance, as it uses a fixed port.

Then, I migrate the database by providing only the secrets.yml and database.yml files in readonly mode. Rails should never need to write to those files, so there's no reason to give it unnecessary access.

After that, the service will compile any new assets it finds. It will write those assets to the data folder on the host CoreOS. This is important because I want to serve those assets from NGINX, and NGINX has access to the CoreOS filesystem the same way rails does. On the other hand, NGINX and rails can't share files directly, as the two live in separate containers.

Notice that I give full read/write access to the assets folder. This means that rails will create any folder/file it needs during assets compilation.

If the previous commands returned successfully, the service will launch a thin instance which is the web server I use in this case.

When starting the docker instance with thin, you can see that I give readonly access to the assets, secrets.yml and database.yml. In the normal lifespan of the application, it has no write access to anything besides the database. I like it this way. It might be overkill, but I feel safer knowing that if something happens, Rails can't write to any asset files.

I also map the internal port 3000 to the CoreOS port 3000. This is going to be required for NGINX to proxy_pass requests to the rails application. Before configuring NGINX, start the rails application to make sure things are good.

fleetctl start ~/services/rails.service

# list-units should show that both postgres and rails are running
fleetctl list-units
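If you want a quicker check than list-units, a plain HTTP request against the mapped port should answer. The port comes from this setup; the curl flags are just my way of keeping the output to the status code:

```shell
# Ask rails directly on the host-mapped port; -s silences progress,
# -o /dev/null drops the body, -w prints only the HTTP status code.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000/
```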


NGINX service

I kept NGINX to the very end because of the configuration it requires about the rails infrastructure. The Docker community provides an image for us to use, like it does for postgresql. I just need to provide configuration files so it can route requests to my rails application and things should work just as expected.

Like I did with rails, I will map files from ~/configs to the NGINX container.

server_tokens off;

upstream rails.local {
  # docker0 interface IP, where port 3000 is mapped
  server 172.17.42.1:3000;
}

server {
  listen 80;
  server_name my_project.com;

  location / {
    root /srv/www/html;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://rails.local;
  }

  location ~ ^/(assets)/ {
    root /srv/www/html;
    gzip on;
    expires max;
    add_header Cache-Control public;
    gzip_types text/plain text/css text/js text/xml text/javascript application/javascript application/x-javascript application/xml application/xml+rss;
  }
}
[Unit]
Description=Nginx server
Requires=etcd.service
Requires=docker.service
After=etcd.service
After=docker.service

[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill nginx.my_project
ExecStart=/usr/bin/docker run --rm --name nginx.my_project \
  -v /home/core/configs/nginx:/etc/nginx/conf.d:ro \
  -v /home/core/data/rails/assets:/srv/www/html/assets:ro \
  -p ${COREOS_PUBLIC_IPV4}:80:80 \
  nginx

The upstream IP is the docker0 interface. It usually is that IP address but you can make sure by typing this command in your CoreOS instance:

ifconfig docker0 | grep inet
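The `ip` command gives the same answer with an output format that's easier to parse; the awk/cut split below is my own one-liner, not something from the original setup:

```shell
# Print only the IPv4 address of the docker0 bridge.
ip -4 addr show docker0 | awk '/inet/ {print $2}' | cut -d/ -f1
```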

Still in the configuration file, you will see that the root for both directives points to /srv/www/html. This is not a folder we ever created, so why am I pointing to it? The reason is that NGINX will usually pass requests to the upstream server -- rails -- and only serve the assets that I precompile with rails.

The root folder is going to be mapped to the asset folder I created earlier with the rails asset compilation task. And this is exactly what I do in the service.

First, I map the config folder to the internal configuration folder of NGINX so it can retrieve the directives I wrote in ~/configs/nginx/.

Then, I map the asset folder to /srv/www/html/assets so NGINX can serve those assets.

Finally, I map NGINX to the external IP of CoreOS. This is the first time you see this and there's a good reason for this: NGINX is the only container that I want to make publicly available. Everything routes through NGINX.

Also notice that I give readonly access to the configuration files and the assets. The same rule I had for Rails, I give to NGINX.

To start NGINX, just do as you did for the other services:

fleetctl start ~/services/nginx.service

# List all units to make sure they are all up and running
fleetctl list-units

If everything is working, you should be able to access your rails application from the CoreOS IP, or from a hostname if you set a DNS record pointing at your CoreOS instance.

I know this article is a mouthful. I hope you stayed with me until the end because once everything is configured, it's very easy to monitor!

In a next post, I'll explain how I do CI with this setup. It's really easy and smooth once you get it to run!
