Now I understand a little bit of how Docker works

So I took a few hours to dig into docker. Yes, I'm late to the party. Fortunately, this isn't some frontend framework so there shouldn't be anything replacing it next week. Or so I hope.

It's probably sad that even though I'm talking about docker and backend stuff, I still manage to let one go about JS frameworks. I'm sure my shrink would have something to say about that but don't worry, this is about Docker.

A while back I did what most people did back then: I installed Docker, said to myself "Mission accomplished" and never looked at it again. I understood the key concepts Docker provided, but I had other stuff to learn, and other stuff to build.

It just wasn't for me at that time. I was happy with how things were on my side of the table. However, this changed two weeks ago when I ran into issues upgrading a Rails project. I had problems with libraries that were either too old or too recent, and I needed a different version of Ruby, so I had to set an rbenv file to tell it that the folder needed a specific Ruby version. Stuff like that.

And it got me thinking that having all those libraries and Rubies on my machine was really not something I wanted. Way too many moving parts. Every time I have an issue with a project, I need to rebuild this map in my head and try to figure out where the problem originates.

  • Is it rbenv?
  • Is it homebrew?
  • Is it a system library?
  • Is it something else?

If I could, I would have everything a given project needs encapsulated, so everything sits at the same level and there's only one place to look when things break.

And that reminded me of docker's promise. Linux containers.

Docker for rails development

My idea is to create a container for every project I run. My host (OS X) would have nothing installed. Everything lives in containers. That means no Postgres on OS X, no Redis, no Rubies, no nothing.

If a project requires special libraries or services, those would be set up in the Linux container for that project. So when I start a container, everything is launched and managed by Docker. When I'm done with the project, everything is shut down, freeing resources for something else. Sounds nice to me.

I also want to keep my projects on OS X so I can edit them with the editor of my choosing and browse them like any other folder on my system. I also don't want to rebuild my container every time I make a change to a project. That would make this whole exercise useless.

Fortunately, Docker provides something called volumes, which are basically folders mounted in the container that link back to folders on the host, where my Rails projects live. This may sound confusing right now, but don't worry, you'll get it by the end of this post.
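As a rough sketch (the image name and paths here are mine, not from a real project), this is what a volume looks like with the raw Docker CLI:

```shell
# Mount the current host directory at /rails inside a throwaway container.
# The -v flag takes host-path:container-path; changes on either side are
# visible on the other immediately, with no rebuild needed.
docker run --rm -v "$PWD":/rails -w /rails ruby:2.2 ls /rails
```

The `ls` at the end just proves the point: the container lists the files sitting in your host folder.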

Bottom line, I want to encapsulate all of a project's dependencies in a Linux container while keeping my development workflow identical.

Here's how I do it.


The way Docker works is that it takes a folder and builds a Linux instance from it. The Dockerfile is Docker's Gemfile: it tells Docker what to build and how to build it. It also tells Docker which files to copy from the folder into the Linux container. By default, Docker hard-copies everything into the container. This is why it's so portable and you can deploy it anywhere: everything it needs to run is copied inside the image. But that's not what I want. I'm developing; the only things I want my image to hold are the dependencies I need to get my Rails project running.

That means I want to install Postgres' development library, Node.js for the JavaScript runtime needed by Rails, the XML libraries and, obviously, a Ruby environment. Those are the things that I don't want on OS X but need for my project.

With this being said, here's the Dockerfile that lives at the root of my rails project.

FROM ruby:2.2
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get install -y libpq-dev
RUN apt-get install -y libxml2-dev libxslt1-dev
RUN apt-get install -y nodejs
WORKDIR /rails

The first line tells Docker to build a container off an existing one: Ruby 2.2. That means the container will already have ruby 2.2 running and configured. No need to install ruby manually.

The RUN instructions make sure my Linux container is up to date and that all the required libraries for a working Rails environment are installed. I'd say the only line you may want to change is the one with libpq-dev, if you use MySQL instead of PostgreSQL. Everything else is pretty much standard issue for a Rails environment.

Notice the WORKDIR instruction. This sets the current working directory. You may notice that /rails is not an existing folder in a standard linux file structure. And you are right, it doesn't exist yet.

Even more confusing, that image is empty. If you were to build that image and inspect it, you would notice that your Linux container doesn't include your Rails project. That's because the Dockerfile copies files into the container. I don't want a copy, I want a symlink, or something similar.

Remember I talked about Volumes and how it made everything possible? Here's how you create those volumes.

Docker-compose to include your project and manage your bundles

Docker-compose is a small utility that takes a YAML file and maps it to the command line. So instead of typing complicated commands, you specify your requirements in that YAML file and let docker-compose generate the commands for you. This makes it easier to share your configuration requirements and to spin up containers.

Docker-compose needs a file called docker-compose.yml at the root of your folder, next to the Dockerfile.

For the current environment, I want docker-compose to manage a few things:

  • Spin up postgres' container
  • Set environment settings for Bundler
  • Map the port
  • Map my project from OSX to the container
  • Link Postgres so rails can connect to it

Docker-compose can manage multiple containers in the same file. The root elements are containers. In my current setup, db and rails are two independent containers.

db:
  image: postgres
  ports:
    - "5432"

rails:
  build: .
  command: bundle exec rails server -p 3000 -b 0.0.0.0
  environment:
    BUNDLE_PATH: /bundle
    RACK_ENV: development
    RAILS_ENV: development
  ports:
    - "3000:3000"
  volumes:
    - .bundle:/bundle
    - .:/rails
  links:
    - db

The container named rails links to the db. This is important because I want rails to be able to connect to the database so it can create, migrate and destroy the database as I develop new features.

The bundle path is really a personal favorite. I often need to look at a gem's contents when I debug, and I wanted them to be accessible outside the container. Thanks to a volume, the bundled gems live at the root of my project.
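With BUNDLE_PATH set to /bundle and the .bundle:/bundle volume, the installed gems should end up somewhere under ./.bundle on the host. The exact layout inside is Bundler's business, so this is just a sketch of how I'd poke at it:

```shell
# Gems installed inside the container land in the host's .bundle folder,
# so they can be opened in a local editor for debugging.
ls .bundle
```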

Also remember the WORKDIR instruction in the Dockerfile? Well, /rails is now mapped to the root of the project.
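To get a feel for what docker-compose is doing under the hood, here is a hand-written approximation of the rails service as a single docker command. The container and image names are my guesses at what compose would generate, not verbatim output:

```shell
# Roughly what the "rails" entry in docker-compose.yml expands to:
docker run --rm \
  -e BUNDLE_PATH=/bundle \
  -e RACK_ENV=development \
  -e RAILS_ENV=development \
  -p 3000:3000 \
  -v "$PWD/.bundle":/bundle \
  -v "$PWD":/rails \
  --link myproject_db_1:db \
  myproject_rails \
  bundle exec rails server -p 3000 -b 0.0.0.0
```

One command like this per container, typed correctly every time: that's the chore docker-compose removes.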

Now I'm ready to build my docker image, and bundle install the gems required for rails.

docker-compose build
docker-compose run rails bundle install

You'll notice that Bundler complains about running with root privileges. There is an issue on GitHub that prevents the volume from having normal user privileges, so for now I have to run everything as root. Not a big deal in my opinion.

Anyway, now that the image is built and the project's gems are bundled, it's time to configure database.yml to connect to the container. This part requires some magic that I can't yet explain. But it works.

default: &default
  adapter: postgresql
  timeout: 5000
  encoding: unicode
  pool: 5
  username: postgres
  host: db

development:
  <<: *default
  database: myapp_development

test:
  <<: *default
  database: myapp_test

production:
  <<: *default
  database: myapp_production

The most important thing to remember here is the host setting. It's set to db. No idea how it knows db is the Linux container running Postgres, but it knows!
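If you want to peel back the magic a little: as far as I understand, Docker's links feature writes an entry into the container's /etc/hosts, mapping the alias db to the Postgres container's IP. You can check with something like this (the exact output will vary):

```shell
# Print the rails container's hosts file; the links: setting in
# docker-compose.yml should have added a line for the "db" alias.
docker-compose run rails cat /etc/hosts
```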

At this point, I should be able to create my database and run my project.

docker-compose run rails rake db:create
docker-compose up

If everything went fine, you should see something like Heroku's logs.

Recreating myproject_db_1...
Recreating myproject_rails_1...
Attaching to myproject_db_1, myproject_rails_1
db_1    | LOG:  database system was shut down at 2015-06-12 13:59:52 UTC
db_1    | LOG:  database system is ready to accept connections
db_1    | LOG:  autovacuum launcher started
rails_1 | => Booting WEBrick
rails_1 | => Rails 4.2.1 application starting in development on
rails_1 | => Run `rails server -h` for more startup options
rails_1 | => Ctrl-C to shutdown server
rails_1 | [2015-06-12 14:00:16] INFO  WEBrick 1.3.1
rails_1 | [2015-06-12 14:00:16] INFO  ruby 2.2.2 (2015-04-13) [x86_64-linux]
rails_1 | [2015-06-12 14:00:16] INFO  WEBrick::HTTPServer#start: pid=1 port=3000

If you're on OS X, you need boot2docker's IP to point your browser at your project.

boot2docker ip

In this case, I would point my browser to that IP on port 3000 and would see the result of all this hard work.
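The two steps can be combined into one line (assuming boot2docker is running; open is the standard OS X command to launch the default browser):

```shell
# Grab the boot2docker VM's IP and open the rails app on port 3000.
open "http://$(boot2docker ip):3000"
```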

Browsing your project from inside a Docker container

A word of caution

I just started working with this setup. I'm sure it's not all pink and lovely. I'm sure I'll hit issues. It's probably not perfect. But I like what I see.

I'm still holding off on deleting rbenv, PostgreSQL and everything else from my machine, since I don't want to reinstall it all in a week, but if everything goes nicely over the next couple of weeks, I'll move to containers permanently.

I just like the idea very much.
