
The simplest deployment that could possibly work

I've been working on a basic chef solo-based rails deployment. I started with Building an AMI, then bootstrapping the instance to run chef with capistrano. Since part two left us with chef running on a box with recipes, it's time to get a rails app running and serving some requests.

For the moment, I've created an application.rb recipe, which is the only thing in the soloistrc. It declares dependencies:

include_recipe "joy_of_cooking::daemontools"
include_recipe "joy_of_cooking::mysql"

and then gets on with dealing with the app server.

'What is daemontools?' I hear you cry. It's the best way I've found to ensure processes are running as you expect. If you aren't using daemontools (and you probably aren't), it's worth a serious look. Daemontools takes the pids out of process management. There are no pid files to get out of sync with processes, and there's no polling to see if something is up. Daemontools knows a process exists because it is a child process, and when it exits, control returns to the supervise process. Daemontools is worth wrapping your head around, but for now, all you need to know is that you give it a script and it runs that script in an infinite loop. I stole the install procedure from Michael Sofaer's Hellspawn, so you'll have to excuse the fact that it looks like ruby instead of chef.

ruby_block "install daemontools" do
  block do
    directory = "/package/admin"
    repo = "git://"
    dir_name = "daemontools-0.76"
    FileUtils.mkdir_p directory
    system("cd #{directory} && git clone #{repo} #{dir_name}")
    system("cd #{File.join(directory, dir_name)} && ./package/install")
  end
  not_if "ls /command/svscanboot"
end
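If the "runs the script in an infinite loop" description sounds too simple to be useful, here's the concept boiled down to a few lines of plain ruby. This is a conceptual sketch of what supervise does, not the real implementation (the `restarts` cap is my addition so the loop can terminate):

```ruby
# Conceptual sketch of daemontools' supervise loop: spawn the service as a
# child process, wait for it (no pid files, no polling), then restart it.
def supervise(command, restarts: Float::INFINITY)
  count = 0
  while count < restarts
    pid = Process.spawn(command)
    Process.wait(pid)          # returns control as soon as the child exits
    count += 1
  end
  count
end
```

The real supervise also handles the `svc` control commands (down, restart, signal), but the core loop really is this small.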

Once daemontools is installed, we move on to installing mysql. A database isn't necessary for an app, but it is necessary for doing anything interesting. I made a philosophical decision to compile mysql from source. I tend to view compiling from source as a continuum - you usually would compile your own app, but you usually wouldn't compile your own kernel. The DB is closely related enough to your app that it's nice to have exact documentation of what you're running. It also lowers the barrier to trying out Percona or Drizzle.
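In chef, a compile-from-source install boils down to a couple of resources. A rough sketch of the shape (the version, paths, and guard here are assumptions for illustration, not the actual recipe):

```ruby
# Sketch only - the version, paths, and guard are assumptions.
user "mysql" do
  system true
end

execute "build mysql from source" do
  cwd "/usr/local/src/mysql-5.5.8"
  command "cmake . && make && make install"
  not_if { File.exist?("/usr/local/mysql/bin/mysqld") }
end
```

The `not_if` guard is what makes the recipe safe to re-run: the multi-minute compile only happens when the binary isn't already there.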

The mysql recipe is a little too long to inline here, but you can and should check it out on github. The recipe starts by installing some dependencies (by all means, leverage your package management system here), creates a mysql user, and does the download/cmake/make install dance. Add a daemontools run script, and up it comes. Set up the users and we're all set. The most interesting part of the recipe, to me, is the block which waits for mysql to start up:

ruby_block "wait for mysql to come up" do
  block do
    Timeout::timeout(60) do
      until system("ls /tmp/mysql.sock")
        sleep 1
      end
    end
  end
end

Usually I'd just throw in a sleep, but I was ashamed to share that with the world. This is a technique I'll carry forward, and I'd love to see it make its way into chef so those less prone to dropping into ruby could make use of it. (Also, if there's a better way, or something already in Chef that I haven't come across, I'm all ears.)
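Extracted from chef, the technique is just a few lines of plain ruby. A sketch of the reusable helper I'd like to see (the name and signature are my invention):

```ruby
require 'timeout'

# Hypothetical helper: poll a readiness check until it returns true,
# or raise Timeout::Error after `seconds`.
def wait_until(seconds: 60, interval: 1)
  Timeout.timeout(seconds) do
    sleep interval until yield
  end
end

# e.g. wait_until(seconds: 60) { File.exist?("/tmp/mysql.sock") }
```

Checking for the socket file directly also avoids shelling out to `ls` for every poll.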

Once mysql is up and running, we do the git clone/bundle/db:create/db:migrate steps rails developers have come to know and love, then write out a daemontools run script and use svc to make sure unicorn restarts on every deploy. I'm not a fan of the cap-style cached copy/magical rollback that the chef deploy resource reimplements, so for now I'm rolling my own.
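Condensed into plain ruby, rolling my own amounts to running a short list of commands and bouncing the service (the paths, command details, and service name below are assumptions, not the actual recipe):

```ruby
# Hypothetical condensed deploy - paths and service name are assumptions.
def deploy_commands(app_dir, rails_env)
  [
    "cd #{app_dir} && git pull && bundle install",
    "cd #{app_dir} && RAILS_ENV=#{rails_env} bundle exec rake db:create db:migrate",
    "svc -t /service/unicorn", # send SIGTERM; supervise restarts the app
  ]
end

def deploy!(app_dir, rails_env = "staging")
  deploy_commands(app_dir, rails_env).each do |cmd|
    system(cmd) or raise "deploy step failed: #{cmd}"
  end
end
```

The `svc -t` at the end is the whole restart story: terminate the child, and daemontools brings it straight back up on the new code.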

Once the app is ready to be started, write out a daemontools run script:

file "/service/unicorn/run" do
  content %{#!/bin/bash
cd /var/staging/foo/src
export RAILS_ENV=staging
source /home/mkocher/.rvm/scripts/rvm
rvm use ruby-1.8.7-p299@captest
exec /command/setuidgid mkocher rackup -p 3000
}
  mode "0755"
end

And daemontools starts up the app. The run script is pretty easy to follow - set up RVM, choose our ruby, and exec the app server.

Hitting the app on 3000 reveals a rails app in all its scaffolded glory:

[app screenshot]

There's more work still to do. There's repetition that needs to be refactored out: Capistrano, Chef, and Rails all need to share some knowledge about the world. Cap and chef need to know the directory we're deploying to, chef and rails need to know the database configuration, and so on. Cap has settings, Chef has nodes, and Rails has configure blocks. I'm leaning towards reinventing the wheel again, very simply, but I'm open to suggestions. Other todo items are putting nginx in front of the app server and moving to a dedicated database server.
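One very simple wheel reinvention would be a single YAML file that all three tools read. A sketch (the file name and keys are my invention, purely to illustrate):

```ruby
require 'yaml'

# Hypothetical shared settings, readable from the Capfile, a chef recipe,
# or a rails initializer alike. In practice this would be
# YAML.load_file("config/settings.yml"); inlined here for illustration.
SETTINGS = YAML.load(<<~YML)
  deploy_dir: /var/staging/foo/src
  rails_env: staging
  database:
    name: foo_staging
    user: foo
YML

# In the Capfile: set :deploy_to, SETTINGS["deploy_dir"]
# In a recipe:    template a database.yml from SETTINGS["database"]
```

It's not clever, but all three tools can parse YAML, and there's exactly one place to change a path.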

I'm not sure which direction this will go next, but I hope by now you're convinced that chef recipes aren't magic - they're just a thin ruby wrapping around the things you do to configure a box - execute some commands and edit some files.

All the recipes mentioned are available on github and MIT licensed, please steal them and make them better.


  • You'll need to add whatever ports you'd like open to your EC2 security group.
  • Having mysql on an AMI-backed instance with the data on the local disk is the opposite of production ready.
  • Sometimes things in the world change - either mirror files yourself, or expect the occasional mirror to disappear, failing a chef run. Wayne Seguin will also occasionally change the install script location for RVM. Chef recipes are living things which must be tended to occasionally.

Read More

Chef Solo – Tugging on the bootstraps

There's certainly no wrong way to run chef, however, a reasonable set of defaults are usually helpful as a starting place. I'm building an example of a simple deploy with chef, with the goals of ending up with something that's easy to understand, copy and modify. Eventually, I hope this allows for discussion of and iteration on what a solid ruby app stack should include.

The first stumbling block with chef for most people is not usually building an AMI. It's usually getting chef and their recipes running on a box. At Pivotal we're firm believers in doing the simplest thing that could possibly work, so what follows is just the least I could do to cleanly run chef. Essentially, you want to install the ruby prerequisites, install a known Ruby (let Wayne figure out how), install chef, get some cookbooks on the box, and run chef. This boils down to a bash script to get you to the point where you can run Ruby, and some capistrano tasks to install gems and run chef.

I've written Soloist, a gem I use for invoking chef solo easily. The goal is for it to be a thin layer over chef-solo that makes it friendlier to use. I'll use it here, but all it's doing is generating solo.rb and node.json files - it's not magic.
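To make the "not magic" claim concrete, here's a sketch of what soloist effectively writes out before shelling out to `chef-solo -c solo.rb`. The helper name and paths are assumptions; `cookbook_path` and `json_attribs` are standard chef-solo configuration settings:

```ruby
require 'json'

# Sketch of the two files soloist generates (helper name and paths are
# assumptions). solo.rb points chef-solo at the cookbooks; node.json
# carries the run list from the soloistrc.
def write_solo_files(dir, recipes)
  File.write(File.join(dir, "solo.rb"),
             %{cookbook_path "#{File.join(dir, 'chef', 'cookbooks')}"\n} +
             %{json_attribs "#{File.join(dir, 'node.json')}"\n})
  File.write(File.join(dir, "node.json"), JSON.dump("recipes" => recipes))
end
```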

Here are the files I ended up with:

├── Capfile                   <- How we tell cap what to do
├── Gemfile                   <- The three gems we need
├── Gemfile.lock
├──          <- The bash script to bootstrap an instance
├── chef
│   └── cookbooks         <- Chef cookbooks go here
│       └── joy_of_cooking
│           └── recipes
│               └── default.rb
├── config
│   └── deploy
│       └── staging.rb        <- What servers are in your environment
└── soloistrc             <- How you'll tell chef what to run

First off we'll start with a bash script. You can see the full thing on github, but the pseudocode is:

as root:
    Create an app user account
    allow password authentication       # adding an Authorized key would be fine too
    add epel as an RPM repository       # for an easy git install
    install git
    install the rvm prereqs             # readline, zlib, openssl and more
    enable passwordless sudo            # it's the only way to live
as user:
    install rvm
    add rvm to bashrc
    set rvm to trust all rvmrcs, install rubys on use and install gemsets on use

That's it. My example is 41 lines of bash (including 11 blank and 6 comment lines).

Capistrano is a reasonable multi server SSH tool. Parts of it are fundamentally flawed, but the "ssh out to some servers and run some commands" part works just fine. So, we'll use it to upload and run the bootstrap script:

desc "bootstrap"
task :bootstrap do
  set :user, "root"
  set :default_shell, "/bin/bash"
  upload "", "/root/"
  run "chmod a+x /root/"
  run "/root/"
end

The bootstrap task sets the user to root because everywhere else the user is set to the app user. We set the shell to /bin/bash to override the use of rvm-shell. We then upload the bootstrap script and run it.

The deploy task consists of:

desc "Deploys"
task :deploy do

To see the details of each task, see the Capfile. Be aware the first time you deploy, rvm will compile the requested ruby - this will take a few minutes.

This leaves us at 4 lines in the terminal to spin up as many ec2 instances as amazon will let you and run chef on them:

ec2-run-instances ami-e67e8d8f -k mine2 -t m1.large
ec2-describe-instances  # you'll need to add the IP(s) of your
                        # new instance(s) to the staging.rb file
cap staging bootstrap
cap staging deploy

Currently, the chef run consists of a recipe which touches a file in the root directory. For my next trick, I hope to actually run a web app. Stay tuned.

All the source code is available on github.

Lessons learned:

  • Rubyforge will be down when you'd like it to be up.
  • Learning the ec2 shell commands is well worth the time over using the web interface.

Open questions/issues:

Read More

EC2 AMI Building Adventures

As rails developers we're often presented with the task of finding the right ops solution for clients. While Heroku has turned out to be a great answer for many of our clients, there are others who don't fit in the (very well made) fixed size box that Heroku has built.

For these projects, the best answer we've found is going the devops route using chef solo. The first problem encountered is "how do I bootstrap my servers?", which by its very nature is a hard problem that requires a fairly thorough understanding of your project's needs, operations, and chef. The best answer has been to ignore bootstrapping at the beginning, and use chef to document/automate what's really different about your app.

Getting from servers being a bunch-o-magic-bits to a 15 minute bootstrapping followed by a chef run is where most of the value of automated configuration is.

However, this answer doesn't satisfy the primal urge to automate everything. To that end, I'm embarking on a side project of bootstrapping a rails server from scratch with chef solo. My hope is that it will be useful as a starting point for rails projects looking to automate their infrastructure. I turned over some turtles, and found the stack ended when I had to choose which AMI to boot up on EC2.

I chose Centos in hopes of leveraging the knowledge of operations experts, and went looking for a basic AMI. There are thousands out there, but what I really wanted was something fully documented in code. Amazon provides "standard" images which are centos based, but they're too much of a collection of "magic bits" for my tastes. Rightscale is nice enough to give out their scripts, and I've adapted them (by way of Nicky Peeters) to build the barest of bare centos AMIs.

The biggest hurdle was getting a 32 bit AMI that was as close as possible to a 64 bit AMI. I had the urge to standardize on the 64bit only, but the price difference is fairly substantial.

I figured this was a one afternoon project, but it turned into a full two weekend project as I learned many things about EC2, Centos and Linux.

Lessons learned or relearned:

  • EC2 requires different /etc/fstab's for different instance types.

  • EC2 does not use the AMI's installed kernel by default, and which one you pick is important. If your centos install hangs at "Creating /dev", this is probably the reason. PV-Grub looks interesting, but I didn't want to add another moving part yet.

  • A special ldconfig is necessary on 32bit servers, but seems to break 64bit servers. If you're seeing linker errors during the boot up process, this might be the problem.

  • The version of yum used to create the image is important - using the yum provided with an earlier 5.x point release did not yield a bootable 5.5 Centos image.

  • The version of amazon's AMI building tools is important. The --kernel option is not available in older versions of the bundle command, which didn't turn out to be necessary but did cause problems. There may have been another reason which currently escapes my memory. The script now installs them at the beginning of the run.

  • /dev/ptmx needs to be created by hand on 64bit servers, but is unnecessary or already created on 32bit servers. /dev/ptmx provides pseudo terminals for SSH - you can't get into your 64bit server without creating one.

Further questions:

  • Is everything above really true? Is there anything that can be left out or simplified?

  • How do I make a matching vagrant box from the same script?

  • EBS Backed AMIs seem to be better in many ways. Can the same script be pointed at an EBS volume?

As this is my first pass, I'm not sharing prebuilt AMIs yet, as they're likely to change fairly rapidly.

The script is available on github.

Read More

Standup 11/22/2010: How can a rails engine provide migrations?

Ask for help

  • We just upgraded one project from Devise 1.1 to Devise 1.2 and ran into "many problems which blew up all sorts of stuff". It was bad enough that we had to roll back. Does anyone have failure or success stories for this upgrade?
  • Can you rate limit EC2 nodes using an Elastic Load Balancer? We'd like to cap the amount of traffic that can be sent to an app instance. I'm thinking advanced use cases like this are probably why you run your own haproxy instead of using ELBs.
  • How are people running database migrations in their engine gems? I know rails 3.1 promises to bring this to the table, but is there a backport gem we can use?


  • iOS 4.2 is out! We're looking forward to trying it on the iPad (finally).

Read More