Using the Boogie Board Sync 9.7

I just got my hands on a Boogie Board Sync 9.7. I asked our IT department if we could evaluate one because I am so frustrated with index cards. My handwriting is, at best, atrocious. I was born with a silver keyboard in hand.

Extreme Pairing

Given the recent interest and discussions around pair programming, I thought that now would be a good time to write up my experiences doing dual-computer, in-person pair programming with Mavenlink at Pivotal Labs. Pivotal Labs is all about pairing and every team here pairs full time. Our standard setup is to have two developers at each workstation, with two mice and keyboards. On Mavenlink, we recently upgraded our workstations and decided to keep the older model around to try out dual-computer pairing, which we’ve nicknamed "Extreme Pairing."

One commonly-expressed frustration with pair programming is that of wasted time while doing especially rote programming, research, documentation reading, or printout debugging tasks. People read documentation at different speeds, or want to try different Google searches in parallel, for example. Generally, the benefits of pair programming greatly outweigh the downsides, and we pair full-time to decrease the risk of failing to pair when it would retrospectively have been of great value due to knowledge sharing or bug prevention. However, in an effort to minimize these inefficiencies, the Mavenlink team has been exploring dual-computer pairing.

What follows is not Pivotal’s standard pairing setup. Everyone on this team has years of pair-programming experience and we have each developed our own intuition for which corners can be cut and which cannot. The following is recommended for advanced pairs only -- it can make experienced pairs more effective, but may be hazardous for newcomers to the pairing environment! We never “Extreme Pair” when conducting interviews and revert to a more traditional setup in that context.

Extreme Pairing

Each of our two pairing desks houses a beefy i7 27-inch iMac and a slightly older 24-inch iMac, each with its own mouse and keyboard, linked bidirectionally with Teleport. We sit centered around the larger iMac and use the second computer when we have separable tasks during pairing, such as research, running tests, or looking up documentation. We still do traditional pairing 90%+ of the time and focus on never falling into the traps of soloing: email checking, distractions, siloed knowledge, and untested code. Having the second computer has allowed us to split our focus when we can do so while both still maintaining understanding and ownership of everything that is going on. This is most effective and appropriate when:

  • tracking down bugs; we are both able to apply different tactics simultaneously while talking to each other. This allows us to divide and conquer, then rejoin when we’ve made progress in our understanding of the base problem.
  • looking at docs; we can both read at our own pace and synchronize afterwards.
  • the non-driving pair can support the driver by setting up the context for an experiment, running rake tasks, etc.
  • googling and researching
  • optimizing; one person can track down issues with Query Reviewer or Rack::Bug while the other identifies the underlying problems in the source.
  • browser testing and CSS tweaking in multiple browsers. In particular, we have found running Windows in Parallels on the non-primary machine a boon to speed and productivity on the primary machine.
  • capturing stories in Pivotal Tracker

More generally, any time a task requires focus on two different places at once, we split it up and assist each other. I think of this as synchronization in multi-threaded code - we split and rejoin on separable subtasks, while both continuing to perform traditional pair programming when viewed from the scope of a story. We never work on separate stories, and we don't split off for tasks that last longer than a minute or so. Smoothly moving from traditional pair programming to separately googling, debugging, or CSS tweaking keeps us agile and efficient while maintaining all of the very real benefits that we find in traditional pair programming.

Does this sound like fun? Mavenlink and Pivotal Labs are hiring!

Identify memory abusing partials with GC stats

Recently we had to investigate a page with very poor performance. The project used New Relic, but neither it, the application's logs, nor render traces gave clear results. The database wasn't the bottleneck, but there were well over 100 partials in the render trace. These partials seemed to perform well individually, except that sometimes the same partial would take hundreds or even thousands of milliseconds.

As it turns out, the cause of these random delays was garbage collection triggered by abusive memory consumption within many of the partials: millions of objects and heap growth in excess of 100 MB. We discovered this by using the memory and GC statistics features of Ruby Enterprise Edition. You can get an idea of how much memory is being used and garbage collected during rendering by wrapping parts of your page in a block passed to this helper:

# View helper for spot-checking memory use during rendering.
# GC.growth, GC.collections, GC.time, GC.num_allocations and
# ObjectSpace.allocated_objects are provided by Ruby Enterprise Edition.
def gc_profile
  raise "Dude, you left GC profiling on!" unless Rails.env.development?
  allocated_objects_before = ObjectSpace.allocated_objects

  yield if block_given?

  growth = GC.growth
  collections = GC.collections
  time = GC.time
  mallocs = GC.num_allocations
  allocated_objects = ObjectSpace.allocated_objects - allocated_objects_before

  concat content_tag(:span, "GC growth: #{growth}b, collections: #{collections}, time #{time / 1000000.0}sec, #{mallocs} mallocs, #{allocated_objects} objects created.")
end
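
A quick usage sketch (the partial name below is made up, and this assumes the helper lives somewhere like ApplicationHelper): wrap a suspect render in a view and read the numbers straight off the page.

<% gc_profile do %>
  <%= render :partial => "projects/expensive_widget" %>
<% end %>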

Because there were so many partials on the page, moving the helper around to identify the most egregious memory abusers got tiring, so I wrote a monkeypatch to the venerable Rack::Bug plugin that shows memory consumption and garbage collection statistics for each template in the Rack::Bug template trace, as well as on the memory panel.

Example template trace

Memory panel example

First, install Rack::Bug and make sure it's working properly, then add the code below in an initializer. Keep in mind this was written to work with Ruby Enterprise Edition 1.8.7 2010.02. Various 1.8 patches and Ruby 1.9 also provide GC statistics reporting tools, which you could probably adapt this patch to use instead.

if Rails.env.development?
  require 'rack/bug'

  # Record REE GC/ObjectSpace statistics around each template render.
  Rack::Bug::TemplatesPanel::Trace.class_eval do
    alias_method :old_start, :start
    def start(template_name)
      old_start(template_name)
      @initial_allocated_objects = ObjectSpace.allocated_objects
      @initial_allocated_size = GC.allocated_size
      @initial_num_allocations = GC.num_allocations
      @initial_gc_count = GC.collections
      @initial_gc_time = GC.time
    end

    alias_method :old_finished, :finished
    def finished(template_name)
      # @current is the rendering being finished; stash the deltas on it
      # before the original implementation finalizes it.
      @current.allocated_objects = ObjectSpace.allocated_objects - @initial_allocated_objects
      @current.allocated_size = GC.allocated_size - @initial_allocated_size
      @current.num_allocations = GC.num_allocations - @initial_num_allocations
      @current.gc_count = GC.collections - @initial_gc_count
      @current.gc_time = (GC.time - @initial_gc_time) / 1000.0
      old_finished(template_name)
    end
  end

  # Expose the recorded statistics on each rendering and include them in the
  # template trace output.
  Rack::Bug::TemplatesPanel::Rendering.class_eval do
    attr_accessor :allocated_objects
    attr_accessor :allocated_size
    attr_accessor :num_allocations
    attr_accessor :gc_count
    attr_accessor :gc_time

    def memory_summary
      %{<strong>%.2f</strong><small>ms in</small><strong>%d</strong><small>GCs,</small>
        <strong>%d</strong><small>new objects</small>
        <strong>%d</strong><small>bytes in</small><strong>%d</strong><small>mallocs</small>} % [gc_time, gc_count, allocated_objects, allocated_size, num_allocations]
    end

    def html
      %{<p>#{name} (#{time_summary}) [#{memory_summary}]</p>}
    end
  end

  # Add per-request GC statistics to the memory panel.
  Rack::Bug::MemoryPanel.class_eval do
    alias_method :old_before, :before
    def before(env)
      old_before(env)
      @initial_allocated_objects = ObjectSpace.allocated_objects
      @initial_allocated_size = GC.allocated_size
      @initial_num_allocations = GC.num_allocations
    end

    alias_method :old_after, :after
    def after(env, status, headers, body)
      old_after(env, status, headers, body)
      @gc_count = GC.collections
      @gc_time = GC.time / 1000.0
      @allocated_objects = ObjectSpace.allocated_objects - @initial_allocated_objects
      @allocated_size = GC.allocated_size - @initial_allocated_size
      @num_allocations = GC.num_allocations - @initial_num_allocations
    end

    def heading
      "#{@memory_increase} KB Δ, #{@total_memory} KB total, %.2fms in #{@gc_count} GCs" % @gc_time
    end

    def has_content?
      true
    end

    def name
      "memory"
    end

    def content
      %{<style>#memory_panel dd { font-size: large; }</style>
        <h3>Garbage Collection Stats</h3>
        <dl>
          <dt>Garbage collection runs</dt><dd>%d</dd>
          <dt>Time spent in GC</dt><dd>%.2fms</dd>
          <dt>Objects created</dt><dd>%d</dd>
          <dt>Bytes allocated</dt><dd>%d</dd>
          <dt>Allocation calls</dt><dd>%d</dd>
        </dl>} % [@gc_count, @gc_time, @allocated_objects, @allocated_size, @num_allocations]
    end
  end
end

Look sharp, the build just went red (on Campfire)!

In order to post to our Campfire chat when the CI goes red, we threw the following script in script/campfire_post:

#!/usr/bin/env ruby
require 'rubygems'
require 'net/http'
require 'uri'
require 'json'

url = URI.parse('') # your Campfire room's speak URL goes here
req =
req.basic_auth 'API_KEY', 'X' # leave the X
req.body = JSON.dump({ :message => { :body => ARGV.first }})
req.content_type = 'application/json'
res =, url.port).start { |http| http.request(req) }
case res
  when Net::HTTPSuccess, Net::HTTPRedirection
    # OK
  else
    res.error!
end

And then in cruise.rake:

desc "The task that cruisecontrol.rb runs"
task :cruise do
    rake "spec"
    # etc
    # etc
    exclaim = ["Oh goodness, t", "OMG t", "Look sharp, t", "Attention everyone! T", "Ohnoes! T"][rand * 5]
    system "ruby script/campfire_post '#{exclaim}he build just went red!'"

And it's even cheeky!

Standup 11/13/2008: ActiveScaffold and Object Mothers

Interesting Things

  • ActiveScaffold was surprisingly easy to use for admin interfaces. One possible gotcha is that you have to manually configure namespaced RESTful routes (e.g. map.resources :tasks, :active_scaffold => true).
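
A minimal routes sketch for that gotcha, assuming a Rails 2.x app with an admin namespace (the :admin namespace and :tasks resource are just illustrative names):

# config/routes.rb
ActionController::Routing::Routes.draw do |map|
  map.namespace :admin do |admin|
    # :active_scaffold => true adds the extra routes ActiveScaffold controllers expect
    admin.resources :tasks, :active_scaffold => true
  end
end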

Ask for Help

"What are your current favorite Object Mother implementations or patterns?"

Several approaches were discussed.

"Does anyone know how to disable caching in Safari? Caching can often break JSUnit between test runs"

Go to Develop -> Disable Caches and you get a fresh non-caching Safari.

Standup 11/10/2008: Memory profiling tools for Ruby

Interesting Things

Ask for Help

"We're having an issue with a long running Ruby process consuming too much memory and failing. What tools are available for finding and patching Ruby memory leaks?"

Several tools were suggested:

  • Valgrind - Considered very powerful but difficult to use. "It will do what you want if you can figure out how to properly ask it to do so."
  • BleakHouse - Ruby specific leak detection. It's available as a gem called "bleak_house"
  • Leaks - This is native to OSX. The Google Web Toolkit uses this for leak detection.
  • DTrace - Also a native OSX tool. There was a RailsConf presentation on Everyday DTrace on OSX.

"Rake will often silently 'fail' when running RSpec. It will not blow up but rather silently quits in the middle of the suite. This seems to happen intermittently, usually on the first run of our test suite. If we run the suite again, it works."

It was suggested that this might be a Rake version issue, since there have been other test suite problems with Rake 0.8.3. However, this particular problem was not identical to previous Rake versioning issues and may be something else altogether.

Keep reading for a more detailed description of this weird RSpec + Rake issue.