
pg_search: How I Learned to Stop Worrying and Love PostgreSQL full-text search

I’m a Pivotal Labs developer at our NYC offices working on the Casebook development team. Casebook is a child-welfare-focused web application used by governments and non-profit organizations. Our users are social workers, caseworkers, and their leadership who work with children, families, and the broader community to provide services that ensure children are safe and healthy.

Search worries

Our users need to quickly find accurate information about the people on their workload so that they can respond appropriately in crises and keep a high-quality written record of their work with the children and families.

Solr powered Casebook’s initial search engine. Solr is built in Java, so we set up our application servers to run Java alongside our Ruby on Rails web application. We maintained a real-time copy of our important searchable data, such as people’s names, in our Solr index.

Our Solr-based approach ran into a few problems. Sometimes users would see outdated search results or, even worse, errors. This was annoying and also potentially damaging to our users’ ability to keep up with emergency situations.

Keeping our data synched in multiple locations caused most of our problems with Solr. Some of our more complex code paths would update the database but not propagate those changes to the search index. Users saw search-related error messages when there were communication problems with our Solr instances.

We had some fail-safes in place.

We wrote code that automatically restarted the Solr instances when they crashed. When we found the search data diverged from our application data, we manually rebuilt the search index to get the two data stores back in sync. These solutions just managed our problems rather than solving them.

These problems aren't unique to Solr. Other tools like Lucene, Ferret, and Sphinx have the same shortcomings when combined with Ruby on Rails.

Using the database itself as the search index

So the thought occurred to our team: why not use the database itself as the search index? We use a PostgreSQL database, and PostgreSQL 8.3 and later have built-in support for full-text search. PostgreSQL is a popular, mature SQL database that works great with Active Record. If you use Heroku, you are already running on a PostgreSQL 8.3 database that supports full-text search.
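Under the hood, the built-in search works through PostgreSQL's to_tsvector and to_tsquery functions and the @@ match operator. Against a hypothetical books table, a basic query might look like:

```sql
-- Match rows whose title contains the stemmed word 'ruby':
SELECT title
FROM books
WHERE to_tsvector('english', title) @@ to_tsquery('english', 'ruby');
```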

Since full-text search in PostgreSQL uses fairly complex SQL queries, we decided that the best approach would be to take advantage of Active Record's scopes. The idea is to make it easy to write code that looks like this:

Book.search_title("Ruby").includes(:author).where("created_at > ?", 1.year.ago).limit(10)

So, I am proud to introduce pg_search, a Ruby gem that makes it easy to build search scopes that work just like this.
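Defining such a scope with pg_search looks roughly like this (a sketch based on the gem's pg_search_scope macro; the model and column names are illustrative):

```ruby
class Book < ActiveRecord::Base
  include PgSearch

  # Builds a search_title scope that runs PostgreSQL full-text search
  # against the title column:
  pg_search_scope :search_title, :against => :title
end
```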

Cucumber and Sunspot…

In continuing with my Cucumber themed posts, here is a great post about using Cucumber and Sunspot together...

Standup 04/07/2010: Passenger, Solr, Git, and RSpec timeouts

Ask for Help

Passenger Memory Bloat

"We found one of our passenger workers is using around 900MB of memory. Has anyone has problem with Passenger memory usage? We are using REE 1.8.7-2009.10."

Solr Master-Slave Replication

"We are interested in adding automatic failover to our Solr slave when the master fails. What are some strategies for doing this?"

Interesting Things

Git Push --force Blocked
If you find your git push being rejected even when you use git push -f, it's probably because your Git server is configured to disallow non-fast-forward pushes. You'll need to change the server configuration to allow them.
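If you administer the server yourself, the setting in question is receive.denyNonFastForwards (demonstrated here on a throwaway bare repository; adjust for your hosting setup):

```shell
# Create a throwaway bare repository and allow non-fast-forward pushes:
repo="$(mktemp -d)/demo.git"
git init --bare --quiet "$repo"
git -C "$repo" config receive.denyNonFastForwards false
git -C "$repo" config receive.denyNonFastForwards   # prints: false
```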

spec --timeout
Be careful when running RSpec with the --timeout option. When the timeout occurs, the test process is interrupted and prints a stack trace for wherever it happened to be executing at that moment. This can cause a lot of confusion if you don't immediately realize it was the result of timing out and instead think that an exception actually occurred at that point.

Standup 04/05/2010: Cinco de Mayo

Ask for Help

Paperclip Slowness

"In one web request we are collecting the file paths of about 250 objects that have attachments via Paperclip. Unfortunately this is really slow and takes a couple seconds to finish. Does anyone have thoughts on how we could speed this up? Is de-normalizing the file path a reasonable solution?"

Moderation of Solr Search Results

"One of our projects uses Solr and acts_as_solr to provide search results to users. One particular result is showing up far higher than we want. What is the best way to use boosting to downgrade the score of an individual result in Solr?"

Interesting Things

Bike to Work Day
May 13th is Bike to Work Day in San Francisco. We are hoping more people take advantage of this to try biking to work for the first time. To mix things up for those who normally bike to work, we are planning a Bike to Lunch.

NYC Standup Roundup – Week of 4/19


  • A Pivot noted a facepalm + headdesk moment when debugging an issue whose cause turned out to be related to two adjacent string literals being auto-concatenated by Ruby's parser.

    >> "foo" "bar"
    => "foobar"

In this case, a missing comma in a method call went undetected because of this language characteristic. Whether or not this follows the principle of least surprise is an exercise left up to the reader.

  • Another pair warned that while this is valid syntax in Ruby 1.8.7 and beyond:

    define_method(:burninate) { |&block|"burninating") }

...but in 1.8.6 you can't use a block as a parameter of a block.

  • Another pair noted that exceptions with Sunspot can cause wider failures on a site than just those that touch Solr. The symptom on this project was that if Solr was inaccessible for any reason every page on the site would throw an error. Their fix was to use Sunspot's SessionProxy to wrap methods with some exception handling love.
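A stripped-down sketch of the idea (the class and rescued errors here are illustrative, not Sunspot's exact API):

```ruby
require 'timeout'

# Illustrative proxy: delegate every call to the real Sunspot session,
# but swallow connection errors so a Solr outage doesn't take down
# every page that renders search results.
class SilentFailSessionProxy
  def initialize(session)
    @session = session

  def method_missing(method, *args, &block)
    @session.send(method, *args, &block)
  rescue Errno::ECONNREFUSED, Timeout::Error
    nil # while Solr is down, searches quietly return nothing

  def respond_to_missing?(method, include_private = false)
    @session.respond_to?(method, include_private) || super

Sunspot's real mechanism lives in Sunspot::SessionProxy; consult its documentation for the proxy classes it actually ships.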

  • Lastly, GoRuCo -- the Gotham Ruby Conference -- will be held on May 22nd at Pace University's downtown campus. The roster of talks is up and registration is open for business.

Boosting with Acts As Solr

Probably my favorite feature of the Solr full-text search engine and the acts_as_solr plugin is boosting. Boosting gives you the ability to influence how the returned results are ordered. When boosting is applied properly, the quality of the search results appears to improve dramatically even though the same results are being returned, just in a different order. There are two kinds of boosting you need to be aware of: field (column) boosting and document boosting.

Field Boosting

Field (or column) boosting lets you specify that a match on a particular field should carry more weight than usual. In the app I am working on, I added a field boost to the name attribute because I want results that have the query string in their name to appear before results that have it somewhere in their description or tags. Here is an example of a field boost using acts_as_solr.

acts_as_solr :fields => [{:name => {:boost => 3.0}}, :description, :tags]

Document Boosting

A document boost should be utilized when there is a way of quantifying one result as being better than another result, regardless of the query. For example, there are two entries in my database that both have a tag of "twitter client": Twitterific and Twitterfon. In the iPhone App Store, Twitterfon has a higher popularity rating than Twitterific, so I want Twitterfon to appear above Twitterific if someone searches for "twitter client" within the app. To specify document boosting based on the App Store popularity field, I can pass a Proc object to acts_as_solr (rdoc) and return the member field that holds the popularity rating. A great thing about the Proc object is that I can execute any Ruby code inside of it that I want. This is useful if the popularity score is not directly stored in the database and must be calculated on the fly.

acts_as_solr :fields => [:name, :description, :tags],
             :boost => { |item| item.popularity_score.to_f }


If you are using Solr at all it is important to be aware of what boosting can accomplish. When using multiple boosts, finding the right boost values to produce the best search results is a bit of black magic. I have found that after achieving "pretty good" results the law of diminishing returns comes into play and slows down progress. With a single boost it is much easier because there is only one variable in play.

Standup for 2/3/2009: Solr & Rails fails with IPv6

Interesting things

  • When we used localhost in our solr.yml configuration, we couldn't run tests on our OS X 10.5.6 machines. Commenting out the IPv6 localhost entries in /etc/hosts fixed the problem. The better solution would probably be to use an explicit IPv4 loopback address ( instead of localhost in the Solr configuration.

Solr demystified

As I mentioned in this post, we've decided to set aside some of our weekly brown bags to spread around some knowledge on different technologies via a relatively informal presentation/discussion format. This past week we talked a bit about Solr.

This post covers much of what we discussed, ranging from the introductory to the somewhat arcane. If you're a seasoned Solr user, this may not have much for you. But, you never know.