pg_search: How I Learned to Stop Worrying and Love PostgreSQL full-text search

I’m a Pivotal Labs developer at our NYC offices working on the Casebook development team. Casebook is a child-welfare-focused web application used by governments and non-profit organizations. Our users are social workers, caseworkers, and their leadership who work with children, families, and the broader community to provide services that ensure children are safe and healthy.

Search worries

Our users need to quickly find accurate information about the people on their workload to respond appropriately in crises and keep a high quality written record of their work with the children and families.

Solr powered Casebook’s initial search engine. Solr is built in Java, so we set up our application servers to run Java alongside our Ruby on Rails web application. We maintained a real-time copy of our important searchable data, such as people’s names, in our Solr index.

Our Solr-based approach ran into a few problems. Sometimes users would see outdated search results or, even worse, errors. This was annoying and also potentially damaging to our users’ ability to keep up with emergency situations.

Keeping our data synched in multiple locations caused most of our problems with Solr. Some of our more complex code paths would update the database but not propagate those changes to the search index. Users saw search-related error messages when there were communication problems with our Solr instances.

We had some fail-safes in place.

We wrote code that automatically restarted the Solr instances when they crashed. When we found the search data diverged from our application data, we manually rebuilt the search index to get the two data stores back in sync. These solutions just managed our problems rather than solving them.

These problems aren’t unique to Solr. Other tools like Lucene, Ferret, and Sphinx have the same shortcomings when combined with Ruby on Rails.

Using the database itself as the search index

So the thought occurred to our team that we ought to try to make the database itself be the search index. We use a PostgreSQL database, and PostgreSQL 8.3 and later have built-in support for full-text search. PostgreSQL is a popular, mature SQL database solution that works great with Active Record. If you use Heroku, then you are already using a PostgreSQL 8.3 database that supports full-text search.

Since full-text search in PostgreSQL uses fairly complex SQL queries, we decided that the best approach would be to take advantage of Active Record’s scopes. The idea is to make it easy to write code that looks like this:

Book.search_title("Ruby").includes(:author).where("created_at > ?", 1.year.ago).limit(10)

So, I am proud to introduce pg_search, a Ruby gem that makes it easy to build search scopes that work just like this.
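To give a feel for the "fairly complex SQL" involved, here is a rough sketch of the kind of WHERE condition that a full-text search scope boils down to. This is an illustration, not pg_search's actual internals, and the `tsearch_condition` helper is hypothetical; the real gem quotes and binds the query safely rather than interpolating it.

```ruby
# Hypothetical sketch of the tsvector/tsquery condition behind a
# full-text search scope. NOTE: naive string interpolation is used here
# for clarity only; real code must quote/bind the query to avoid SQL
# injection.
def tsearch_condition(column, query)
  "to_tsvector('english', #{column}) @@ plainto_tsquery('english', '#{query}')"
end

puts tsearch_condition("books.title", "ruby")
# => to_tsvector('english', books.title) @@ plainto_tsquery('english', 'ruby')
```

The `@@` operator asks PostgreSQL whether the document vector on the left matches the query on the right, with stemming and stop-word handling applied on both sides.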

Installing pg_search is easy. If you’re using Bundler, just add

gem "pg_search"

to your Gemfile and you’re good to go.

To use pg_search to build the Book.search_title scope above, you would write:

class Book < ActiveRecord::Base
  include PgSearch
  pg_search_scope :search_title, :against => [:title]
end

It’s as simple as that!

Adding more features

We took cues from the texticle gem to figure out how to generate our SQL code. Thanks to Aaron Patterson for this wonderful gem! However, our Solr solution had several features that texticle and the basic PostgreSQL full-text search alone don’t currently provide, like ignoring diacritical marks (accents like ü), searching for soundalikes, and searching for words that are misspelled.
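To make the diacritics point concrete, here is a toy pure-Ruby version of the accent-folding that PostgreSQL's unaccent module performs on the database side. The substitution table here is a tiny, hypothetical subset of the real module's much larger one.

```ruby
# Toy illustration of accent-folding, mirroring what the unaccent
# contrib module does server-side. ACCENT_MAP is a small hypothetical
# subset of the real substitution table.
ACCENT_MAP = {
  "ü" => "u", "é" => "e", "è" => "e", "á" => "a", "ö" => "o", "ñ" => "n"
}.freeze

def unaccent(text)
  # String#gsub accepts a hash of replacements when given a pattern
  text.gsub(Regexp.union(ACCENT_MAP.keys), ACCENT_MAP)
end

unaccent("Müller")  # => "Muller"
```

With folding applied to both the indexed text and the query, a search for "Muller" finds "Müller" and vice versa.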

We spent a day or two trying to hack texticle into something we could use, but realized that if we started from scratch we could more easily build a gem that could combine more than one PostgreSQL feature into a single search scope. That way, we could improve our Book.search_title scope by using unaccent to ignore accent marks, Double Metaphone to match soundalikes, and trigrams to match misspellings.
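Trigram matching is the easiest of those features to demonstrate. The pg_trgm module scores two strings by comparing their sets of three-character substrings; a toy Ruby version of the idea (simplified, but like pg_trgm it lowercases each word and pads it with spaces before extracting trigrams) looks like this:

```ruby
require "set"

# Toy trigram similarity in the spirit of PostgreSQL's pg_trgm module:
# lowercase, pad with spaces, extract 3-character windows, then score
# by set overlap.
def trigrams(word)
  padded = "  #{word.downcase} "
  (0..padded.length - 3).map { |i| padded[i, 3] }.to_set
end

def similarity(a, b)
  ta, tb = trigrams(a), trigrams(b)
  (ta & tb).size.to_f / (ta | tb).size
end

similarity("ruby", "rubby") # high score despite the misspelling
similarity("ruby", "java")  # near zero
```

Because most trigrams survive a single typo, a misspelled query still scores well against the intended word, which is exactly how the :trigrams option catches misspellings.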

So with all of these features turned on, we get the following code:

class Book < ActiveRecord::Base
  include PgSearch
  pg_search_scope :search_title,
    :against => [:title],
    :using => [:tsearch, :dmetaphone, :trigrams],
    :ignoring => :accents
end

Except for :tsearch, the built-in default full-text search implementation, these features require you to install certain contrib packages into your database. For now, this is an exercise for the reader, but we hope to help automate this process soon.
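As a sketch of what that exercise involves: on PostgreSQL 9.1 or later the contrib modules are enabled with one-line CREATE EXTENSION statements (on 8.3/8.4 you load the contrib SQL files instead). The list below is illustrative; run the statements against your database, for example via `execute` calls in a migration.

```ruby
# Hypothetical sketch of the statements that enable the contrib modules
# behind pg_search's extra features (PostgreSQL 9.1+ syntax).
CONTRIB_SETUP_SQL = [
  "CREATE EXTENSION IF NOT EXISTS unaccent",      # for :ignoring => :accents
  "CREATE EXTENSION IF NOT EXISTS fuzzystrmatch", # provides dmetaphone()
  "CREATE EXTENSION IF NOT EXISTS pg_trgm"        # for :using => :trigrams
].freeze
```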

Our gem development approach

We started by taking our application with the existing Solr-based search intact and boosting our test coverage to cover all of the different cases (misspellings, soundalikes, etc.) for some of our most complicated searchable models. Once we were satisfied with our test coverage, we completely removed the Solr search code and were left with dozens of failing tests.

We then created a blank gem and started adding features to it one-by-one to get each of our application’s tests to pass. First we made sure that simple situations were solid, such as when the search query string exactly matches the searchable text. Then we moved on to the complicated parts.

Our existing application uses Ruby 1.8 and Rails 2.3, while a second, newer project uses Ruby 1.9 and Rails 3. So we made sure that all of our code worked in both environments. I will write another blog post soon about how we used two instances of autotest to make this easy to do.

The great thing about this approach is that we were able to start by defining a set of behaviors based on what our real-world application needed. This kept our code lean. Also, we were able to define our own syntax for the pg_search_scope method: we would just add a new option to one of our calls to pg_search_scope and code until it worked as desired. By mimicking the Active Record scope syntax, hopefully we have created something that is easy to pick up.

User impact

Our users have noticed the difference since we deployed our updated search implementation. We had been rebuilding the search index or troubleshooting a search-related bug a few times a week; we haven’t seen a search-related help request from our users since we made the changes. In addition, our developers are happier because code deployments are more reliable and easier to understand.

Overall, the project has been a resounding success!

Getting involved

pg_search isn’t complete yet (will it ever be?). There are many more features we’d like to have to improve performance, search quality, and overall user experience.

For example, right now our developers have to hand-build SQL indexes to improve query speed. pg_search should automatically generate those indexes for us based on which PostgreSQL features are in use.
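For the curious, a hand-built index for the tsearch case looks something like the sketch below. The `tsearch_index_sql` helper is hypothetical; the key point is that the indexed expression must match the expression in the query exactly for PostgreSQL to use the index.

```ruby
# Hypothetical sketch of the raw SQL index a developer currently writes
# by hand to speed up a tsearch-based scope. A GIN expression index on
# to_tsvector lets PostgreSQL answer @@ matches without a full scan.
def tsearch_index_sql(table, column)
  "CREATE INDEX index_#{table}_on_#{column}_tsearch " \
    "ON #{table} USING gin(to_tsvector('english', #{column}))"
end

puts tsearch_index_sql("books", "title")
```

Generating statements like this automatically, based on the options passed to pg_search_scope, is the sort of feature we have in mind.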

That’s just one example. We’d love to hear more ideas from you about how pg_search can improve to meet users’ needs.

To learn more, read our documentation. We also have a public Pivotal Tracker project for requesting features and bugfixes, and a Google Group for discussing pg_search and other Case Commons open source projects.

Also, the Casebook team is currently hiring for an Agile Developer.

  • Very cool. I love using PostgreSQL full-text search.

    What do you do about adding indexes? Raw SQL in migrations? What about schema.rb?

  • Grant Hutchins

    We add our indexes in migrations.

    Since Active Record doesn’t know enough about the special kinds of indexes needed to speed up full-text search in PostgreSQL, we had to switch to the raw SQL format for our schema instead of using schema.rb. We did this by adding

    config.active_record.schema_format = :sql

    to config/development.rb.

  • Cool thanks. Last time I tried using structure loading I had to patch it to get it to work. Seems like it works well now.

  • Great post! And great gem!

    I’ve recently added texticle to our product (to drop Sphinx, as PG is able to do it perfectly), but we had to extend it to be able to install the contrib modules using Rails’ AR::connection (rather than by executing the system’s psql CLI), which is mostly why we forked the repo.

    So the fork just works right now with tsearch2 and unaccent and it’s totally possible to dump the schema in AR migrator’s DSL.

  • Awesome stuff. Looking forward to using this in my projects. I have recently taken an interest in PostgreSQL as an alternative SQL database to MySQL. I really enjoy working with MongoDB as well, and falling back to PostgreSQL when a SQL database is more applicable just got more appealing thanks to the built-in full-text search and your gem simplifying it at the Ruby level.

    I actually just watched the Peepcode screencast that was recently released about PostgreSQL and full-text search, and your gem appears to make that even easier to query. I was actually watching it as an introduction to PostgreSQL; it seemed easier than I previously thought.

    I always like the idea of minimizing external dependencies like Sphinx/Solr for full-text search on MySQL or PostgreSQL. Being able to do it all from the database itself seems like a huge win and just simplifies things.

    By the way, you mentioned something about running two instances of ‘autotest’. What I use to test my gems against multiple rubies is a gem called ‘Infinity Test’, which works well but requires RVM.

    Thanks for the write-up and the gem!

  • Mleal

    Hey, this gem is great and works perfectly while running locally on my machine, but once I deploy to the Engine Yard server I cannot get it to work.

    All the queries that I use the search scopes with, will be like SELECT “branches”.* FROM “store_branches” WHERE (ST_Buffer(ST_GeogFromText(‘SRID=4326;POINT(11.00726807 -74.81023848)’), 200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200200) && location) LIMIT 50

    Getting a PGError: ERROR: “2002002002002002002002002002002002002….” is out of range for type double precision…

    I usually nest scopes like:“string”) which works locally but breaks on the site…

    Any clues?
