Using database views for performance wins in Rails

Sometimes you come across a problem where your go-to approaches start to fail you. I came across one of these recently when a Rails app required a report that consumed data from almost every model in the application. The page took a long time to render, so we began to discuss how to deal with the performance problem. There was also a secondary performance and aesthetic problem: the view template for the page contained nested loops.

There are several approaches that can be taken when using ActiveRecord, including eager loading, database table joins using the ActiveRecord DSL, a custom query, caching, or using a database view. Our application used each of these approaches depending on what was best for the problem at hand, and here I’ll discuss one of the lesser-used approaches: database views.

Database views

A database view is “the result set of a stored query which users can query just as they would a persistent database collection.”

One of the side effects of the above with respect to ActiveRecord is that when an application model extends ActiveRecord::Base, it doesn’t care whether the collection it’s modelling is a table or a view. This means an application can push the query logic into the database and keep the application source code lighter.

class Foo < ActiveRecord::Base
    # backed by the table 'foos'
end

class Bar < ActiveRecord::Base
    # backed by the view 'bars'
end

In doing this, there is now an implicit dependency on how the database constructs that query logic, but fortunately it’s easy to test. Just as you may write tests for an application model using the database, the same can be done for views. I tend to keep unit tests from hitting the database for performance and isolation reasons, but here we are testing the integration between the application, the ORM, and the database, so we definitely want to hit the database.

Let’s take a simple example of a Twitter-like message service. On the signed-in user’s homepage they see a list of messages from people they follow, showing the message content along with the username and avatar of each sender. This may be spread across two or more tables depending on the design; for now let’s assume messages live in the messages table and user details in the users table.

class User < ActiveRecord::Base
    has_many :messages
end

class Follow < ActiveRecord::Base
    belongs_to :follower, class_name: 'User'
    belongs_to :target, class_name: 'User'
end

class Message < ActiveRecord::Base
    belongs_to :user
end

When we load the messages to display to the logged-in user, we would load the set of users whom the logged-in user follows, then the collection of messages, and then for each message the associated user model to display the username and avatar. We might create a new model called UserMessage to house the data for each message and its user details. In the test we can create the data we need: a user and a message that user has sent. We can then assert that the UserMessage model retrieves and exposes the details needed for that page.
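As a rough sketch of the approach being replaced (the finder and column names here are assumptions, not code from the app), the per-request loading might look like:

```ruby
# Hypothetical sketch of the ActiveRecord approach: even with eager
# loading, each message carries a fully loaded User object in memory
# just to display two of its columns.
followed_ids = Follow.where(follower_id: current_user.id).pluck(:target_id)
messages = Message.where(user_id: followed_ids).includes(:user)

messages.each do |message|
  message.user.username
  message.user.avatar
end
```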

it 'uses the view' do
    user = User.create(username: 'robb1e')
    message = Message.create(user: user, content: 'Hello, world')

    user_message = UserMessage.first
    expect(user_message).to be
    expect(user_message.content).to eq('Hello, world')
    expect(user_message.username).to eq('robb1e')
    expect(user_message.user_id).to eq(user.id)
    expect(user_message.message_id).to eq(message.id)
end

In true red, green, refactor fashion, the above fails because the model UserMessage does not exist.

class UserMessage < ActiveRecord::Base
  self.primary_key = 'message_id'
end

We also need a migration which creates the view:

CREATE VIEW user_messages AS
  SELECT u.username, AS user_id, m.content, AS message_id
  FROM messages m
  INNER JOIN users u ON = m.user_id
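In a Rails migration this can be sketched with execute (the migration class name is an assumption):

```ruby
class CreateUserMessagesView < ActiveRecord::Migration
  def up
    execute <<-SQL
      CREATE VIEW user_messages AS
        SELECT u.username, AS user_id, m.content, AS message_id
        FROM messages m
        INNER JOIN users u ON = m.user_id
    SQL
  end

  def down
    execute 'DROP VIEW user_messages'
  end
end
```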

There is an argument to be made about now having logic in more than one place, the application source code and the database, but for me this is about using the best tool for the job and putting the logic in the optimal position. Databases can be extremely efficient at performing unions, intersections, and calculations, rather than mapping over in-memory objects to do the same work. We can also make the most of some nice Rails features, such as how the application can render models. In this case, the application could render the collection of user_messages implicitly using the file app/views/user_messages/_user_message.html.erb.
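A sketch of that implicit rendering, assuming a conventional controller and an index template containing `<%= render @user_messages %>` (the controller and partial contents here are assumptions, not from the original app):

```ruby
# app/controllers/user_messages_controller.rb
class UserMessagesController < ApplicationController
  def index
    # one lightweight object per row of the view
    @user_messages = UserMessage.all
  end
end

# app/views/user_messages/_user_message.html.erb renders each row, e.g.:
#   <p><%= user_message.username %>: <%= user_message.content %></p>
```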

We’ve explored how database views can move data-centric logic from the application to the database, and the performance benefits that change can bring. We’ve also seen how to test that logic from within the application, and how some Rails mechanisms can be used to clean up the code further.

  • Serguei Filimonov

    Hi Robbie,

    Thank you for blogging about this. It’s refreshing to read anything that explores the treatment of data, since many common approaches are disappointing both in terms of clarity and performance.

    Could you detail how the view in this case actually causes a performance win though? From a few brief searches, some say a Postgres view is just a stored query with a name when it comes to performance. Or does a view construct different indices that are updated upon writes?

  • Robbie Clutton

    Hi Serguei,

    I touched upon it in the post, but perhaps didn’t point it out enough. The win is where the calculation, or loading is done. If it’s done using Ruby, the object graph will be large through the associations of objects, and any calculations would be executed through loops in the code. Mapping the result set into one object per row is more memory efficient, resulting in a smaller object graph, and less garbage collection.

    I’ll try and put together a more concrete example and share.



  • Brendan Benson

    Another solution to consider is materialized views – especially if you have a complex query that involves many joins and aggregate functions, and the business requires a quick page load.

    Materialized views are similar to views in that they have an underlying query, but instead of executing the underlying query on an ad-hoc basis, they proactively re-calculate and cache the underlying query results. Therefore, you sacrifice a bit of consistency for speed, since the data may be stale. But, depending on your use-case, that may be perfectly acceptable.

    You’ll get more of a performance boost from materialized views vs. regular views because you’re fetching cached, denormalized data.
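    In Postgres the pattern can be sketched like this (table and column names follow the post’s example and are placeholders):

    ```sql
    CREATE MATERIALIZED VIEW user_messages AS
      SELECT u.username, AS user_id, m.content, AS message_id
      FROM messages m
      INNER JOIN users u ON = m.user_id;

    -- the cached results must be refreshed explicitly when data changes
    REFRESH MATERIALIZED VIEW user_messages;
    ```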

    Postgres supports materialized views as of version 9.3.

    Unfortunately, MySQL does not support materialized views, but you can fake them using triggers in the database and/or transactional application logic.

    And finally, as your data grows into “big data,” you may want to consider non-relational databases to complement a relational datastore.

  • Serguei Filimonov

    Hi Robbie,

    Ah so is it kind of like this?:

    I have 50 messages made by a total of 40 users. If I do regular arel thing even with a join, I get 90 objects loaded into memory wrapping each message and each user. But with a view “user_messages”, only 50 rows with user information included will be loaded into memory.

    Is that interpretation kind of on the right track?

  • Robbie Clutton

    Absolutely. This is a small example, but you can imagine the object graph if more relations are in place.

  • One issue with using SQL Views is ensuring that those views are available in test databases, particularly if you are using a CI that builds the database off of the schema file. You will likely have to modify how you store your schema, changing it from ruby to sql to capture the view.

    • Yeah, we had to use :sql instead of :schema, and it does depend on the adapter. I’ve had this working in Oracle and Postgres databases successfully.
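      That setting is one line of configuration, so Rails dumps db/structure.sql instead of db/schema.rb and the CREATE VIEW statements survive into test and CI databases:

      ```ruby
      # config/application.rb
      config.active_record.schema_format = :sql
      ```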

  • Ryan Crowley

    One other option is to use a real table, and have a cron/scheduled task run that refreshes the data in the background at an interval that suits your requirements… it’s even faster than a view (it doesn’t have to create the temporary dataset in the db, do any calculations, joins, etc.)

    It can also be indexed well and be easily available within an ActiveQuery.

    • Worth looking at Brendan Benson’s reply regarding materialised views.

  • Thanks a bunch for your break down here. I am seriously considering this approach, but I don’t understand exactly how it all comes together. All the individual pieces make sense to me, but what exactly is the order of operations?

    You have your ‘UserMessage’ model and your ‘user_messages’ view, but how do they interact with your already existing User and Message model? For example, in your test you create a user and create a message, but how do they end up in the user_messages view? You will probably say with the migration, but then how is the user_messages view updated? The migration looks great for CREATING the view, but as new users and new messages are created (or updated), how are the corresponding user_messages created or updated?

    Am I missing something here?


    • Robbie Clutton

      @Eli, the view is effectively a query that will be run every time. So if there are changes to say the User or Message, then that change will be reflected in the UserMessage view.

