Speeding Up Your Ruby on Rails App



There are a wide variety of tools that can help you spot and monitor performance issues, ranging from free to expensive, and from dead simple to fairly involved. I would be remiss, however, if I didn’t direct your attention to the easiest and freest of all: the audit tool built straight into Google Chrome.

Yes, Google is awful, evil, and probably going to bring about the end of the world, but in the meantime they offer really handy tools for highlighting areas for improvement. Running the audit tool will give you feedback on things like how long it takes for your page to display meaningful information and become interactive. If it finds areas it thinks should be improved, it will display helpful messages with suggestions on what to do.

So drop into your Chrome dev tools (command + option + i on a Mac), hit the Audits tab, and run it on your website’s problem pages. You can even empty your storage and emulate a 3G connection with a slower CPU to simulate a mobile device hitting your page for the first time.

This article doesn’t get into scaling, but if you’re interested, you can read a post on configuring your Rails server for maximum efficiency, and another on scaling Rails to 1000 requests/minute, which I found very helpful.

First Things First: The Database

As our apps grow larger, they inevitably tend to hold more and more data, and require increasingly complex queries. And because CPUs are fast, disks are slow, and our apps don’t usually require much computation, sites are frequently I/O bound. This is almost guaranteed to be the case if you’ve neglected to optimize your database and queries.

Indexing: Why and When

One of the lowest-hanging pieces of fruit in a Rails app (or any database-connected framework) is indexing. An often-used analogy for database indexing is the index in the back of a book; rather than having to read the whole book, you can scan an alphabetized index for the location of what you’re looking for, and then skip to that page. A database index works the same way: it tells the database where to find the rows we’re asking for, without scanning the whole table.

Generally, any column you frequently use to find (WHERE) or sort (ORDER) results is probably a candidate for an index. Foreign keys should typically also be indexed to improve performance. You can check your logs (or the Rails server output in the console) to check which queries are slow.

This might not matter if you’ve only got a few hundred rows of data — if you’re just keeping a record of all the Coldplay fans in the world, for example. But as you start to get tens or hundreds of thousands of rows per table, this can add up quickly. In steps indexing.

ActiveRecord already does some indexing for us automatically — every entry has its own id, serving as its primary key. Primary keys are indexed by default; you may have noticed that finding something by id is faster than, say, by name.

But you probably don’t always look for things by id. A user likely wants to search for things by their name, or some of their properties. If you find that this is the case, it may be time to index the columns of the table you frequently use to locate an item.

Let’s add an index to name on our model. (Let’s say it’s a Person model). After generating the migration in console (rails g migration add_index_to_name_column_on_person), we add the index:

class AddIndexToNameColumnOnPerson < ActiveRecord::Migration[5.2]
  def change
    add_index :people, :name
  end
end

Let’s see how this plays out in a database with about 30,000 records.

Before: Person.find_by(name: "Jon Snow") => 25.3ms

After: Person.find_by(name: "The Hound") => 0.6ms

Not too bad. We didn’t know where in Westeros Jon Snow was, and so our search took over 25ms. However, because we knew The Hound was at the local tavern ordering chicken, we found him immediately. (Then backed away slowly.)

This trend happily continues even if we’ve got 100 Jon Snows in all of Westeros; it doesn’t take much longer to find 100 indexed Jon Snows than one. Generally speaking, this is when indexing is most useful: when most values are unique. It usually makes more sense to index relatively unique things like names than, say, hair color or gender.

If you’ve got 100,000 rows and you’re looking for Jon Snow by sex, that still leaves about 50,000 entries to wade through, so the index doesn’t save much time. And because you’ve put an index on 100,000 items, you’ve also taken up space (indexes aren’t free, so if you don’t need them, don’t use them).

What if, though, we’ve got a category which has a bunch of options, only one of which we actually care about? Maybe we’ve got a ‘classification’ column in the Person table, and 99.5% of people are commoners, merchants and soldiers, while the group we care about — heroes — comprises only ~0.5%? Wouldn’t indexing the whole column just to search for heroes be kind of wasteful? (Yes.)

Partial Indexing

Partial indexing allows us to index a subset of a column. So if we find we’ve got an attribute (like ‘classification’) which we frequently use to locate a small group of records, we can index only the rows in that column that match the criteria we’re looking for — in this case, heroes.

Note: I’ve shortened “classification” to “cls” below for the sake of space and formatting; you would of course have to use the full column name ‘classification’ for this to work properly.

class AddPartialIndexToPersonOnCls < ActiveRecord::Migration[5.2]
  def change
    add_index :people, :cls, where: "cls = 'hero'"
  end
end

This speeds things up, saves disk space, and keeps the index small in memory. If partial indexing fits your needs, you should absolutely use it instead of a full index.
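You can check that the partial index is actually being used with ActiveRecord’s explain. The exact output depends on your database; on Postgres, for instance, you’d hope to see an index scan rather than a sequential scan when the query’s WHERE clause matches the index’s predicate:

```ruby
# Ask the database how it plans to run the query (output varies by adapter)
puts Person.where(classification: "hero").explain
```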


Use Pluck and Select

A friend of mine was recently building an app, and complained that a particular page took around a minute to load the first time. His controller action looked something like this:

@items = Item.all.map { | item | [ item.name, item.category ] }

All he wanted was each item’s name and category in its own array, but he was retrieving the full Item table (a model with dozens of columns), instantiating a full-blown Ruby object for every row, and then mapping over all of them just to pull out the name and category.

Unsurprisingly, this was a bit of a problem as he had around 80,000 items.

We modified his controller action to use the pluck method instead, and the time dropped from ~60 seconds to around three*:

@items = Item.pluck(:name, :category)

The moral of the story? Pluck, along with its cousin select, can save a ton of memory and keep your site from slowing to a crawl.

*We later paginated his results, dropping his page loads to comfortably under a second
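For reference, here’s the difference between the two, sketched against the Item model above (the price column is hypothetical): select still returns ActiveRecord objects but only loads the listed columns, while pluck skips object instantiation entirely.

```ruby
# pluck returns raw values straight from the database; no Item objects are built
Item.pluck(:name, :category)
# e.g. [["Sword", "weapons"], ["Shield", "armor"], ...]

# select still returns Item objects, but only populates the listed columns
items = Item.select(:id, :name, :category)
items.first.name   # works
items.first.price  # raises ActiveModel::MissingAttributeError (column not loaded)
```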

Pagination: Don’t Use Kaminari (or Will Paginate)

Pagination is great. It makes everyone’s life easier, and makes page loading faster and more memory-efficient by slicing our thousands of cat pictures into a few pages rather than one never-ending mess. But the two most commonly used gems for this in Rails — Kaminari and Will Paginate — are both slow, inefficient memory hogs.

The alternative? Pagy, a faster, much more memory-efficient gem designed with performance in mind. There’s not too much else to say about it; I’ve used it in a couple of projects now, and in addition to being easy to use, it lives up to its promise of being a better pagination gem.
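For what it’s worth, wiring Pagy up is only a few lines. A minimal sketch (the Person model and the per-page count are just examples):

```ruby
# app/controllers/people_controller.rb
class PeopleController < ApplicationController
  include Pagy::Backend # provides the pagy() helper

  def index
    # pagy returns a metadata object plus just the current page of records
    @pagy, @people = pagy(Person.all, items: 25)
  end
end
```

In the view layer, include Pagy::Frontend in a helper and render the navigation links with pagy_nav(@pagy).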

Concurrency with Threading

Rails supports threading! You might already know this; Sidekiq is a pretty well-known gem. But you don’t need Sidekiq to leverage threading; it’s built right into Ruby.

Threading won’t improve CPU-intensive tasks (MRI’s global VM lock prevents Ruby threads from running in parallel), but if you’ve got pages waiting on blocking I/O, it can make things run a lot more smoothly.

Maybe you’ve got a couple critical pages that more or less just wait on one or two HTTP requests before they become fully functional (perhaps they’re grabbing information from specific pages that will be different depending on what the user did). You probably don’t need to install a gem for a simple case like that; you can implement threading yourself.

The syntax is quite simple:

Thread.new do
  # code that blocks on I/O goes here
end

If you need to get the value of the thread when it resolves (maybe you’ve scraped a page and you want to get the result when it’s finished), you call .value on the end:

require 'open-uri'

def scrape_page(url)
  Thread.new { Nokogiri::HTML(open(url)) }.value
end

And tada, you can open multiple pages simultaneously; the time it will take to retrieve information from all your HTTP requests will be equal to the wait on the slowest page, rather than their total!
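To actually get the “slowest page, not the total” behavior, start all the threads first and only then call .value on each (calling .value immediately after Thread.new would make each request wait for the previous one). A sketch with a simulated fetch standing in for the real Nokogiri call:

```ruby
# Start every request in its own thread, THEN wait on each result;
# total wall time is roughly the single slowest fetch, not the sum of them all.
def fetch_all(urls)
  urls.map { |url| Thread.new { simulated_fetch(url) } } # kick them all off
      .map(&:value)                                      # then collect results
end

# Stand-in for blocking I/O, e.g. Nokogiri::HTML(open(url))
def simulated_fetch(url)
  sleep 0.1
  "body of #{url}"
end

pages = fetch_all(%w[a b c])
```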

Serializing: Ditch .to_json and AMS

If you’re only serializing small sets of data or single objects, this probably doesn’t matter. Continue happily calling render json: your_data on whatever you’re sending to your front end.

However, if you find yourself serializing hundreds or thousands of objects, you should really switch your serialization engine.

Netflix’s Fast JSON API runs circles around Rails’ Active Model Serializers (AMS). It can make a huge difference in page loads where lots of objects are being serialized. On one page where I was serializing 1200 Ruby objects (later paginated and displayed with JS), my page load times went down from an unacceptable ~850ms to a much snappier ~200ms.
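Switching is mostly a matter of defining a serializer class with fast_jsonapi’s DSL. A sketch, assuming a Person model with these attributes and the House association from later in this article:

```ruby
# app/serializers/person_serializer.rb
class PersonSerializer
  include FastJsonapi::ObjectSerializer
  attributes :name, :classification
  belongs_to :house
end

# In a controller action:
# render json: PersonSerializer.new(@people).serialized_json
```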

Fast JSON API leverages Oj (short for Optimized JSON), a fast JSON parser and serializer gem for Ruby that has been around for a while now, so if all you need is JSON parsing (or something to replace the built-in .to_json method), Oj is the way to go.

Lazy-Loading Content

If you’ve got pages where there are a lot of off-screen images or videos, be you on Rails, Sinatra, Node, or Super Ninja Fire Penguin Ultra, you could likely benefit from lazy-loading.

We don’t really notice when we’re on our phones or laptops connected to a high-bandwidth, low-latency WiFi connection, but having dozens of images weighing in at hundreds of KB each adds up pretty quickly.

The problem is this: If you’ve got tons of offscreen media, a browser’s default behavior is to load them all. If you’ve got a fast device and connection, that’s probably fine (if not ideal). If you’re on a slower connection, or a data limited mobile device, this is less fine.

Loading several megabytes of images the user might not ever even see wastes data, bandwidth, processing power, and potentially battery life. While the biggest impact will be felt on higher-latency, lower-powered devices, this can hurt page loads on fast machines too, and sometimes even a small speedup can be the difference in whether or not a page load feels immediate.

Rails’ options are somewhat limited, but I’ve found the lazyload-rails gem works for me in Rails 5.2, even though it hasn’t been updated in a while.

This way, images won’t be loaded unless a user actually scrolls down far enough to see them.

Eager Loading: Kill N+1 Queries

If you see a flood of queries in your console, all related to one single action and all looking suspiciously similar, you’ve probably got an N+1 issue; you’re executing one query to retrieve a parent object, and then N additional queries, one for each child object.

This is something you can easily miss in development if you aren’t using a test database that’s representative of the scale you’ll eventually be working on. But once you hit a certain critical mass, it can sink you.


Continuing with our Game of Thrones theme, let’s say we’ve got a House class which has_many people, and a Person class which belongs_to a house.

We’ve set @people = Person.all in our controller action. Then in our view for our index page, if we’re calling all of our people and displaying the house they belong to like this:

<% @people.each do | person | %>
  <div> <%= person.house %> </div>
<% end %>

…it will load every house one at a time… meaning for every iteration in our loop, we’re sending another query to the database.

You can observe this in your server output, looking something like this:

Person Load (40.7ms) SELECT "people".* FROM "people"
House Load (0.8ms) SELECT "houses".* FROM "houses" WHERE "houses"."id" = ? LIMIT 1 [["id", 1]]
House Load (0.7ms) SELECT "houses".* FROM "houses" WHERE "houses"."id" = ? LIMIT 1 [["id", 3]]
House Load (0.7ms) SELECT "houses".* FROM "houses" WHERE "houses"."id" = ? LIMIT 1 [["id", 5]]
House Load...

On the one hand, we’re putting our database’s IOPS performance to the test, so that’s fun. On the other hand, we’re slowing down our database and therefore our site, in addition to littering our production logs with thousands of lines that really shouldn’t be there.

This happens because ActiveRecord, like most ORMs, lazy loads everything by default, so until we actually ask for something we don’t get it. We’re not asking about houses until we’re inside of a loop, so when we call Person.all, it just loads the necessary Person data, not any associated records (like the house the person belongs to).*

The fix? @people = Person.all.includes(:house)

With this simple use of includes, we’ve eager loaded the associated house records, and so we only have to query the database twice; once for the Person model, and once for the House model. We don’t actually end up loading less data, just hitting the database far fewer times.

Installing the Bullet gem will help you spot and take care of N+1 queries.

*This is of course a very good thing; when we set our variable for our show page, @house_frey = House.find_by(name: "Frey"), we normally don’t want to load every single dead Frey along with it.
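A typical development setup for the Bullet gem mentioned above looks something like this (each of these toggles is optional):

```ruby
# config/environments/development.rb
config.after_initialize do
  Bullet.enable       = true
  Bullet.alert        = true  # pop a JS alert in the browser on an N+1
  Bullet.rails_logger = true  # write warnings to the Rails log
end
```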

Upgrade Ruby Itself

This one’s probably pretty obvious. But if you’re still running Ruby 2.0 (or 2.2, or 2.3…) you’re missing out on a fair bit of performance. Every Ruby version tends to bring speed improvements, often across the board, and this translates over to Rails performance — response times improve even in I/O bound applications. And while Ruby 2.6.0’s JIT isn’t really ready for primetime yet when it comes to boosting Rails’ performance, there will be a time in the probably not-too-distant future when it will be.

New Ruby versions also bring specific improvements and new methods that are often faster for their use cases than less-specialized methods. Ruby 2.4, for example, came with a big performance boost for hashes, and gave us helpful methods like .match? (think JavaScript’s .test), which, when all we want to do is test a string against a regex, is much faster than .match. It also gave us .sum, which in addition to being pithier than .reduce(:+) is also speedier.
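Both are easy wins. .match? returns a plain boolean without allocating a MatchData object, and .sum reads better than the reduce it replaces:

```ruby
names = ["Jon Snow", "Sansa Stark", "Arya Stark"]

# .match? allocates no MatchData object; ideal for a yes/no answer
starks = names.select { |name| name.match?(/Stark\z/) }
# => ["Sansa Stark", "Arya Stark"]

# .sum (Ruby 2.4+) is terser and faster than .reduce(:+)
total = [100, 250, 650].sum
# => 1000
```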

Don’t Ditch Turbolinks!

All right, I know Turbolinks can sometimes be a pain, and the headache might occasionally outweigh the benefits, perhaps especially in larger, more complex apps; if this is you (or if you’ve got a front end like React with client-side routing making Turbolinks useless anyway), I won’t argue the point.

But although it can cause issues we sometimes need to work around, it usually “just works” (in Rails 5+, anyway), and is worth using for the extra speed. Turbolinks really does improve perceived smoothness for the user. You can read here about its benefits and how it works if you remain unconvinced.

Compression (and minification)

The internet wouldn’t work very well without compression. (Actually, computers wouldn’t work well in general.) Sure, we’ve got 100 Mbps+ internet connections, but good luck streaming uncompressed 1080p video on YouTube with that.

By default, when you precompile assets in Rails, your CSS is minified, and Rails at least attempts to minify your JS.

Unfortunately, Uglifier, the de facto Rails JS compressor, doesn’t have ES6+ compression enabled by default yet. Luckily, support is built in (it’s ‘experimental’, but I haven’t had any issues with it); all you have to do is make a small change in your production.rb:

config.assets.js_compressor = Uglifier.new(harmony: true)

And voila, you’ve got minified JavaScript.

If you have control over the assets displayed on your site, make sure they’re stored in efficient formats. If possible, use SVGs where appropriate. Serve JPEGs instead of PNGs if you don’t need image transparency. (WebP is a more efficient option either way, but users will probably hate you if they have any interest in saving those images themselves, as WebP isn’t a format that plays particularly well with others.)

Finally, we can add gzip compression to files that haven’t already been compressed, with Rack::Deflater. In /config/application.rb add the line:

config.middleware.insert_after ActionDispatch::Static, Rack::Deflater

You can go here for an explanation about what it’s doing, and use this neat little tool to let you know what kind of compression ratios you’re getting. This can potentially have a big impact on how fast your site is, especially for users on slower connections.

Final Thoughts: Gems, jQuery, .count and Caching

Don’t Overuse Gems

We’re going to need to use some gems; they provide flexible, extended functionality for our apps, saving us enormous amounts of time and often doing things we probably couldn’t do very well on our own anyway (gems like bcrypt, Devise and OAuth come to mind).

But if you add a gem every time you stub your toe, you’re adding a lot of overhead. Gem-heavy apps can become unwieldy and memory hungry, which can hurt performance and make it harder to upgrade to newer Rails versions in the future due to dropped support or bugs.

Is there a Gem out there that solves a bunch of your problems all at once? Great, go get it. Do you have a small bit of functionality you want to add? Maybe don’t add that huge gem that does 800 things when you’re only going to use one of them. Perhaps find a more specialized gem, or simply add the feature yourself instead of loading a whole library to do the job for you.


Upgrade jQuery

If you’re using jQuery in your Rails app, and you don’t need to support IE < 9, run jQuery 3.x instead of jQuery 1.x. This is as simple as changing the //= require jquery (or require jquery.min) in your application.js file to //= require jquery3.min. You can use jQuery 2 if you need to; whichever you choose, both are faster and smaller than the original.

Use .size, not .count

Don’t use .count in a query unless there’s a specific reason to do so; .size will choose the appropriate way to count for you, issuing a COUNT query when the records aren’t already loaded and simply measuring the loaded collection when they are.

And don’t use .length to determine how many records there are; ActiveRecord doesn’t define a length property, so if you call, for example, Person.all.length instead of Person.all.size, all Person records get loaded and converted into an array, and you’ll end up with results like this:

Person Load (346.0ms) SELECT "people".* FROM "people"

instead of this:

(12.0ms) SELECT COUNT(*) FROM "people"
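Side by side, sketched against the Person model from earlier (the comments describe the queries each call tends to produce):

```ruby
Person.count       # always fires SELECT COUNT(*)
Person.all.size    # fires SELECT COUNT(*) if the records aren't loaded yet
Person.all.length  # loads and instantiates every record just to count them

people = Person.all.to_a
people.size        # records already loaded, so no extra query
```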



Caching

Caching is a wide-ranging, thorny topic, and getting it right ranges from pretty straightforward to pretty tricky. The idea behind caching Rails apps is the same idea behind caching in general: we save frequently used pieces of data so we don’t have to constantly query a database or reach out to a network to retrieve the same information over and over.

Much of this is already done for us by browsers; a user’s browser will cache much of what it comes across, thereby shortening page loads for individual users. Rails also automatically caches things for us, like our actual code, for instance. However, there is still plenty that isn’t automatically cached, and failing to cache in production can lead to poor performance.

Because of the depth of the topic, we’re not going to get into specifics here, but check out the Rails guides on caching; pay particular attention to fragment caching, as it is the most common use-case. Understanding caching in web development is also a helpful and beginner-friendly read.
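As a small taste of fragment caching: wrapping a rendered record in a cache block means Rails only re-renders that fragment when the record’s updated_at changes (this reuses the Person/House models from earlier):

```erb
<% @people.each do |person| %>
  <% cache person do %>
    <div> <%= person.name %> — <%= person.house.name %> </div>
  <% end %>
<% end %>
```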

That’s it! I hope you enjoyed reading, and learned something that will help you speed up your lagging Rails app. :)
