February 27, 2026

Why Your Rails App Is Slow (And How to Fix It)

11 min read

Every slow Rails app I've profiled has the same story. The team assumes the problem is deep and architectural — maybe they need to extract microservices, maybe they need to rewrite a critical path, maybe the app has just outgrown Rails.

Then I open the logs and find 47 database queries on the dashboard endpoint, a missing index on a table with 2 million rows, and emails being sent synchronously inside the request cycle.

The performance problems in most Rails applications are not complex. They're a handful of well-understood mistakes that compound on each other. Fix them and your app gets dramatically faster without touching your architecture.

Here's where your Rails app is actually losing time, and how to fix each problem.

N+1 Queries: The Silent Performance Killer

If I had to pick the single most common performance problem in Rails applications, it's N+1 queries. Every Rails developer has heard of them. Most Rails applications still have them.

Here's what it looks like:

# app/controllers/orders_controller.rb
def index
  @orders = Order.where(status: "pending").limit(20)
end

<%# app/views/orders/index.html.erb %>
<% @orders.each do |order| %>
  <div class="order">
    <p><%= order.customer.name %></p>
    <p><%= order.line_items.count %> items</p>
    <p>Shipped via <%= order.shipping_method.carrier_name %></p>
  </div>
<% end %>

This looks clean. It's also generating 61 queries for 20 orders:

SELECT * FROM orders WHERE status = 'pending' LIMIT 20;   -- 1 query
SELECT * FROM customers WHERE id = 1;                      -- \
SELECT * FROM customers WHERE id = 2;                      --  |
...                                                        --  > 20 queries
SELECT * FROM customers WHERE id = 20;                     -- /
SELECT COUNT(*) FROM line_items WHERE order_id = 1;        -- \
SELECT COUNT(*) FROM line_items WHERE order_id = 2;        --  |
...                                                        --  > 20 queries
SELECT COUNT(*) FROM line_items WHERE order_id = 20;       -- /
SELECT * FROM shipping_methods WHERE id = 1;               -- \
...                                                        --  > 20 queries
SELECT * FROM shipping_methods WHERE id = 20;              -- /

The fix:

def index
  @orders = Order
    .where(status: "pending")
    .includes(:customer, :shipping_method)
    .limit(20)
end

For the line_items.count problem, add a counter cache, and call line_items.size in the view instead of line_items.count (size reads the cached column, while count always issues a COUNT query):

# migration
add_column :orders, :line_items_count, :integer, default: 0

# app/models/line_item.rb
class LineItem < ApplicationRecord
  belongs_to :order, counter_cache: true
end

Result: 61 queries → 3 queries. On a page with real data, this alone can take a response from 800ms to under 100ms.
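What includes does is conceptually simple: load the associated rows in one batch, index them in memory, and let each order find its customer with a hash lookup instead of a query. A plain-Ruby sketch of the idea (hypothetical in-memory structs, no database involved):

```ruby
# Hypothetical in-memory stand-ins for the ActiveRecord models.
Order = Struct.new(:id, :customer_id)
Customer = Struct.new(:id, :name)

orders = [Order.new(1, 10), Order.new(2, 11), Order.new(3, 10)]
customers = [Customer.new(10, "Ada"), Customer.new(11, "Grace")]

# One batched "query" for all customers, indexed by id --
# conceptually what includes(:customer) does for you.
customers_by_id = customers.to_h { |c| [c.id, c] }
names = orders.map { |o| customers_by_id[o.customer_id].name }
# names == ["Ada", "Grace", "Ada"]
```

Three orders, one customer lookup. Replace the array with a table of 2 million rows and the same shape holds: one query per association, not one per record.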

The tooling exists to catch these automatically. The Bullet gem will flag N+1 queries in development. Rails 6.1+ has built-in strict loading:

class Order < ApplicationRecord
  self.strict_loading_by_default = true
end

This raises an error any time you lazy-load an association, forcing you to declare your includes explicitly. Turn it on. Your future self will thank you.
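If raising in production feels too aggressive while you work through existing violations, Rails (6.1+) can log them instead of raising:

```ruby
# config/environments/production.rb
# Log strict-loading violations rather than raising,
# while development and test keep the default :raise.
config.active_record.action_on_strict_loading_violation = :log
```

That way production traffic surfaces the offenders in your logs without breaking pages, and you fix them at your own pace.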

Missing Database Indexes

This one is embarrassingly common and embarrassingly impactful. I regularly encounter Rails applications where foreign keys have no index, where columns used in WHERE clauses have no index, and where the developers have never run EXPLAIN ANALYZE on their slow queries.

Consider this:

# This is slow on a large table without an index
User.where(email: "john@example.com").first

# This scope runs on every authenticated request
Order.where(customer_id: current_user.id, status: "active")

Without indexes, both of these trigger full table scans. On a table with 100,000 rows, that might add 50-200ms per query. On a table with 2 million rows, you're looking at multi-second queries.

Check what you're missing:

-- Find tables with no indexes on foreign keys (PostgreSQL)
SELECT
  conrelid::regclass AS table_name,
  a.attname AS column_name
FROM pg_constraint c
JOIN pg_attribute a ON a.attnum = ANY(c.conkey) AND a.attrelid = c.conrelid
WHERE c.contype = 'f'
AND NOT EXISTS (
  SELECT 1 FROM pg_index i
  WHERE i.indrelid = c.conrelid
  AND a.attnum = ANY(i.indkey)
);

Add the indexes that matter:

# migration
class AddMissingIndexes < ActiveRecord::Migration[7.1]
  def change
    # No separate index on customer_id alone: the composite index's
    # leading column already serves those lookups.
    add_index :orders, [:customer_id, :status]
    add_index :users, :email, unique: true
    add_index :line_items, :order_id
  end
end

Measure the difference with EXPLAIN ANALYZE:

-- Before index
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 4823 AND status = 'active';
-- Seq Scan on orders  (cost=0.00..45892.00 rows=12 width=184) (actual time=312.45..312.89 rows=12 loops=1)

-- After composite index
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 4823 AND status = 'active';
-- Index Scan using index_orders_on_customer_id_and_status  (cost=0.42..8.44 rows=12 width=184) (actual time=0.03..0.05 rows=12 loops=1)

From 312ms to 0.05ms. That's not a Ruby problem.

A good rule of thumb: every belongs_to should have an index on its foreign key. Every column you filter or sort by in production queries should have an index. Run EXPLAIN ANALYZE on your slowest endpoints and you'll often find the fix is a one-line migration.
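New associations get this for free: add_reference (and t.references inside create_table) creates the foreign-key index by default in modern Rails, so the audit is mostly about older tables. A sketch (hypothetical migration name):

```ruby
# migration -- add_reference creates the customer_id index by default
class AddCustomerToOrders < ActiveRecord::Migration[7.1]
  def change
    add_reference :orders, :customer, foreign_key: true
  end
end
```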

You're Not Caching Anything

Rails ships with one of the most elegant caching systems in any web framework. Fragment caching, Russian doll caching, low-level caching with Rails.cache — it's all built in, well-documented, and largely ignored by most applications I've worked on.

Here's a common scenario: your dashboard page hits the database on every single request, assembling the same data for the same user, even though the underlying data changes once an hour.

Before — no caching:

def dashboard
  @recent_orders = current_user.orders.includes(:line_items).recent.limit(10)
  @stats = {
    total_revenue: current_user.orders.completed.sum(:total),
    order_count: current_user.orders.completed.count,
    avg_order_value: current_user.orders.completed.average(:total)
  }
end

Every request runs all five of these queries: two to load the orders and their line items, three for the stats. Multiply by 500 users hitting the page throughout the day and you're running thousands of unnecessary queries per hour.

After — low-level caching:

def dashboard
  @recent_orders = current_user.orders.includes(:line_items).recent.limit(10)
  @stats = Rails.cache.fetch("user_#{current_user.id}_dashboard_stats", expires_in: 15.minutes) do
    {
      total_revenue: current_user.orders.completed.sum(:total),
      order_count: current_user.orders.completed.count,
      avg_order_value: current_user.orders.completed.average(:total)
    }
  end
end

Fragment caching in views:

<% @recent_orders.each do |order| %>
  <% cache order do %>
    <div class="order-card">
      <p><%= order.customer.name %></p>
      <p><%= order.total %></p>
      <p><%= order.line_items.count %> items</p>
    </div>
  <% end %>
<% end %>

When an order hasn't changed, Rails serves the cached HTML fragment without touching the database or running the view template. When the order updates, the cache key automatically invalidates.
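The automatic invalidation works because the cache key is derived from the record itself, roughly its model name, id, and updated_at timestamp. Touching the record produces a new key, and the stale entry simply goes cold and gets evicted. A simplified plain-Ruby sketch of the idea (not the exact format Rails uses internally):

```ruby
# Simplified sketch: a recyclable cache key that changes whenever
# the record's updated_at changes. Not the exact Rails key format.
def fragment_cache_key(model, id, updated_at)
  "#{model}/#{id}-#{updated_at.utc.strftime('%Y%m%d%H%M%S')}"
end

key = fragment_cache_key("orders", 42, Time.utc(2026, 2, 27, 10, 30, 0))
# key == "orders/42-20260227103000"
```

This is also why you never write manual cache expiration code for fragment caches: updating the record is the expiration.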

For API responses, consider caching the entire serialized payload:

def show
  @order = Order.find(params[:id])
  json = Rails.cache.fetch(@order) do
    OrderSerializer.new(@order).to_json
  end
  render json: json
end

The difference between a cached and uncached dashboard isn't 10-20%. It's often 5-10x faster.
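All of these variations rest on the same fetch contract: return the stored value on a hit; on a miss, run the block, store its result, and return it. A toy in-memory version makes the contract concrete (illustration only; Rails.cache adds expiry, serialization, and race handling on top):

```ruby
# Toy cache illustrating the fetch contract behind Rails.cache.fetch.
store = {}
calls = 0

fetch = lambda do |key, &block|
  store.fetch(key) { store[key] = block.call }
end

results = 2.times.map do
  fetch.call("order/42/json") { calls += 1; "serialized order json" }
end
# calls == 1: the expensive block ran only once
```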

You're Doing Too Much Inside the Request

Every millisecond your controller spends doing work is a millisecond the user is waiting. And too many Rails apps do things during the request that don't need to happen before the response is sent.

Common offenders:

# Sending email synchronously
def create
  @order = Order.create!(order_params)
  OrderMailer.confirmation(@order).deliver_now  # 200-500ms blocking
  redirect_to @order
end

# Generating a PDF inline
def invoice
  @order = Order.find(params[:id])
  pdf = InvoicePdf.new(@order).render  # 300-800ms blocking
  send_data pdf, filename: "invoice_#{@order.id}.pdf"
end

# Hitting a third-party API during a web request
def create
  @order = Order.create!(order_params)
  InventoryService.reserve_stock(@order)      # external HTTP call, 100-2000ms
  ShippingApi.calculate_rates(@order)          # another external call
  redirect_to @order
end

Move this work to background jobs:

def create
  @order = Order.create!(order_params)
  OrderConfirmationJob.perform_later(@order.id)
  StockReservationJob.perform_later(@order.id)
  redirect_to @order
end

# app/jobs/order_confirmation_job.rb
class OrderConfirmationJob < ApplicationJob
  queue_as :default

  def perform(order_id)
    order = Order.find(order_id)
    OrderMailer.confirmation(order).deliver_now
  end
end

The user gets their response immediately. The email sends in the background. The stock reservation happens asynchronously. The response time drops from 1-3 seconds to under 200ms.

If you're on Rails 8, Solid Queue is built right into the framework — no Redis dependency needed. For older versions, Sidekiq remains the standard. The key principle is the same: if it doesn't need to happen before the user sees the response, it shouldn't.
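The mechanics underneath any of these backends are the same: the request thread pushes a description of the work onto a queue and returns immediately, and a separate worker pops and executes. A toy sketch with a plain Ruby Queue (illustration only; use Solid Queue or Sidekiq, not raw threads, in a real app):

```ruby
jobs = Queue.new

# Worker loop: pop and run jobs until a nil sentinel arrives.
worker = Thread.new do
  while (job = jobs.pop)
    job.call
  end
end

sent_emails = []
# The "request" enqueues and returns without waiting for delivery.
jobs << -> { sent_emails << :order_confirmation }

jobs << nil  # sentinel: shut the worker down
worker.join
# sent_emails == [:order_confirmation]
```

Real backends add the parts that matter in production: persistence across restarts, retries with backoff, and visibility into failed jobs.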

Instrument the Code You're Suspicious Of

Sometimes the slow endpoint isn't explained by a single N+1 or missing index. The work is spread across multiple method calls, service objects, and callbacks — and you need to know exactly which piece is eating the time.

This is where custom instrumentation pays for itself. Rails has ActiveSupport::Notifications built in, and wrapping suspicious blocks of code in instrumentation is one of the most underused debugging techniques I see.

Wrap any block of code to measure it:

def process_order(order)
  ActiveSupport::Notifications.instrument("process_order.app", order_id: order.id) do
    validate_inventory(order)
    calculate_tax(order)
    charge_payment(order)
    generate_confirmation(order)
  end
end

Go more granular when you need to isolate the bottleneck:

def process_order(order)
  ActiveSupport::Notifications.instrument("validate_inventory.app") do
    validate_inventory(order)
  end

  ActiveSupport::Notifications.instrument("calculate_tax.app") do
    calculate_tax(order)
  end

  ActiveSupport::Notifications.instrument("charge_payment.app") do
    charge_payment(order)
  end

  ActiveSupport::Notifications.instrument("generate_confirmation.app") do
    generate_confirmation(order)
  end
end

Subscribe to these events in an initializer:

# config/initializers/instrumentation.rb
ActiveSupport::Notifications.subscribe(/\.app$/) do |name, start, finish, id, payload|
  duration = (finish - start) * 1000
  Rails.logger.info "[PERF] #{name} took #{duration.round(2)}ms #{payload}"
end

Now your logs will tell you exactly where time is being spent:

[PERF] validate_inventory.app took 12.34ms {order_id: 4823}
[PERF] calculate_tax.app took 3.21ms {order_id: 4823}
[PERF] charge_payment.app took 847.92ms {order_id: 4823}
[PERF] generate_confirmation.app took 5.67ms {order_id: 4823}

There's your bottleneck. The payment gateway call is taking 850ms. That's a background job candidate, not something you'll solve by staring at Ruby benchmarks.

For quick, one-off measurements, Ruby's built-in Benchmark module works too:

require 'benchmark'

time = Benchmark.measure do
  Order.includes(:customer, :line_items).where(status: "pending").to_a
end

Rails.logger.info "Order query took: #{time.real.round(4)}s"
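One caveat about the subscriber above: its start and finish arguments are wall-clock Times, which can jump if the system clock adjusts mid-request. When all you need is a duration, a monotonic clock is safer. A minimal helper (an illustration, not part of Rails):

```ruby
# Measure a block's duration in milliseconds with a monotonic clock,
# which never jumps backwards the way wall-clock time can.
def measure_ms
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000.0
end

elapsed = measure_ms { sleep 0.05 }
# elapsed is roughly 50ms
```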

The habit of instrumenting before optimizing will save you from guessing. Guessing is how teams spend a week optimizing something that accounts for 2% of their response time.

Your Frontend Is Slower Than Your Backend

Here's one that Rails developers often overlook entirely: the slowness your users are experiencing might not be server-side at all.

I've seen teams spend weeks optimizing database queries to shave 50ms off server response time while ignoring:

  • An uncompressed JavaScript bundle over 2MB because nobody configured the asset pipeline properly
  • Render-blocking CSS and JS in the <head> that prevents anything from painting until the entire bundle downloads
  • No CDN in front of static assets, so every image, stylesheet, and JavaScript file is served from the application server
  • Images that aren't optimized or lazy-loaded, forcing the browser to download 15MB of product photos before the page is interactive

Quick wins:

# config/environments/production.rb

# Enable asset compression
config.assets.css_compressor = :sass
config.assets.js_compressor = :terser

# Serve static files with far-future headers for caching
config.public_file_server.headers = {
  'Cache-Control' => 'public, max-age=31536000'
}

<%# Lazy load images below the fold %>
<%= image_tag product.image_url, loading: "lazy" %>

<%# Defer non-critical JavaScript %>
<%= javascript_include_tag "analytics", defer: true %>

If you're using Webpack or esbuild, audit your bundle size. Tools like webpack-bundle-analyzer will show you exactly which libraries are bloating your frontend. I've seen applications shipping all of Moment.js (300KB+) when they needed a single date formatting function.

The point isn't that frontend performance is a Rails problem. It's that teams focus exclusively on server-side optimization while the user is actually waiting on asset downloads and browser rendering — things your backend never touches.

You Can't Fix What You Don't Measure

Everything I've described above follows from a single principle: measure first, optimize second. The teams that struggle with performance aren't the ones with the hardest problems — they're the ones guessing at where the slowness lives instead of measuring it.

In development, tools like rack-mini-profiler and Bullet give you immediate visibility into query counts and N+1 issues. But development doesn't reflect production. Your local database has 50 rows; production has 2 million. Your local machine has no network latency to external services. The performance characteristics are fundamentally different.

For production, you need proper application performance monitoring. I'm a fan of AppSignal — it combines performance monitoring, error tracking, uptime monitoring, and host metrics in a single tool. What I like about it is that it gives you a clear view of where time is being spent per request: how long in the database, how long in the view, how long in external HTTP calls. It also automatically detects N+1 queries and slow transactions without you having to configure anything.

The custom instrumentation events from the previous section (ActiveSupport::Notifications.instrument) automatically show up in AppSignal's event timeline too, so you get granular breakdowns of your own business logic alongside Rails' built-in instrumentation.

The full measurement stack I recommend:

# Gemfile

# Development
group :development do
  gem 'rack-mini-profiler'
  gem 'bullet'
  gem 'stackprof'        # CPU profiling
  gem 'memory_profiler'  # memory profiling
end

# Production (and staging)
gem 'appsignal'

The development tools help you catch problems before they ship. AppSignal helps you find the problems that only show up at scale with real data and real traffic.

The workflow is simple: AppSignal shows you which endpoints are slow and why. You reproduce the issue locally with rack-mini-profiler and EXPLAIN ANALYZE. You fix it, deploy, and verify the improvement in AppSignal's dashboards. Repeat.

Before you rearchitect anything — measure your application. The fix is almost certainly a missing includes, an absent index, or a job that should be running in the background.

Most slow Rails apps are one good afternoon of profiling away from being fast ones.

Every Engagement Starts with an Assessment

We begin every project with a $2,000 flat-fee assessment to evaluate your codebase and deliver a fixed proposal. If you proceed, the assessment fee is deducted from the project cost.

Request an Assessment →