March 5, 2026
12 min read
Every new engagement starts the same way. Before any code gets written or any recommendations get made, the application gets audited. Not a cursory glance at the README — a structured evaluation of the codebase, its infrastructure, and its operational health.
The goal isn't to produce a laundry list of everything that's wrong. It's to understand the application's current state, identify the highest-impact risks, and build a prioritized roadmap for improvement. A good audit answers three questions: where is the pain coming from, what's going to break next, and what should get fixed first.
Here's the process.
The framework and language versions come first because they frame everything else. The Ruby and Rails versions reveal what security exposure exists, how far the application is from current support, and how constrained it is in adopting modern tooling.
Check the Gemfile for the Rails version, the .ruby-version file for the Ruby version, and cross-reference both against their end-of-life dates. If the application is running on an unsupported version of either — and many are — that becomes priority one.
Beyond EOL status, the version pairing says a lot about the application's history. A Rails 6.1 app on Ruby 2.7 suggests an application that hasn't been upgraded since around 2021. A Rails 7.2 or 8.0 app on Ruby 3.3+ means the stack is actively maintained. It's also worth checking whether the app is on the latest patch version — being on Rails 7.1.0 when 7.1.6 exists means security fixes are being missed even within a supported version.
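The version check is mechanical enough to script. A minimal sketch, assuming a hand-maintained table of EOL dates (the dates below match the published Ruby maintenance schedule at the time of writing, but verify against ruby-lang.org before relying on them):

```ruby
require "date"

# End-of-life dates per Ruby minor series (verify against ruby-lang.org).
RUBY_EOL = {
  "2.7" => Date.new(2023, 3, 31),
  "3.0" => Date.new(2024, 4, 23),
  "3.1" => Date.new(2025, 3, 31),
}.freeze

def ruby_eol_status(version, today: Date.today)
  series = version.split(".")[0, 2].join(".")  # "2.7.4" -> "2.7"
  eol = RUBY_EOL[series]
  return :unknown unless eol
  today > eol ? :end_of_life : :supported
end

puts ruby_eol_status("2.7.4")  # 2.7 is past its EOL date
```

The same shape works for Rails series; the point is to make the support-status check repeatable rather than a one-off lookup.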
The Gemfile is a window into the application's history. It's not just about what gems are installed — it's about how healthy the overall dependency tree is.
Start with bundle audit to check for known vulnerabilities. Any CVEs flagged here are immediate findings. Then look at the broader picture.
How many gems are in the Gemfile? Above 60-70 direct dependencies, bloat becomes likely — gems added for a one-time feature that never got removed, gems that duplicate functionality, gems that pulled in large dependency trees for a small piece of functionality.
Read through the Gemfile line by line. Look for gems that have been superseded by built-in Rails functionality, gems that solve problems the application may no longer have, and gems that are unrecognizable — which often turn out to be unmaintained projects or abandoned experiments.
Check the maintenance status of critical dependencies. For each gem the application relies on heavily — authentication, authorization, background jobs, file uploads — check when the last release was and whether it supports the current Rails version. A core dependency that hasn't been updated in two years is a risk worth noting.
Also look for forked gems — git: or github: references pointing to personal repositories rather than the official gem. These mean someone is carrying a private fork that needs to be maintained indefinitely.
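Counting direct dependencies and flagging forked gems can be done with a quick text scan. A hypothetical sketch — the sample Gemfile content is invented for illustration:

```ruby
# Invented Gemfile content, standing in for the audited application's file.
GEMFILE = <<~RUBY
  gem "rails", "~> 7.1"
  gem "pg"
  gem "devise"
  gem "some_gem", github: "someone/private-fork"
  gem "other_gem", git: "https://github.com/someone/other_gem.git"
RUBY

# Direct dependencies: lines declaring a gem by name.
direct_deps = GEMFILE.scan(/^\s*gem\s+["']([^"']+)["']/).flatten

# Forked gems: declarations pointing at a git/github source.
forked = GEMFILE.lines.select { |line| line =~ /\b(git|github):/ }

puts "#{direct_deps.size} direct dependencies"
forked.each { |line| puts "forked: #{line.strip}" }
```

A real audit would read the actual Gemfile from disk, but even this rough scan turns "read the Gemfile line by line" into a number and a short list of follow-ups.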
Next, the test suite. Don't just look at the coverage percentage — look at what's actually being tested and how.
First: does the test suite run at all? It's not unusual to find applications where the suite has been broken for weeks or months and the team has been deploying without running tests. That reframes the entire audit — if code is shipping without tests, every other finding needs to be evaluated with that in mind.
If the tests do run, time the full suite. A test suite that takes 30+ minutes tends to get skipped in practice, which means regressions slip through more often.
Then look at what's actually being tested. A high coverage number can be misleading — 90% coverage that's all model validations and zero controller tests gives a false sense of safety. Open a few test files and assess the quality. Are they testing behavior or just testing that ActiveRecord methods exist? Are there integration tests that exercise real user flows? Is there any coverage on the critical paths — authentication, payment processing, data mutations — or are those precisely the parts that got skipped?
Also check the CI configuration. Tests that only run locally tend not to run consistently. Ideally, tests run automatically on every pull request and gate deployments.
Red flags: broken test suites, coverage below 60%, zero coverage on critical paths, no CI, test suites over 30 minutes without parallelization, large numbers of skipped or pending tests.
What good looks like: suite runs green in CI on every PR, coverage above 80% with meaningful tests on critical paths, mix of unit and integration tests, suite under 10 minutes with parallelization.
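When suite time is the bottleneck, Rails 6+ ships minitest parallelization out of the box; enabling it is a one-liner in test/test_helper.rb:

```ruby
# test/test_helper.rb — Rails' built-in parallel test runner (Rails 6+).
class ActiveSupport::TestCase
  parallelize(workers: :number_of_processors)
end
```

It won't fix a fundamentally slow suite, but it's usually the cheapest first lever before restructuring tests.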
The database is where most Rails performance problems live, so this section gets a disproportionate amount of attention.
The approach depends on what tooling the team already has in place. If they're running AppSignal, New Relic, or Datadog, start there — the slowest queries and most-hit endpoints are already surfaced. If pg_stat_statements is enabled (and it should be), that gives a ranked list of the most expensive queries in production. Even the Rails logs are useful — they show query counts and timing per request.
Look for the usual suspects: missing indexes on foreign keys and commonly filtered columns, N+1 patterns that made it to production, queries doing sequential scans on large tables. The top 10 slowest queries reveal more about the application's performance characteristics than almost anything else in the audit.
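When a missing index turns up, the remediation is usually a one-line migration. A hedged sketch — the table and column names here are invented, not from any particular codebase:

```ruby
# Hypothetical migration: index a foreign key that queries filter on.
class AddIndexToAppointmentsPatientId < ActiveRecord::Migration[7.1]
  def change
    add_index :appointments, :patient_id
  end
end
```

On large production tables, adding the index concurrently (with `algorithm: :concurrently` on Postgres) avoids locking the table during the migration.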
Also check the database configuration in database.yml. Connection pool size is a common misconfiguration. The pool is per-process, so it needs to be at least as large as the number of threads each process runs — a Puma worker running 5 threads against a pool of 3 will have requests blocking on connection checkout under load — and the total across all workers still has to fit within the database's max_connections limit.
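Quick arithmetic makes the pool check mechanical; the numbers below are illustrative, not from a real deployment:

```ruby
# The pool is per-process: it must cover the threads in one worker, and
# workers * pool must fit within the database's max_connections.
puma_workers       = 10
threads_per_worker = 5
pool_size          = 3    # misconfigured: smaller than the thread count
db_max_connections = 100  # Postgres default

pool_ok  = pool_size >= threads_per_worker
total_ok = puma_workers * pool_size <= db_max_connections

puts pool_ok   # false — threads will block waiting for a connection
puts total_ok  # true — total connections fit, but the per-process pool is short
```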
Beyond performance, look at how well the data is normalized. Denormalized data and redundant columns are common in applications that have grown quickly — the same information stored in multiple places, columns that could be derived from other tables, or data that's been flattened for convenience but now causes consistency issues when it needs to be updated. Also check for heavy use of serialized columns (JSON blobs, YAML) for data that should be in its own table, and polymorphic associations that may be causing performance issues at the current data volume.
Red flags: unindexed foreign keys, slow queries visible in monitoring, connection pool misconfiguration, serialized columns for structured data, no visibility into query performance at all.
What good looks like: indexes on foreign keys and commonly queried columns, visibility into slow queries through monitoring, connection pool sized appropriately for the concurrency model, well-normalized table design.
This section often uncovers problems that have been silently accumulating for the entire life of the application. Data integrity issues don't cause crashes — they cause subtle incorrectness that compounds over time.
Start by checking whether the database has foreign key constraints. Rails doesn't add them by default — it relies on application-level validations. This means that if a record is deleted through a mechanism that bypasses ActiveRecord (a direct SQL query, a bulk delete, a console session), orphaned references can be created with no error. It's common to find applications with thousands of orphaned records — appointments pointing to deleted patients, line items referencing products that no longer exist.
Check for missing NOT NULL constraints on columns that should never be null. Rails validations only apply when saving through ActiveRecord. Without a database-level constraint, null values creep in through direct SQL, bulk imports, console operations, or race conditions.
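Both checks usually end in the same kind of remediation: a migration that pushes the invariants down to the database. A sketch with invented table names:

```ruby
# Hypothetical migration: enforce referential integrity and presence at
# the database level, not just in ActiveRecord validations.
class TightenAppointmentConstraints < ActiveRecord::Migration[7.1]
  def change
    add_foreign_key :appointments, :patients          # reject orphaned references
    change_column_null :appointments, :patient_id, false  # forbid NULLs
  end
end
```

Existing orphans and NULLs have to be cleaned up first, or the migration itself will fail — which is exactly why these findings get a data-cleanup task attached.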
Also check whether multi-step data operations are wrapped in transactions. If creating an order involves creating the order record, the line items, and a payment record, can the process fail partway through and leave a partially created order in the database?
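The order example could be wrapped like this — a sketch with hypothetical model names, not code from any audited application:

```ruby
# All-or-nothing: if any create! raises, every write inside the block
# rolls back and no partially created order is left behind.
ActiveRecord::Base.transaction do
  order = Order.create!(customer: customer)
  items.each { |item| order.line_items.create!(product: item.product, quantity: item.quantity) }
  Payment.create!(order: order, amount: order.total)
end
```

Note the bang methods: `create` (without `!`) returns an invalid record instead of raising, so it would not trigger the rollback.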
Code architecture is the most subjective part of the audit, but also one of the most revealing. The question is how the application is organized and whether its structure makes it easy or difficult to work on.
Look at how the application handles business logic. Is it all in models and controllers, or has the team adopted patterns for managing complexity? Service objects, form objects, query objects, interactors — the specific pattern matters less than having some consistent approach to organizing complex logic.
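For concreteness, one common shape for a service object: a single public entry point that owns one piece of business logic. A sketch with an invented domain, not a prescription:

```ruby
# Stand-in for an ActiveRecord model in this self-contained sketch.
Subscription = Struct.new(:status, :cancelled_at)

class CancelSubscription
  def initialize(subscription, clock: Time)
    @subscription = subscription
    @clock = clock  # injected so tests can control time
  end

  def call
    @subscription.status = "cancelled"
    @subscription.cancelled_at = @clock.now
    # In a real app: notify billing, enqueue follow-up jobs, etc.
    @subscription
  end
end

sub = CancelSubscription.new(Subscription.new("active", nil)).call
puts sub.status  # "cancelled"
```

Whether the team uses this shape, interactors, or plain model methods matters less than the consistency of whichever convention they've picked.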
Look at the models to understand how responsibilities are distributed. Are there models that have taken on too many roles — handling business logic, callbacks, API integrations, and data transformations all in one place? The issue isn't the size of any single file, it's whether the responsibilities are clear enough that a developer can make a change confidently without worrying about unexpected side effects elsewhere.
Scan for patterns that tend to cause problems as the application grows, like deeply nested callbacks that create invisible execution chains or circular dependencies between models.
The real question is: if a new developer joined this team, how long would it take them to understand where things live and feel comfortable making changes?
If the application uses background jobs — and most non-trivial Rails applications do — the job infrastructure is worth evaluating.
Check which queue backend is in use and whether it's appropriate for the application's scale.
Look at how queues are organized. Everything on a single default queue means that a batch of 10,000 analytics jobs will block time-sensitive work like password reset emails. Examine individual job classes for red flags: jobs that make external API calls without timeouts, jobs that process entire batches instead of individual units, and jobs that aren't idempotent — if a job fails and retries, will it send the same email twice or charge a customer twice?
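Idempotency is worth a concrete illustration. A toy sketch: remember which job IDs have already been processed so a retry doesn't repeat the side effect. A real system would persist the marker (e.g. a unique database column or key), not keep it in memory:

```ruby
require "set"

class EmailJob
  PROCESSED = Set.new  # in-memory only; a real app needs durable storage

  def self.perform(job_id)
    # Set#add? returns nil when the element was already present.
    return :skipped unless PROCESSED.add?(job_id)
    # ... send the email here ...
    :sent
  end
end

first_run = EmailJob.perform("welcome-42")
retry_run = EmailJob.perform("welcome-42")  # simulate a retry after a failure
puts first_run  # :sent
puts retry_run  # :skipped
```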
Check retry and error handling configuration. Are retries configured with appropriate backoff? Is anyone monitoring the dead job queue? A large number of dead jobs is a sign that errors are being ignored. Queue latency — the time between enqueue and execution — reveals whether time-sensitive work is actually being processed promptly.
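With ActiveJob, retries with backoff are declared per job class. A minimal sketch — the error classes are examples, and `:polynomially_longer` is the Rails 7.1+ name for what earlier versions call `:exponentially_longer`:

```ruby
class SyncJob < ApplicationJob
  retry_on Net::ReadTimeout, wait: :polynomially_longer, attempts: 5
  discard_on ActiveJob::DeserializationError  # retrying can't fix these
end
```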
The key question: if something breaks in production right now, how does the team find out?
The best case is a proper monitoring service — Sentry, AppSignal, Honeybadger — that captures exceptions with full context and alerts the team. The worst case is that errors are only visible in production logs that nobody reads. In between, there are teams that have error monitoring configured but have let it get so noisy that the signal is lost. An error tracker with 5,000 unresolved issues stops being useful pretty quickly.
Beyond error tracking, evaluate what observability the team has into performance. Can they identify which endpoints are slowest? Can they see database query times? Do they have visibility into background job throughput and failure rates? Do they track deploy events so they can correlate performance changes with code changes?
If there's no observability tooling at all, that goes in the report. AppSignal is a solid choice — it covers performance monitoring, error tracking, and host metrics in a single tool with excellent Rails integration. But the specific tool matters less than having something in place.
If the application exposes an API — to a frontend SPA, a mobile app, or external consumers — the API design is worth evaluating. API problems are especially painful because they're hard to fix without breaking clients.
Look at whether there's a versioning strategy. Applications that serve an API without versioning have no path to making breaking changes without coordinating simultaneous deploys across every client. Check the response structure for consistency — are all endpoints returning data in the same format, or does each endpoint have its own ad hoc structure? Are error responses standardized?
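Path-based versioning is the most common remedy; in Rails routes it looks roughly like this (resource names are illustrative):

```ruby
# config/routes.rb — /api/v1/orders, with room for a /api/v2 later.
namespace :api do
  namespace :v1 do
    resources :orders, only: [:index, :show, :create]
  end
end
```

The controllers live under `Api::V1`, so a breaking change can ship as `Api::V2` while old clients keep hitting v1.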
Check whether authentication is applied consistently across all endpoints. And look at whether the API has any documentation — even a Postman collection or basic OpenAPI spec makes onboarding new developers and debugging integration issues significantly easier.
An audit should establish measurable numbers that describe how the application performs today, so that future changes can be evaluated against a known starting point.
Look at the average and p95 response times across the application's endpoints. The average shows the typical experience; the p95 shows how bad the worst case is. An application with a 200ms average and a 3-second p95 has a long tail of slow requests that needs investigation.
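How the p95 relates to the average is easiest to see with numbers. The response-time samples below are invented for illustration:

```ruby
# Nearest-rank-style percentile over a set of samples.
def percentile(samples, pct)
  sorted = samples.sort
  sorted[((pct / 100.0) * (sorted.size - 1)).round]
end

times_ms = [120, 150, 180, 200, 210, 250, 300, 450, 900, 3000]
avg = times_ms.sum / times_ms.size.to_f
p95 = percentile(times_ms, 95)

puts avg  # 576.0 — the mean is pulled up by the tail
puts p95  # 3000 — the slow tail the average hides
```

Monitoring tools compute this for you; the point is that the two numbers answer different questions, and an audit baseline should record both.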
Identify the slowest endpoints specifically — usually 5-10 that account for the majority of the pain. Check the total number of queries per request on key endpoints. An endpoint firing 50+ queries is almost certainly doing something wrong regardless of how fast each individual query runs.
If there's no performance monitoring in place, that goes in the report — without baselines, there's no way to measure whether future changes are actually helping.
The output of this process is a prioritized report — not a 50-page document that sits in a drawer, but a focused, actionable summary organized by severity.
Critical findings go at the top: unsupported framework versions, known security vulnerabilities, broken test suites, missing error monitoring, data integrity issues actively corrupting production data. These need immediate attention.
High-priority findings come next: missing database indexes on critical queries, job infrastructure that can't keep up, missing database constraints. These aren't emergencies today but will become emergencies if left alone.
Everything else — code architecture improvements, test coverage gaps, API design inconsistencies — goes into a roadmap that can be tackled incrementally alongside feature work.
Each finding comes with a recommendation and a rough effort estimate — everything needed to make informed decisions about what to tackle and when.
The goal is never to overwhelm. It's to provide clarity on where things stand and a concrete plan for what to address first. Every Rails application accumulates issues over time. An audit just makes them visible so they can be dealt with on your terms.
We begin every project with a $2,000 flat-fee assessment to evaluate your codebase and deliver a fixed proposal. If you proceed, the assessment fee is deducted from the project cost.
Request an Assessment →