March 9, 2026
14 min read
The monolith gets a bad reputation it doesn't deserve. Somewhere along the way, the industry decided that a single Rails application serving an entire business was a problem to be solved rather than an architecture to be maintained. Teams that should be shipping features are instead splitting their application into services, adding network boundaries between code that was working perfectly well together, and trading one set of problems for a much harder set of problems.
A Rails monolith is the right architecture for most applications. It's the right architecture for most teams. And with the right structure, it's an architecture that can serve a business for a decade or longer without becoming the unmaintainable mess that people fear.
The monolith doesn't fail because it's a monolith. It fails when teams stop being intentional about how it's organized. This article is about staying intentional.
The case for microservices usually starts with a scaling argument and ends with an organizational one. The application is too big. The team is stepping on each other's toes. Deployments are too risky. The monolith is "holding us back."
Most of the time, the real problem is that the code isn't well-organized — and splitting it into services doesn't fix that. It just moves the mess behind network boundaries where it's harder to see and harder to fix. Instead of a tangled monolith, you get a distributed system with the same tangled logic spread across services that now need to coordinate over HTTP, manage eventual consistency, and debug failures across network calls. The complexity didn't decrease. It just got more expensive.
A monolith gives you things that are genuinely hard to replicate in a distributed system: a single database with transactional integrity, a single deploy pipeline, a single test suite that can exercise the full application, refactoring with the confidence that the compiler and test suite will catch breakage, and a development environment that runs on a single rails server command. These aren't limitations. They're advantages.
The teams that do well with microservices are usually large enough to dedicate entire teams to individual services, with the infrastructure investment to support distributed tracing, service meshes, and independent deployment pipelines. For everyone else — and that's most companies — the monolith is where you should be.
The question isn't whether to use a monolith. It's how to structure one that doesn't degrade as it grows.
The single most important structural decision in a long-lived monolith is how domain boundaries are drawn. Without clear boundaries, everything eventually depends on everything else, and every change risks cascading side effects across unrelated parts of the application.
Rails doesn't enforce domain boundaries by default. The framework gives you app/models, app/controllers, and a flat namespace, and it's perfectly happy to let any model talk to any other model from anywhere. That works fine at 20 models. At 200, it's a problem.
The goal is to organize code so that related concepts live together and unrelated concepts interact through clear, narrow interfaces. There are a few ways to do this in Rails.
The simplest approach is namespace conventions. Group models, services, and jobs by domain rather than by type. Instead of a flat app/models/ with 150 files, create app/models/billing/, app/models/scheduling/, app/models/notifications/. Do the same with services, controllers, and jobs. This is a convention — Rails won't enforce it — but it communicates intent and makes it obvious when code is reaching across domains. When the team wants enforcement rather than convention, extracting domains into Rails engines or adopting a boundary-checking tool like Packwerk can make the lines explicit; the directory structure alone still goes a long way.
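As a sketch, a domain-grouped model might look like this. The Billing domain and its Invoice model are hypothetical names, not a prescribed layout:

```ruby
# app/models/billing/invoice.rb -- a hypothetical Billing domain model.
# Grouping by domain means the namespace, not just the directory,
# reflects the boundary: Billing::Invoice instead of a top-level Invoice.
module Billing
  class Invoice
    attr_reader :customer_id, :line_items

    def initialize(customer_id:, line_items: [])
      @customer_id = customer_id
      @line_items = line_items
    end

    # Domain logic lives alongside the domain's models.
    def total_cents
      line_items.sum { |item| item.fetch(:amount_cents) }
    end
  end
end
```

In a real application, the Billing module can also define a table_name_prefix (e.g. "billing_") so the domain's tables group together in the schema the same way its files group together on disk.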
If this sounds like domain-driven design, it is — at least the parts that are practical in a Rails context. You don't need to adopt the full domain-driven design vocabulary or introduce aggregates and value objects on day one. The useful core of it is thinking about your application as a set of distinct domains with clear responsibilities, and being deliberate about how those domains interact. That mental model alone changes how you structure code.
Whichever approach you use, the discipline is the same: code within a domain should be able to do its work without reaching into another domain's internals. When domains need to communicate, they do it through a defined interface — a service call, an event, a public method on a domain's facade object. Not by querying another domain's tables directly or calling its private methods.
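A minimal sketch of that kind of facade, with hypothetical names. The point is that other domains see one narrow, intention-revealing entry point rather than Billing's internals:

```ruby
# A hypothetical facade for the Billing domain: the one object other
# domains are allowed to call. Models and services stay private to the
# namespace.
module Billing
  class Facade
    Result = Struct.new(:user_id, :plan, :status, keyword_init: true)

    # Callers don't know or care whether this touches one table or five.
    def self.start_subscription(user_id:, plan:)
      # In a real app this would create Billing records inside a
      # transaction; here it just returns a result struct.
      Result.new(user_id: user_id, plan: plan, status: :active)
    end
  end
end

# Another domain calls the facade -- never Billing's tables or models:
result = Billing::Facade.start_subscription(user_id: 42, plan: "pro")
result.status # => :active
```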
Events deserve special attention here. As the application grows, domains will need to react to things that happen in other domains — a new user signs up and billing needs to create a subscription, an order ships and notifications needs to send an email. The tempting approach is direct calls: the registration service calls the billing service, which calls the notification service. This works, but it couples the domains together and makes the registration flow responsible for knowing about every downstream effect.
A better long-term pattern is domain events. ActiveSupport::Notifications works for simple cases. For more structured event systems, a lightweight pub/sub pattern — even just a plain Ruby class that maintains a list of subscribers — keeps domains loosely coupled. The registration domain publishes a user.registered event. Billing subscribes to it. Notifications subscribes to it. Registration doesn't need to know or care what happens downstream. New subscribers can be added without modifying the publishing domain.
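Here is one way the "plain Ruby class that maintains a list of subscribers" version might look. The event names and handlers are illustrative:

```ruby
# A minimal in-process pub/sub bus: a hash of event name => handlers.
class EventBus
  def initialize
    @subscribers = Hash.new { |hash, key| hash[key] = [] }
  end

  # Register a block to run whenever `event_name` is published.
  def subscribe(event_name, &handler)
    @subscribers[event_name] << handler
  end

  # Notify every subscriber. The publisher never knows who is listening.
  def publish(event_name, payload = {})
    @subscribers[event_name].each { |handler| handler.call(payload) }
  end
end

bus = EventBus.new

# Billing and Notifications subscribe independently.
bus.subscribe("user.registered") { |e| puts "billing: subscription for user #{e[:user_id]}" }
bus.subscribe("user.registered") { |e| puts "notifications: welcome email to #{e[:email]}" }

# Registration publishes and moves on.
bus.publish("user.registered", user_id: 42, email: "a@example.com")
```

In production you would likely have handlers enqueue background jobs rather than run inline, so a slow subscriber can't stall the publishing request.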
This takes discipline to maintain. Every shortcut — every "I'll just call that model directly for now" — erodes the boundary. But the payoff compounds over years. When domains are well-separated, teams can work on different parts of the application without stepping on each other, new developers can understand one domain without learning the entire codebase, and large refactors can happen within a single domain without risking the rest of the application.
In a long-lived application, database decisions compound more than almost anything else. A table structure that's easy to work with at launch can become a serious bottleneck three years later, and by then the data volume makes structural changes expensive and risky.
The most important principle is normalization. It's not glamorous advice, but denormalized data is responsible for more long-term pain in Rails applications than almost any other design choice. When the same information is stored in multiple places, it will eventually get out of sync. When columns exist that could be derived from other tables, they become liabilities — every write path needs to keep them updated, and eventually one doesn't. Normalize early. Denormalize later, deliberately, with caching or materialized views when the performance data justifies it.
Foreign key constraints matter. Rails doesn't add them by default, and many applications never add them at all. This means orphaned records accumulate silently over the life of the application — every direct SQL query, every console session, every bulk operation that bypasses ActiveRecord can leave broken references behind. Adding foreign keys from the start is a one-line migration change that prevents years of data cleanup.
Be deliberate about NOT NULL constraints. A column that should always have a value needs a database-level constraint, not just a Rails validation. Validations are bypassed constantly — imports, migrations, console operations, direct SQL — and once null values are in the data, every piece of code that touches that column needs to handle a case that was never supposed to exist.
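Both constraints can be declared in the migration that creates the table. A sketch, with hypothetical table and column names:

```ruby
# Bake the constraints in from the start. Table and columns are
# illustrative.
class CreateSubscriptions < ActiveRecord::Migration[7.1]
  def change
    create_table :subscriptions do |t|
      # `foreign_key: true` adds a database-level constraint, not just a
      # Rails-level association -- orphaned rows become impossible.
      t.references :user, null: false, foreign_key: true

      # NOT NULL at the database level; a Rails validation alone can be
      # bypassed by imports, console sessions, and raw SQL.
      t.string :plan, null: false
      t.datetime :started_at, null: false

      t.timestamps
    end
  end
end
```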
Think carefully before adding serialized columns. JSON columns in PostgreSQL are powerful and sometimes the right choice, but they're also easy to overuse. Data in a JSON column can't be indexed efficiently, can't participate in joins, and can't be validated at the database level. If you're finding yourself querying into JSON columns regularly, the data probably deserves its own table.
Think about table growth early. A table that's growing by a million rows a month needs a different strategy than one that holds a few thousand reference records. Partitioning, archiving, and retention policies are easier to design upfront than to retrofit onto a table with 500 million rows.
Indexing strategy matters more as data grows. At small scale, missing indexes are invisible — queries are fast regardless because the dataset fits in memory. At scale, a missing index on a foreign key or a commonly filtered column is the difference between a 2ms query and a 2-second one. Index every foreign key from the start. Add composite indexes for query patterns that filter on multiple columns together. And review slow query logs periodically — the queries that need indexes change as the application's usage patterns evolve.
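A composite index for a hypothetical "orders for this user with this status" query pattern might look like this, using PostgreSQL's concurrent index creation so the table isn't locked while the index builds:

```ruby
class AddUserStatusIndexToOrders < ActiveRecord::Migration[7.1]
  # CREATE INDEX CONCURRENTLY can't run inside a transaction.
  disable_ddl_transaction!

  def change
    # Matches queries that filter on user_id and status together.
    # Column order follows the most common leading filter.
    add_index :orders, [:user_id, :status], algorithm: :concurrently
  end
end
```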
One more thing: treat migrations as production operations from day one. Use gems like strong_migrations to catch unsafe migration patterns — adding a column with a default value, creating an index without CONCURRENTLY, adding a NOT NULL constraint to an existing column without the safe two-step approach. These migrations work fine on a development database with 100 rows. On a production table with 10 million rows, they lock the table and take the application down. Building safe migration habits early means they're automatic by the time the data volume makes them critical.
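For example, the safe two-step approach to adding NOT NULL on an existing PostgreSQL table might look like this, with hypothetical table and column names. Step one adds an unvalidated CHECK constraint, which takes only a brief lock; step two validates existing rows without blocking writes:

```ruby
class AddNotNullToOrdersStatus < ActiveRecord::Migration[7.1]
  def up
    # Step 1: add the constraint without checking existing rows.
    add_check_constraint :orders, "status IS NOT NULL",
                         name: "orders_status_null_check", validate: false
    # Step 2: validate existing rows; holds only a light lock.
    validate_check_constraint :orders, name: "orders_status_null_check"
  end

  def down
    remove_check_constraint :orders, name: "orders_status_null_check"
  end
end
```

On PostgreSQL 12 and later, once the constraint is validated you can set the column to NOT NULL and drop the check constraint without triggering a full table scan.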
A test suite that works at year one and still works at year five requires intentional design. The natural tendency is for test suites to slow down and become brittle as the application grows, to the point where they become a burden rather than a safety net.
The most important thing is that the tests run fast. A slow test suite changes team behavior — developers stop running it before pushing, CI feedback loops get long enough that people context-switch away, and the suite gradually becomes something that runs in the background rather than something that gates every change. Aim for under 10 minutes with parallelization. Invest in keeping it there.
Test at the right level of abstraction. Heavy reliance on full integration tests — browser-driven tests that boot the entire stack — leads to slow, brittle suites that break for reasons unrelated to the code being tested. These tests have their place for critical user flows, but the bulk of the suite should be faster, more focused tests that exercise business logic directly. A service object that calculates pricing doesn't need a browser to test it.
Structure tests to mirror domain boundaries. If the application code is organized by domain, the tests should be too. This makes it possible to run a subset of the suite that's relevant to the code being changed, and it keeps test files navigable as the suite grows.
Avoid fixtures that create the entire world for every test. Shared test setup that creates 50 records across 12 tables makes tests slow and fragile — a change to any of those models can break tests that have nothing to do with the change. Each test should set up only what it needs. Factory patterns help here, but only if the factories themselves are kept minimal.
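A sketch of what "minimal" means in practice, using the factory_bot gem's DSL. The model and attribute names are hypothetical:

```ruby
# The factory defines only what a valid record requires -- no
# associations or extra records unless a test opts in via a trait.
FactoryBot.define do
  factory :user do
    sequence(:email) { |n| "user#{n}@example.com" }
    name { "Test User" }

    # Extra setup is opt-in, not default.
    trait :with_subscription do
      after(:create) { |user| create(:subscription, user: user) }
    end
  end
end
```

A test that needs a subscription asks for `create(:user, :with_subscription)`; every other test pays only for a bare user.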
Test the boundaries between domains explicitly. If the billing domain depends on an interface from the scheduling domain, test that interface. These boundary tests act as contracts — they'll catch breaking changes when one domain evolves without updating the other.
Write tests that describe behavior, not implementation. A test that asserts "this method calls these three other methods in this order" breaks the moment the implementation changes, even if the behavior is identical. A test that asserts "given this input, the output is correct" survives refactoring. The second kind of test is what makes long-term maintenance possible.
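A small illustration with a hypothetical pricing object and a stdlib Minitest test. The assertion pins down input and output, so any internal refactor that preserves behavior keeps it green, where a mock-heavy test asserting which methods get called would break:

```ruby
require "minitest/autorun"

# A hypothetical pricing calculator.
class PriceCalculator
  def initialize(base_cents:, discount_percent: 0)
    @base_cents = base_cents
    @discount_percent = discount_percent
  end

  def total_cents
    (@base_cents * (100 - @discount_percent) / 100.0).round
  end
end

class PriceCalculatorTest < Minitest::Test
  # Behavior, not implementation: given this input, this output.
  def test_applies_discount_to_base_price
    calc = PriceCalculator.new(base_cents: 10_000, discount_percent: 15)
    assert_equal 8_500, calc.total_cents
  end
end
```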
Gem dependencies are one of the biggest sources of long-term risk in a Rails application, and the risk is almost invisible until it becomes urgent. A gem that's well-maintained today can be abandoned next year. A gem that works perfectly on Rails 7 can be the reason you can't upgrade to Rails 8.
The first principle is restraint. Every gem added to the Gemfile is a long-term commitment. It's a dependency that needs to be maintained, kept compatible with future Rails versions, and monitored for security vulnerabilities. Before adding a gem, it's worth asking: is this solving a problem complex enough to justify the dependency, or could this be handled with a few dozen lines of application code?
When a gem is justified, wrap it. Don't let third-party library interfaces leak throughout the codebase. If the application uses a geocoding gem, don't call it directly from 15 different files — wrap it in an application-level service that the rest of the code calls. When the gem is eventually replaced (and over 10 years, it will be), the change happens in one place. This applies especially to gems that provide core functionality: payment processing, file storage, email delivery, search. The application should depend on its own interface, not the gem's.
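A sketch of the wrapper idea, with a hypothetical Geocoding service. The adapter underneath stands in for whatever gem is actually in use, and the rest of the application only ever touches the wrapper:

```ruby
# The application-owned interface. Callers depend on this class and its
# Location struct, never on the gem's API.
class Geocoding
  Location = Struct.new(:latitude, :longitude, keyword_init: true)

  # Inject the adapter so the underlying gem can be replaced, or faked
  # in tests, without touching callers.
  def initialize(adapter:)
    @adapter = adapter
  end

  # Returns the app's own type, not whatever objects the gem returns.
  def locate(address)
    lat, lng = @adapter.coordinates(address)
    return nil unless lat && lng

    Location.new(latitude: lat, longitude: lng)
  end
end

# A stand-in adapter; a real one would delegate to the geocoding gem.
fake_adapter = Class.new do
  def coordinates(_address)
    [40.7128, -74.006]
  end
end.new

Geocoding.new(adapter: fake_adapter).locate("NYC") # a Geocoding::Location
```

When the gem is swapped out, only the adapter changes; the 15 call sites that use Geocoding never notice.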
Keep the Gemfile small and review it periodically. Gems get added during feature development and rarely get removed. Over the life of a long-lived application, this accumulates into a dependency list where half the gems are no longer needed, some duplicate functionality that Rails has since added natively, and a few are unmaintained with no clear replacement. A periodic review — even just once or twice a year — keeps this under control.
Pin versions deliberately. Running bundle update without understanding what's changing is how subtle breakages get introduced. Update gems individually, read the changelogs, and run the test suite. For critical dependencies — your authentication library, your background job framework, your database adapter — treat updates the way you'd treat a Rails upgrade: read the release notes, test thoroughly, deploy carefully.
Stay current with Rails itself. The single biggest dependency management decision is how frequently the application tracks Rails releases. Teams that upgrade within a few months of each new release keep the upgrade cost small and incremental. Teams that skip versions let the upgrade cost compound until it becomes a project in itself. A yearly Rails upgrade, planned and budgeted, is dramatically cheaper than a multi-version upgrade every three years.
Run bundle audit in CI. Known vulnerabilities in dependencies should be caught automatically, not discovered during an audit. Make it part of the build. If bundle audit fails, the build fails.
Structure alone isn't enough. A well-structured monolith still requires ongoing attention to stay healthy. The patterns described in this article aren't things to set up once and forget — they're disciplines that need to be maintained as the team and codebase evolve.
Enforce domain boundaries in code review. The structure only holds if the team respects it. When a pull request reaches across a domain boundary instead of going through the defined interface, that's worth flagging. Not because the shortcut won't work, but because every shortcut makes the next one easier to justify.
Keep upgrade cadence steady. Ruby and Rails versions, gem dependencies, database versions — staying current on all of them is a continuous cost, but it's always cheaper than catching up. Build upgrade work into the regular development cycle rather than treating it as a separate project.
Monitor what matters. Performance baselines, error rates, background job latency, database query times — these are the vital signs of a long-lived application. Degradation in any of them is easier to address when it's caught early.
Revisit the architecture as the team grows. The domain boundaries that made sense for a five-person team may need to be redrawn for a twenty-person team. The testing strategy that worked at 100 models may need to evolve at 300. Architecture isn't a one-time decision — it's an ongoing conversation.
A Rails monolith that's well-structured, well-tested, and actively maintained isn't a compromise. It's a competitive advantage. While other teams are debugging distributed systems and coordinating deploys across a dozen services, a well-maintained monolith lets you ship features, onboard new developers, and evolve your architecture — all within a single codebase that you actually understand.
We begin every project with a $2,000 flat-fee assessment to evaluate your codebase and deliver a fixed proposal. If you proceed, the assessment fee is deducted from the project cost.
Request an Assessment →