Navan Tech Blog
How Navan Engineers Migrated a Large-Scale Mobile Database


Sachin Hiriyanna


Serhii Tiurenko

March 18, 2026
14 minute read

Navan’s customers are frequently on the go, which means Navan needs to move with them. They need to approve expenses, check their policy, book travel, and more — all, of course, right from their phones. That’s why Navan has always focused on building talent-dense mobile teams capable of creating a top-tier experience.

Our commitment extends beyond simply delivering features. We believe that a stable, modern, and scalable architecture is the bedrock of a world-class mobile experience, so we concentrate on establishing a durable and solid foundation. Recently, our Android team wrapped up one of the most ambitious foundational projects in our history: a complete migration of our local data storage from ObjectBox to Android Jetpack’s Room and DataStore.

This wasn’t a “find and replace” operation. It was a multi-quarter marathon that involved the entire Android team, touched nearly every corner of our app, and required us to rethink what “local storage” should look like in a modern Android architecture.

At the end of the journey, we eliminated a major source of native crashes, removed a substantial chunk of native code from our APK, and shipped an architecture that’s easier to test, easier to comprehend, and better aligned with the Android ecosystem.

The “Why”

“The only way to go fast, is to go well.” - Robert C. Martin

Our journey began with a candid audit of our existing data layer. ObjectBox had served us for years, but as our app grew in complexity, the trade-offs started to stack up.

ObjectBox’s original promise was compelling: extremely fast local persistence with a simple object model. In some synthetic benchmarks, it can outperform SQL-based approaches when pushing millions of records per second.

But mobile apps rarely live in the world of benchmarks. In the real app, most of our local reads and writes operate on human-scale datasets. We were paying a stability and complexity tax for “performance” we weren’t actually using.

On top of that, we faced an external forcing function: Google’s 16KB page-size requirement. Coupled with ObjectBox’s large native library footprint and long-term compatibility issues, this requirement made continuing with the existing solution an unsustainable risk. We needed to get compliant, and we needed to do it on our timeline. We were also dissatisfied with ObjectBox’s Android support cycle and how it distorted our planning.

So we set clear goals:

  • Goal 1: Improve stability and predictability.
  • Goal 2: Reduce tech debt and simplify the data layer.
  • Goal 3: Embrace Jetpack libraries for better long-term support and integration.
  • Goal 4: Establish a solid foundation for future features.

How We Organized the Work: One Source of Truth

We knew this would be a longer-term project, and organization would be key. So we meticulously tracked the migration plan in a shared spreadsheet that acted as our single source of truth. It listed every entity that needed attention, as well as its owner, destination (Room, DataStore, in-memory, or elimination), and phase.

This spreadsheet became our “project operating system.” It was where we aligned on scope, surfaced risk early, and avoided duplicative efforts. When dozens of engineers are touching the same core subsystem over multiple quarters, coordination is vital.

Above: A screenshot of the “project operating system” spreadsheet

The spreadsheet also gave us a simple way to maintain motivation. The work could feel endless if you looked at it as “migrate 85 entities,” but it felt achievable when you could watch rows move from “In Progress” to “Done.”

In Slack, that translated into a steady rhythm of progress snapshots and lightweight pressure in the right direction: reminders to keep statuses current, quick calls for reviews, and the occasional rallying cry like “C’mon team, great momentum — two more days to go.” That combination of visible progress and shared urgency helped the marathon feel like a sequence of winnable sprints.

Organization was also key for tackling the project safely, because a “big bang” migration would have been risky, disruptive, and nearly impossible to debug.

Instead, we broke the work down into four phases, which let us continuously ship while reducing risk.

Phase 1: Elimination (The Quick Wins)

We started not by migrating, but by deleting.

Phase 1 focused on identifying and removing unused or unnecessarily cached ObjectBox entities. This instantly reduced the project’s scope, simplified future migrations, and gave the team an early win.

This phase ran for approximately three weeks. In practice, the elimination work also did something else: It forced us to inventory what we actually depended on, rather than what we assumed we depended on.

Phase 2: The First Wave and the New Standard

After we cut away the dead weight, we standardized the destination architecture.

In a team walkthrough session, we aligned on a practical rule:

Room DB for list-type and relational data structures, and DataStore for simpler key-value state.

This gave engineers clarity and reduced decision churn.

One thing that became a recurring theme is that “local storage” is not a single tool. We ended up with a multi-tier storage approach:

  • Room (SQLite) for relational or list-type data, queryable data, and data requiring integrity constraints
  • Proto DataStore for typed key-value storage for small, structured state
  • Preferences/SharedPreferences DataStore for settings-like values
  • In-memory state (ViewModel or scoped caches) for session-only state

This phase also introduced our most important engineering convention: separate database entities from domain models.

Early on, we ran into a practical issue that made the point clear. An engineer discovered Room didn’t play nicely with @Parcelize on entity classes. The conclusion, captured in Slack and reinforced in forum discussion, became a policy: Don’t reuse the same object for both database and app-layer transport. Create a dedicated DbEntity data class for Room and map to a clean domain model.

That separation paid dividends. It reduced annotation leakage into business logic, made testing easier, and allowed us to evolve storage without rewriting the domain.
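As an illustration, the convention looks roughly like this. The names (`TripDbEntity`, `Trip`) are hypothetical, and the Room annotations (`@Entity`, `@PrimaryKey`, and so on) that the real storage class would carry are omitted to keep the sketch self-contained:

```kotlin
// Hypothetical sketch of the DbEntity/domain split. In real code the
// DbEntity class carries Room annotations; the domain model stays clean.

// Storage-layer representation: shaped for the database, free to change
// with the schema.
data class TripDbEntity(
    val id: Long,
    val destinationCode: String, // e.g. an IATA code
    val startEpochMillis: Long,
)

// Domain model: annotation-free, shaped for business logic and the UI.
data class Trip(
    val id: Long,
    val destinationCode: String,
    val startEpochMillis: Long,
)

// Mapping happens once, at the repository boundary.
fun TripDbEntity.toDomain(): Trip =
    Trip(id = id, destinationCode = destinationCode, startEpochMillis = startEpochMillis)

fun Trip.toDbEntity(): TripDbEntity =
    TripDbEntity(id = id, destinationCode = destinationCode, startEpochMillis = startEpochMillis)

fun main() {
    val stored = TripDbEntity(id = 1L, destinationCode = "SFO", startEpochMillis = 0L)
    println(stored.toDomain().destinationCode) // prints "SFO"
}
```

Because only the repository sees both types, a schema change touches `TripDbEntity` and the mapper, and nothing above the repository has to move.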

Phase 3: Heavy Lifting

Once the patterns were established, the project became mostly execution. “Mostly” is doing a lot of work in that sentence.

Phase 3 encompassed the bulk of the complex entities — data structures deeply embedded in core user flows like trip management, booking, and profile-related caches.

This is also where day-to-day migration friction appeared at scale. Issues included multiple engineers bumping DB versions concurrently; schema JSON files generated and committed across branches; rebase conflicts in NavanRoomDatabase and DaoModule; and tests that passed in isolation but failed when branch histories interleaved.

A simple but effective internal summary circulated in Slack captured the “rebase playbook.” Its steps included:

  • Delete newly generated schema JSON files before rebasing.
  • Rebase and resolve conflicts carefully in NavanRoomDatabase and DaoModule.
  • Build to regenerate schema files for the newest version.
  • Update the migration integrity test for the new version.

It sounds procedural because it was procedural. But that’s exactly what we needed, and we could measure our results: fewer merge stalls and fewer “we’re stuck in schema conflicts” interruptions.

Phase 4: The Final Push

Phase 4 was where the remaining “hard edges” lived. The last entities were often the ones with legacy constraints and cross-team dependencies.

The final stretch culminated with the last PR and the announcement that the codebase was fully ObjectBox-free.

The Slack message that marked the finish line captured both the technical and organizational significance. It read, “The project eliminated native library crashes and hit the 16KB deadline in time for the Nov 1 target.”

Of course, when a project ends, the post-mortem begins. And along the way, we learned a lot.

The Challenges: Where Real Learning Happens

No project at this scale ships without surprises. The difference is whether you treat surprises as interruptions or as inputs to improve the system. For us, surprises became a catalyst to add better safety nets and better engineering discipline.

The .mdb Dilemma: Migrating a Legacy Data File

One of the trickiest parts of the project had nothing to do with ObjectBox directly. It was about preserving pre-populated data that the app relied on from its very first launch.

We had a legacy travelData.mdb (and related files) that contained static datasets such as airline names, IATA codes, and other reference information that wasn’t always reliably available via API during startup or offline flows.

This data was not optional. In Slack, we explicitly called it out as “mandatory.” That meant we couldn’t just switch storage technologies; we had to preserve behavior. In broad terms, we broke it down into three steps:

  • Launch the app and use the ObjectBox admin tooling to inspect the legacy entities.
  • Export the data you need as JSON. Import that JSON into the new Room-backed database (travelData.db) during a controlled migration or pre-population step.
  • Validate the behavior with migration tests, checking that the pre-populated data is present, new data can override preloaded data safely, upgrades work (lower → higher DB version), and local branch switches don’t brick dev builds (higher → lower DB version).
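The export-and-import step can be sketched in plain Kotlin. Here the exported JSON is represented as already-parsed rows, and `AirlineDbEntity` is a hypothetical Room entity (annotations omitted); the real code would parse the exported file and insert via a DAO inside a controlled pre-population step:

```kotlin
// Hypothetical sketch: map exported legacy rows into new Room-backed
// entities. Rows are modeled as already-parsed maps so the sketch stays
// self-contained; real code would deserialize the exported JSON first.

data class AirlineDbEntity(val iataCode: String, val name: String)

fun mapExportedRows(rows: List<Map<String, String>>): List<AirlineDbEntity> =
    rows.mapNotNull { row ->
        // Skip malformed rows rather than aborting the whole import.
        val code = row["iataCode"] ?: return@mapNotNull null
        val name = row["name"] ?: return@mapNotNull null
        AirlineDbEntity(iataCode = code, name = name)
    }

fun main() {
    val exported = listOf(
        mapOf("iataCode" to "LH", "name" to "Lufthansa"),
        mapOf("name" to "row missing its code"), // dropped during mapping
    )
    println(mapExportedRows(exported).size) // prints 1
}
```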

This was tedious and at times frustrating. But it’s the kind of work that separates “it compiles” from “it behaves correctly for users across upgrades.”

The Great DB Versioning Crisis

Mid-project, we hit our most stressful incident, when a release build started crashing due to a SQLiteException. The bug was especially painful because it appeared “release-only,” which usually means you’re chasing build variants, minification, or environment differences.

The root cause, uncovered through a collaborative Slack debugging session, was not a single bad migration. It was a migration order divergence across branches: the schema history that produced a given version number differed between the release and main branches.

In short, the release branch build had migrated the database to version 10 along one schema path. Main had reached “version 10” as well, but with a different schema history. When a user updated from the release build to one built from main, Room saw “version 10” and attempted the next migration step, but the underlying schema wasn’t what it expected.

Serhii, Staff Android Engineer and co-author of this post, summarized the failure mode in Slack as, “8.54 build migrates db to version 10 … Then, when you install 8.55 it completely doesn’t understand what is happening. Because DB version is 10, it performs migration 10->11… But it fails because it can’t find a table that 11 already expects to be present from version 10.”
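The failure mode can be illustrated with a toy model of schema histories. The table names and the idea that migration 10->11 touches a “policies” table are invented for illustration; only the version numbers come from the incident:

```kotlin
// Toy illustration of branch-history divergence: both branches reach
// "version 10", but with different sets of tables. The 10 -> 11 migration
// written on main assumes main's version-10 schema.

val releaseV10Tables = setOf("trips", "expenses")          // release branch path
val mainV10Tables = setOf("trips", "expenses", "policies") // main branch path

// Hypothetical migration 10 -> 11: alters "policies", so it requires it.
fun migrate10To11(tables: Set<String>): Set<String> {
    require("policies" in tables) { "no such table: policies" }
    return tables + "policy_rules"
}

fun main() {
    migrate10To11(mainV10Tables) // succeeds on main's schema history
    val failed = runCatching { migrate10To11(releaseV10Tables) }.isFailure
    println(failed) // prints true: same version number, incompatible schema
}
```

The version number alone says nothing about which history produced it, which is exactly why validating the chain in isolation was not enough.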

Two subtle lessons emerged:

  • Integrity tests can validate a migration chain in isolation, but they may not catch branch history divergence unless you explicitly test an upgrade path from a real released build to main.
  • A shared database schema is a shared system. Branching strategies and release processes can create real production states that don’t exist in any single PR.

The fix was operationally careful: revert the problematic sequence, regenerate schemas to match the released version’s reality, then re-apply changes in the correct order and cherry-pick them to the release branch.

It wasn’t glamorous, but it was safe. And it drove the next major investment: stronger automation.

Room Entities vs. Domain Models

We learned quickly that Room’s compile-time guarantees come with constraints. Mixing concerns (Parcelize, shared DTOs, reusing domain models as entities) can create compilation issues and architectural coupling.

Our adopted best practice was simple and effective:

Room entities live as dedicated DbEntity classes. Domain models remain annotation-free. We map between them at the repository boundary.

This became a cornerstone of the “new standard” we used throughout the project.

The Pain of Rebasing

With many engineers touching schema simultaneously, rebase friction was constant. The shared instructions in Slack helped keep progress moving.

The meta-lesson: Sometimes, the most valuable engineering work is turning “tribal knowledge” into a repeatable playbook.

Build Hygiene: Cleaning Stale KSP Outputs

Another day-to-day pain point showed up as the migration scaled: Generated sources can go stale. Between Room schema bumps, KSP-generated code, KAPT stubs, and databinding artifacts, we hit cases where builds would fail for reasons that had nothing to do with the changes in a given PR. In a multi-quarter migration where dozens of PRs touch annotation-driven code generation, “it works on one machine” can degrade into “the build directory is haunted.”

To make this deterministic, we added an enhanced clean hook in the root build.gradle that proactively deletes the most common stale generation directories for modules using KSP, KAPT, or databinding. In practice, this meant that running the normal clean task also clears build/generated/ksp, build/tmp/ksp, build/generated/source/kapt*, and the databinding intermediate folders. That prevented stale factory and compilation issues from leaking across branch switches and large rebases. It’s not elegant, but it removed an entire class of intermittent build failures and kept engineers focused on the actual migration work.
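A sketch of such a clean hook is below, in Gradle Kotlin DSL for illustration; the actual hook lives in the root build.gradle, and the exact directory list (especially the databinding path) may differ in the repo:

```kotlin
// Root build script sketch: extend every module's regular `clean` task to
// also delete directories where stale KSP/KAPT/databinding outputs
// accumulate across branch switches and large rebases.
subprojects {
    tasks.matching { it.name == "clean" }.configureEach {
        doLast {
            listOf(
                "build/generated/ksp",
                "build/tmp/ksp",
                "build/generated/source/kapt",
                "build/generated/source/kaptKotlin",
                "build/intermediates/data_binding_base_class_source_out",
            ).forEach { path ->
                project.projectDir.resolve(path).deleteRecursively()
            }
        }
    }
}
```

Hooking `doLast` onto the existing `clean` task, rather than adding a new task, means nobody has to remember a special command: the workflow engineers already use picks up the extra hygiene for free.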

Architectural Rigor Under Pressure

Late in the project, we noticed a pattern of misuse in a generic data table/repository. It was intended for static or generic keyed data, but it started to be used for dynamic cache-like records. Serhii called this out directly, and we initiated cleanup PRs to prevent orphan records and enforce intended usage.

This moment was a reminder of a broader principle: Migrations are an opportunity to fix architecture, not just move code. Short-term convenience accumulates into long-term maintenance costs if you don’t draw boundaries.

Our Safety Net: Automated Integrity Checks

That versioning crisis forced a key question: How do we prevent this class of failure from ever happening again?

Our answer was a two-layer safety net: schema immutability checks in CI and end-to-end migration verification tests.

CI Schema Immutability Check

We added a simple check to CI (Bitrise) to block any PR that modifies or deletes existing schema JSON files. Schema files are history. You don’t rewrite history; you append.

Here’s a simplified version of the logic (final version in repo/CI may differ):

#!/usr/bin/env bash
# Simple Room schema integrity check: fail if any existing schema JSON
# file was modified or deleted (additions are allowed).
CHANGES=$(git diff --name-status "origin/${BASE_BRANCH}" HEAD -- "$SCHEMA_DIR")
if echo "$CHANGES" | grep -E "^[MD]" > /dev/null; then
  echo "FAILURE: Existing Room schema files have been modified or deleted"
  exit 1
else
  echo "SUCCESS: Only additions to Room schema files detected"
fi

This check is intentionally blunt. It prevents entire classes of subtle mistakes.

We also created and maintained an instrumented migration integrity test that validates the full migration path, version by version.

Conceptually, that does three things:

  • Creates a database of an old version.
  • Inserts representative test data for that version.
  • Applies migrations sequentially up to the latest version and validates schema and data expectations.

A simplified version of the test looks like this:

@Test
fun verifyDbMigration() {
    val lastVersion = DB_VERSION
    helper.createDatabase(TEST_DB, 1).apply {
        // insert data for v1
        close()
    }
    for (version in 2..lastVersion) {
        helper.runMigrationsAndValidate(TEST_DB, version, true).apply {
            // insert version-specific seed data
            close()
        }
    }
}

This safety net gives us confidence that a schema bump is backed by a real migration path.

As noted earlier, it doesn’t automatically cover the “released build to next release” divergence scenario unless you simulate that exact upgrade path. It’s an area we marked as a follow-up improvement.

By the Numbers: What We Gained

Large migrations can feel abstract until you quantify them. Here are the metrics we’re most proud of.

Native library footprint removed: We eliminated approximately 5.7 MB of native database libraries across ABIs.

Method count reduction: We reduced method count attributed to the database dependency surface by roughly 2,350 methods.

Compliance and future readiness: We achieved 16KB compliance for the data layer on Sep 9, 2025, nearly three months ahead of the deadline, removing a major blocker for Android’s future memory/page-size constraints. A few other blockers to full compliance remained after this milestone; we removed those as well to reach complete compliance.

Developer velocity and testability: This one is harder to reduce to a single number, but the qualitative improvement was immediate:

  • Room gave us compile-time validation for queries.
  • The DbEntity/domain split made business logic easier to unit test.
  • The move away from native database initialization reduced startup risk and made failures more diagnosable.
  • Where we do have hard numbers, build times showed the clearest gains:

  • Clean Build: ~15% faster. Removing the ObjectBox Gradle plugin overhead from every feature module reduced configuration time.
  • Incremental Build: ~25% faster. We moved from distributed annotation processing in every module to centralized Room KSP generation in a single data module.
  • CI Build: ~20% faster. Faster APK packaging due to the 5.7MB native library removal and optimized Bitrise caching for the simplified dependency graph.

Operational Wins:

  • Zero downtime recorded during the entire 188-day migration.
  • Zero data loss incidents across 85 entities.

The Secret Sauce: Culture and Collaboration

At the end of the day, what made this project succeed wasn’t a single technical decision. It was the way the team executed by following these principles:

  • Shared ownership: From day one, this was a team effort. Tickets were distributed broadly. Engineers stepped up to pick up work outside their usual feature areas. When someone became unavailable, others volunteered to take tasks that would help maintain momentum.
  • Proactive communication: We used Slack as the project’s heartbeat: weekly status snapshots, progress reminders, requests for reviews, and frequent “are you blocked?” check-ins. This created transparency and allowed the team to quickly self-correct.
  • Adaptability: The project spanned timeline changes and organizational shifts. Instead of pausing, the team adjusted. Ownership moved. Schedules shifted. The plan stayed alive.
  • Commitment to thoroughness: Near the finish line, a final audit found a few entities that were left out or under-documented. Rather than compromise, we brought them into scope and completed them. The goal was always 100% completion.
  • Leadership and trust: Leading a migration like this is less about being the only person with answers and more about maintaining clarity and tempo. The work itself was distributed across the team, but the risk was centralized. One bad schema sequence or one missed upgrade path could impact everyone. So we ran a loop for months: keep the plan visible, check the pulse, remove friction, and keep momentum without burning people out.

Most weeks looked the same — in the best possible way. We posted progress snapshots, called out what was blocked, and asked the simplest question that unlocks real progress: “Are we stuck anywhere? Let’s surface it early.” When someone missed a forum session or a date commitment, we’d follow up directly to confirm whether the plan was still realistic. When team availability shifted, we didn’t pause the migration; we rebalanced. If an owner was out, we found volunteers. If a task lingered, we reassigned it. The goal wasn’t perfect forecasting; it was keeping the system moving forward.

We also learned that migrations don’t scale on willpower alone; they scale on repeatable processes. Best practices included:

  • When rebasing pain became a weekly time sink, we turned it into a checklist and circulated it in Slack.
  • When Room friction showed up (like the Parcelize compilation issue), we didn’t just fix one PR; we turned it into a convention: DbEntity classes live in the data layer, domain models stay clean, and we pass IDs through the app instead of passing database objects around.
  • When the release-branch versioning crisis hit, the leadership move wasn’t heroics; it was coordination. We stayed in the thread, aligned on the root cause, picked the safest remediation plan for the release branch, and then invested in automation so we wouldn’t relive that night again.

In hindsight, that rhythm is what made the project feel survivable. People knew what “good” looked like. They knew where to ask for help. And they knew that if something got messy, we’d treat it as a shared engineering problem, not an individual failure. You finish a marathon migration not by sprinting harder, but by keeping the team aligned, unblocked, and confident enough to ship the next PR.

Conclusion: Stronger Foundations, Faster Future

This migration was indeed a marathon. But it moved us to a place where the benefits will compound over time: a cleaner architecture, fewer native failure modes, better testability, and a foundation aligned with the Android ecosystem.

This is the kind of problem Navan engineers thrive on. If solving complex technical challenges in a highly collaborative environment sounds like your kind of work, too, you’ll feel right at home at Navan.

Interested in working at Navan? Explore our job openings.



This content is for informational purposes only. It doesn't necessarily reflect the views of Navan and should not be construed as legal, tax, benefits, financial, accounting, or other advice. If you need specific advice for your business, please consult with an expert, as rules and regulations change regularly.
