When SQLite is enough, and when it stops being enough
I spent a week migrating from SQLite to Postgres on a project that did not need it yet. Here is the list I wish I had read first.
I migrated a project from SQLite to Postgres last spring. It cost me a week of focused work and another week of post-migration cleanup. It also did not need to happen for at least six more months. This note is the checklist I wish I had read before booking that week.
SQLite is enough when
- Single-user desktop app, or a sidecar database inside one process.
- Embedded analytics on a file under, say, 10GB.
- Mostly-read workload. SQLite eats reads for breakfast.
- You want zero ops. No service to babysit, one file to back up.
- Test fixtures and CI. An in-memory database per test, or a WAL-mode file for shared fixtures, is unbeatable.
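The fixtures point is easy to see in code. A minimal sketch with Python's standard-library sqlite3 module; the schema and rows are invented for illustration:

```python
import sqlite3

def make_fixture() -> sqlite3.Connection:
    """A throwaway in-memory database per test: no file, no cleanup,
    gone the moment the connection closes."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("lin",)])
    conn.commit()
    return conn

conn = make_fixture()
rows = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Each test gets a fresh database in microseconds, which is the whole appeal over spinning up a Postgres container in CI.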
SQLite stops being enough when
- Real concurrent writes. SQLite serialises writes; you'll feel it past ~50 writes/sec on a real workload.
- Multiple processes touching the same file across a network mount. Don't.
- You need real JSON querying with indexes, partial indexes on JSON paths, or full-text search at scale.
- You want logical replication, point-in-time recovery, role-based access. SQLite doesn't sell tickets to that movie.
- You're past one machine.
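The first bullet is easy to reproduce: SQLite allows exactly one writer at a time, so a second connection trying to start a write transaction waits out its timeout and then errors. A minimal sketch (the table, file path, and timeout value are mine, not from the original migration):

```python
import os
import sqlite3
import tempfile

# One shared database file; isolation_level=None lets us issue BEGIN ourselves.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN IMMEDIATE")  # take the database's single write lock

# A second would-be writer waits up to `timeout` seconds, then gives up.
other = sqlite3.connect(path, isolation_level=None, timeout=0.2)
try:
    other.execute("BEGIN IMMEDIATE")
    locked = False
except sqlite3.OperationalError:  # raised as "database is locked"
    locked = True

writer.execute("ROLLBACK")
```

WAL mode lets readers proceed alongside the writer, but writes still queue behind that one lock, which is where the throughput ceiling comes from.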
What I should have asked first
Three questions.
- How many concurrent writers do I expect in 12 months? If under ten, stay on SQLite.
- Do I need replication? If no, stay.
- Is my pain query complexity or write throughput? If complexity, you might still stay: SQLite's query planner is fine for small data, and most of the "I need Postgres" feeling is fashion.
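The three questions above fit in a few lines; the thresholds are the ones from the text, the function itself is my own sketch:

```python
def stay_on_sqlite(writers_in_12_months: int,
                   needs_replication: bool,
                   pain_is_write_throughput: bool) -> bool:
    """Encode the three-question checklist. Returns True when the
    answer is 'stay on SQLite'."""
    if writers_in_12_months >= 10:   # real concurrent write pressure ahead
        return False
    if needs_replication:            # SQLite won't do this for you
        return False
    # Query complexity alone is not a reason to leave.
    return not pain_is_write_throughput
```

Running a project through this before booking a migration week is the entire point of the post.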
The migration itself was clean. The mistake was the calendar, not the SQL.