Backup + restore formats

SQL export is Quay’s top-priority surface — the place where every other database client falls down. Half-broken dumps, FK constraint errors on restore, missing triggers, wrong charset, lost AUTO_INCREMENT. The standard for “best” here is first-try restorable: dump from Quay, restore into a fresh database, and it just works — no manual SQL editing.

Every backup includes (with toggles to opt out individually):

  1. CREATE DATABASE + character set + collation + default schema
  2. Sequences (Postgres) before the tables that depend on them
  3. Tables (DDL only, no FKs yet) in dependency order
  4. Data, again in FK order, snapshotted in a single read-consistent transaction
  5. Indexes (after data — ~100× faster than building during INSERT)
  6. Foreign keys, applied last so circular FKs don’t block
  7. Triggers with proper DELIMITER // (MySQL) or quoting (Postgres)
  8. Views in dependency order — a view that selects from another view is emitted after it
  9. Stored procedures + functions + events
  10. Re-enable FK checks
  11. Verification block (counts per table, optional)

The FK-aware ordering is computed from KEY_COLUMN_USAGE (MySQL), pg_constraint (Postgres), or the equivalent system catalog per engine. Tables with no outgoing foreign keys come first; tables that reference others come after. Cycles are broken with a defer-FK pattern — Quay emits the data with foreign-key checks disabled, then re-enables and re-validates at the end.
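A minimal sketch of that ordering pass (illustrative Python, not Quay's actual code): given the (child, parent) FK pairs read from the system catalog, topologically sort the tables, and when a cycle blocks progress, defer one table's remaining FKs to the end-of-dump step that runs with FK checks disabled.

```python
from collections import defaultdict, deque

def restore_order(fks):
    """fks: list of (child_table, parent_table) pairs, e.g. from
    KEY_COLUMN_USAGE. Returns (ordered_tables, deferred), where
    `deferred` holds the FK edges broken to resolve cycles — those
    constraints get applied after data, with FK checks re-enabled."""
    tables = set()
    deps = defaultdict(set)    # child -> parents it still waits on
    rdeps = defaultdict(set)   # parent -> children that reference it
    for child, parent in fks:
        tables.update((child, parent))
        if child != parent:    # self-references never block ordering
            deps[child].add(parent)
            rdeps[parent].add(child)
    order, deferred = [], []
    ready = deque(sorted(t for t in tables if not deps[t]))
    while tables - set(order):
        while ready:
            t = ready.popleft()
            order.append(t)
            for c in sorted(rdeps[t]):
                deps[c].discard(t)
                if not deps[c] and c not in order and c not in ready:
                    ready.append(c)
        remaining = sorted(tables - set(order))
        if remaining:          # cycle: defer one table's remaining FKs
            t = remaining[0]
            deferred.extend((t, p) for p in sorted(deps[t]))
            deps[t].clear()
            ready.append(t)
    return order, deferred
```

With `orders → customers` and `order_items → orders`, this emits customers, then orders, then order_items; a two-table cycle comes back with one edge in `deferred`, matching the defer-FK pattern described above.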

  • MySQL: --single-transaction style. START TRANSACTION WITH CONSISTENT SNAPSHOT (InnoDB), so the dump sees one logical state even if the database is being written to during the export.
  • Postgres: BEGIN; SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; ... COMMIT;
  • SQLite: BEGIN IMMEDIATE for the duration of the dump.
  • Cloud / managed engines (Snowflake / BigQuery / Databricks): no per-statement transaction model, so the dump is best-effort — Quay logs warnings for any table whose row count changed mid-dump.

Beyond consistency, Quay preserves engine-specific schema details in the dump:

  • Charset / collation at every level (database, table, column)
  • Engine + row format (InnoDB / MyISAM / Aria; ROW_FORMAT=COMPRESSED)
  • Partitions (range, list, hash) with their definitions
  • AUTO_INCREMENT / sequence values at the time of dump
  • Generated columns with their expressions
  • Comments on tables and columns
  • Triggers with proper delimiters so the SQL parses on restore
  • Binary data hex-encoded (X'…'), not base64

When you open a .sql file in Quay, you don’t run it — you get a preview:

Restore plan: my-backup-2026-05-01.sql
47 tables • All FKs reference declared objects
8.2M rows • 4 indexes will be created
12 stored procedures • 1 trigger uses CURRENT_TIMESTAMP() (ok)
No DROP TABLE statements
Estimated time: 3 min on this hardware
Drag the file into a session to restore here →

The preview parses the file (handles DELIMITER //, multi-statement triggers, dollar-quoted PG functions). DROP / TRUNCATE statements get flagged in red. The preview is read-only — restore happens when you explicitly drag the file into a session or click Restore.
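Splitting a dump into statements is the part the preview has to get right. Here is a deliberately minimal sketch (not Quay's parser) that respects single-quoted strings and Postgres dollar quoting, so a semicolon inside a `$$ … $$` function body doesn't end the statement:

```python
import re

def split_statements(sql: str):
    """Split SQL on top-level semicolons, skipping over single-quoted
    strings and dollar-quoted bodies ($$ ... $$ or $tag$ ... $tag$).
    Illustrative only — a real parser also handles comments,
    DELIMITER //, backslash escapes, etc."""
    stmts, buf, i, n = [], [], 0, len(sql)
    while i < n:
        ch = sql[i]
        if ch == "'":                           # single-quoted string
            j = i + 1
            while j < n:
                if sql[j] == "'" and not (j + 1 < n and sql[j + 1] == "'"):
                    break                        # closing quote
                j += 2 if sql[j] == "'" else 1   # '' is an escaped quote
            buf.append(sql[i:j + 1]); i = j + 1
        elif ch == "$" and (m := re.match(r"\$[A-Za-z_]*\$", sql[i:])):
            tag = m.group(0)                     # dollar-quoted body
            end = sql.find(tag, i + len(tag))
            end = n if end == -1 else end + len(tag)
            buf.append(sql[i:end]); i = end
        elif ch == ";":                          # top-level terminator
            stmt = "".join(buf).strip()
            if stmt:
                stmts.append(stmt)
            buf = []; i += 1
        else:
            buf.append(ch); i += 1
        continue
    tail = "".join(buf).strip()
    if tail:
        stmts.append(tail)
    return stmts
```

A function body like `AS $$ SELECT 1; $$` stays in one statement, and `SELECT 'a;b'` is not split at the quoted semicolon.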

For sensitive backups, switch the export type to .qenc — the dump is encrypted with AES-256-GCM, using a key derived from your passphrase with Argon2id. See Encrypted exports.

Multi-GB dumps are streamed in 1 MB chunks; the import never loads the whole file into memory. Live progress shows bytes parsed, statements executed, and failures (with an optional skip-and-collect mode, so one bad statement doesn’t kill the import).
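The streaming shape described above — fixed-size chunks, statements executed as they complete, failures collected rather than fatal — can be sketched like this (illustrative Python; the naive split on `;` at end of line is an assumption for brevity, not Quay's parser):

```python
import sqlite3

def iter_statements(path, chunk_size=1 << 20):
    """Yield statements from a .sql file read in 1 MiB chunks,
    never holding the whole file in memory. Splits on ';' at end
    of line, ignoring quoting -- a sketch of the streaming shape,
    not a full SQL parser."""
    buf = ""
    with open(path, "r", encoding="utf-8") as f:
        while chunk := f.read(chunk_size):
            buf += chunk
            while (cut := buf.find(";\n")) != -1:
                stmt, buf = buf[:cut].strip(), buf[cut + 2:]
                if stmt:
                    yield stmt
    tail = buf.strip().rstrip(";").strip()
    if tail:                       # final statement without a newline
        yield tail

def run_import(conn, path):
    """Skip-and-collect: a bad statement is recorded, not fatal."""
    failures = []
    for stmt in iter_statements(path):
        try:
            conn.execute(stmt)
        except sqlite3.Error as exc:
            failures.append((stmt, str(exc)))
    conn.commit()
    return failures
```

Running this against a dump with one malformed statement imports every other row and returns the single failure with its error text, instead of aborting mid-file.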

By default, backups are written to:

~/Documents/Quay Backups/<connection-name>/<UTC stamp>.sql

Configurable globally in Settings → Behavior, and overridable per-export.