# quay daemon
The daemon is the long-running process that makes 24/7 backups actually 24/7. Run it once and forget it; the launchd / systemd unit handles restart-on-failure + restart-on-login for you.
## Run it manually (development)

```sh
quay daemon
```

That blocks the terminal. You’ll see one log line per fire:
```
2026-05-09T22:35:00.012Z INFO quay: quay daemon starting; tick interval = 60s
2026-05-09T03:00:00.103Z INFO quay: schedule=ddf7a4d9... dialect=postgres output=/Users/zaid/Documents/Quay-backups/prod-pg-2026-05-09T03-00-00Z.sql ok: 84 tables, 12,402 rows
```

Stop with Ctrl-C. State (`last_run_at` + `last_status`) is persisted synchronously after each fire, so a crash mid-loop doesn’t lose progress.
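The docs don’t show the CLI’s persistence code; one common way to get the crash safety described above is write-temp-then-rename, sketched here under an assumed `schedules.json` layout (a JSON array of objects with `id`, `last_run_at`, `last_status` fields; the real schema may differ):

```python
import json
import os
import tempfile

def persist_schedule_state(path, schedule_id, last_run_at, last_status):
    """Update one schedule's state and rewrite the whole file atomically.

    Writing to a temp file and renaming means a crash mid-write can never
    leave schedules.json half-written -- readers see the old file or the
    new one, never a torn mix.
    """
    with open(path) as f:
        schedules = json.load(f)
    for sched in schedules:
        if sched["id"] == schedule_id:
            sched["last_run_at"] = last_run_at
            sched["last_status"] = last_status
    # Temp file must live in the same directory so the rename stays atomic.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(schedules, f, indent=2)
        f.flush()
        os.fsync(f.fileno())   # force bytes to disk before the rename
    os.replace(tmp, path)      # atomic on POSIX and Windows
```

The `os.replace` at the end is the whole trick: the on-disk file flips from old state to new state in one step.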
## Tick interval

Default: 60 seconds. The daemon checks each enabled schedule’s cron expression every tick to see if it should fire (specifically: “did the cron’s next firing after the last tick fall at or before now?”).
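The quoted check can be sketched as follows; `next_fire_after` here is a hypothetical stand-in for a real cron resolver, hard-wired to an every-minute expression:

```python
from datetime import datetime, timedelta

def next_fire_after(t: datetime) -> datetime:
    """Next firing of an every-minute cron ("* * * * *") strictly after t:
    the next whole-minute boundary. A real daemon would delegate this to
    a cron library; this stand-in only illustrates the shape."""
    return t.replace(second=0, microsecond=0) + timedelta(minutes=1)

def should_fire(last_tick: datetime, now: datetime) -> bool:
    # Fire iff the cron's next firing after the last tick falls at or before now.
    return next_fire_after(last_tick) <= now
```

With a 60-second tick each minute boundary is caught exactly once; with `--tick-seconds 5` the extra checks simply return `False` until the boundary passes, which is why shorter ticks don’t fire anything more often.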
Override with `--tick-seconds`:

```sh
quay daemon --tick-seconds 30   # twice a minute
quay daemon --tick-seconds 5    # for testing
```

In production leave it at 60. Lower values just burn CPU without firing anything more often (cron resolves to the minute).
The daemon writes to stdout + stderr. Under launchd / systemd the unit file routes these to log files:
- macOS launchd: `/tmp/quay-daemon.log` + `/tmp/quay-daemon.err`
- Linux systemd: `journalctl --user -u quay-daemon -f`
Tune verbosity via `RUST_LOG`:

```sh
RUST_LOG=quay=debug quay daemon
```

## What runs on each tick

For every `enabled: true` schedule in `schedules.json`:
1. Parse its cron expression. Skip with a warning if invalid.
2. Compute “next fire after the last tick”. If that’s ≤ now, fire.
3. Resolve the connection from `connections.json` (skip with a clear status if not found; the schedule is otherwise left intact).
4. Dispatch to the right dump path:
   - `postgres` → schema + JSON-row table content
   - `mysql` / `mariadb` → table list (extension to full INSERTs is v0.2)
   - `sqlite` → file-copy of the .db (matches the desktop app’s plain SQLite path; the format is byte-identical)
   - `redis` → JSONL of every key + GET value
   - other engines → `skipped: <dialect> not yet supported by quay-cli`
5. Persist `last_run_at` + `last_status` back to `schedules.json`.
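A sketch of one tick over the schedule list; the field names and the injected `is_due` / `dump` callables are illustrative assumptions, not the CLI’s actual schema:

```python
def run_tick(schedules, connections, is_due, dump, now_iso, log=print):
    """One daemon tick. Each schedule runs inside its own try/except, so
    an unreachable DB, expired credentials, or an invalid cron expression
    becomes a failed status on that schedule while the loop carries on."""
    for sched in schedules:
        if not sched.get("enabled"):
            continue
        try:
            if not is_due(sched["cron"]):   # may raise on an invalid expression
                continue
            conn = connections.get(sched["connection_id"])
            if conn is None:
                # Connection profile missing: record it, leave the schedule intact.
                sched["last_status"] = f"failed: connection {sched['connection_id']} not found"
                continue
            sched["last_status"] = dump(conn, sched)  # e.g. "ok: 84 tables, 12,402 rows"
            sched["last_run_at"] = now_iso
        except Exception as exc:
            sched["last_status"] = f"failed: {exc}"
            log(f"schedule {sched.get('id')}: {exc}")
```

Injecting `is_due` and `dump` keeps the loop testable; the real daemon wires in its own cron resolver and per-dialect dump dispatch.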
The daemon never panics. Every per-schedule path is wrapped — a single bad schedule (unreachable DB, expired creds, invalid cron) gets recorded as a failed status and the loop carries on.
## Cloud upload (Pro Plus)

The CLI doesn’t yet upload to S3 / R2 / GCS; that’s the desktop app’s `proplus_cloud_upload` command. v0.2 of the CLI will call the same path. Until then, schedules with a `dest_id` set still run their local dump correctly; the upload step just doesn’t happen.
## Stopping cleanly

```sh
# launchd
launchctl unload ~/Library/LaunchAgents/com.unclez.quay.daemon.plist

# systemd
systemctl --user stop quay-daemon
```

The daemon picks up the SIGTERM on the next tick boundary and exits cleanly; the in-flight tick (if any) finishes first.
## Failure modes worth knowing

| Symptom | Likely cause | Fix |
|---|---|---|
| Daemon starts then immediately exits | `schedules.json` is malformed JSON | Open it and fix it; the daemon refuses to load partial state |
| Schedule shows `failed: connection X not found` | Connection profile renamed/deleted in the desktop app | Re-link the schedule to the new `connection_id` |
| Schedule shows `failed: <db connect error>` | Network / credentials | Test the connection from the desktop app first |
| Same schedule fires twice | Daemon was killed mid-fire and restarted; tick logic re-fires for the missed minute | Acceptable: the dump is idempotent (timestamped output path) |
| Schedule never fires | Cron expression is in your TZ, not UTC | Convert to UTC |
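For the last row: converting a local-time cron to UTC is just an offset shift, but the offset can change with DST, so check both halves of the year. A quick way to compute it with Python’s stdlib `zoneinfo` (the zone name is only an example):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def utc_hour_for_local_cron(hour: int, zone: str, on_date: datetime) -> int:
    """UTC hour matching a daily local-time cron hour on a given date.
    The date matters: DST shifts the zone's offset through the year."""
    local = on_date.replace(hour=hour, minute=0, second=0, microsecond=0,
                            tzinfo=ZoneInfo(zone))
    return local.astimezone(timezone.utc).hour

# "0 3 * * *" meant as 03:00 New York time is 08:00 UTC in winter (EST)
# but 07:00 UTC in summer (EDT) -- a fixed UTC cron can only match one.
```

If the hour must stay fixed in local time year-round, you need two UTC expressions (or a daemon-side timezone option); otherwise pick the offset for the half of the year you care about.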