
quay daemon

The daemon is the long-running process that makes 24/7 backups actually 24/7. Run it once and forget it; the launchd / systemd unit handles restart-on-failure + restart-on-login for you.

quay daemon

That blocks the terminal. You’ll see one log line per fire:

2026-05-09T22:35:00.012Z INFO quay: quay daemon starting; tick interval = 60s
2026-05-10T03:00:00.103Z INFO quay: schedule=ddf7a4d9... dialect=postgres output=/Users/zaid/Documents/Quay-backups/prod-pg-2026-05-10T03-00-00Z.sql ok: 84 tables, 12,402 rows

Stop with Ctrl-C. State (last_run_at + last_status) is persisted synchronously after each fire, so a crash mid-loop doesn’t lose progress.
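The synchronous persist step can be sketched as follows. This is a minimal illustration, not quay's actual (Rust) implementation: it assumes schedules.json is a JSON array of objects with id, last_run_at, and last_status fields, and persist_state is a hypothetical name. The key idea is the write-temp-then-rename pattern, so a crash mid-write never leaves schedules.json half-written.

```python
import json
import os
import tempfile


def persist_state(path, schedule_id, last_run_at, last_status):
    """After each fire, write last_run_at/last_status back atomically:
    write to a temp file in the same directory, then rename over the
    original (os.replace is atomic on POSIX)."""
    with open(path) as f:
        schedules = json.load(f)
    for s in schedules:
        if s["id"] == schedule_id:
            s["last_run_at"] = last_run_at
            s["last_status"] = last_status
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(schedules, f, indent=2)
    os.replace(tmp, path)  # crash before this line leaves the old file intact
```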

The tick interval defaults to 60 seconds. On each tick the daemon checks every enabled schedule’s cron expression to see if it should fire (specifically: “did the cron’s next firing after the last tick fall at or before now?”).
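The tick rule can be illustrated with a deliberately simplified schedule: “fire at minute m of every hour”, over epoch seconds. Here next_fire_after stands in for a real cron library (which quay presumably uses internally); the predicate itself is the rule quoted above.

```python
def next_fire_after(t, minute):
    """Next epoch-seconds timestamp strictly after t that lands on
    minute `minute` of some hour, at second 0. A stand-in for real
    cron evaluation, for illustration only."""
    hour_start = t - (t % 3600)
    candidate = hour_start + minute * 60
    if candidate <= t:          # already passed (or exactly at) t: roll to next hour
        candidate += 3600
    return candidate


def should_fire(last_tick, now, minute):
    """Did the schedule's next firing after the previous tick fall
    at or before now? If yes, this tick fires it."""
    return next_fire_after(last_tick, minute) <= now
```

Note that because a firing is consumed by whichever tick first sees it, ticking faster than once a minute never produces extra fires; it only re-asks the same question sooner.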

Override with --tick-seconds:

quay daemon --tick-seconds 30 # twice a minute
quay daemon --tick-seconds 5 # for testing

In production leave it at 60. Lower values just burn CPU without firing anything more often (cron resolves to the minute).

The daemon writes to stdout + stderr. Under launchd / systemd the unit file routes these to log files:

  • macOS launchd: /tmp/quay-daemon.log + /tmp/quay-daemon.err
  • Linux systemd: journalctl --user -u quay-daemon -f

Tune verbosity via RUST_LOG:

RUST_LOG=quay=debug quay daemon

On each tick, for every schedule with enabled: true in schedules.json:

  1. Parse its cron expression. Skip with a warning if invalid.
  2. Compute “next fire after the last tick”. If it falls at or before now, fire.
  3. Resolve the connection from connections.json (skip with a clear status if not found — the schedule is otherwise left intact).
  4. Dispatch to the right dump path:
    • postgres → schema + JSON-row table content
    • mysql / mariadb → table list (extends to full INSERT statements in v0.2)
    • sqlite → file-copy of the .db (matches the desktop app’s plain SQLite path — the format is byte-identical)
    • redis → JSONL of every key + GET value
    • other engines → skipped: <dialect> not yet supported by quay-cli
  5. Persist last_run_at + last_status back to schedules.json.

The daemon never panics. Every per-schedule path is wrapped — a single bad schedule (unreachable DB, expired creds, invalid cron) gets recorded as a failed status and the loop carries on.
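The per-schedule wrapping can be sketched like this. All names here are illustrative, not quay’s actual internals, and the cron check from step 2 is elided; what the sketch shows is the isolation property: one bad schedule records a failed status and the loop carries on.

```python
def run_tick(schedules, connections, dump_fns, now):
    """One tick over all schedules. Any failure is caught per schedule
    and recorded in last_status; it never aborts the loop."""
    for s in schedules:
        if not s.get("enabled"):
            continue
        try:
            conn = connections.get(s["connection_id"])
            if conn is None:
                # skip with a clear status; the schedule is left intact
                s["last_status"] = f"failed: connection {s['connection_id']} not found"
                continue
            dump = dump_fns.get(conn["dialect"])
            if dump is None:
                s["last_status"] = f"skipped: {conn['dialect']} not yet supported by quay-cli"
                continue
            dump(conn)
            s["last_status"] = "ok"
        except Exception as e:
            # unreachable DB, expired creds, etc.: record and carry on
            s["last_status"] = f"failed: {e}"
        s["last_run_at"] = now
    return schedules
```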

The CLI doesn’t yet upload to S3 / R2 / GCS — that’s the desktop app’s proplus_cloud_upload command. v0.2 of the CLI will call the same path. Until then: schedules with a dest_id set still run their local dump correctly; the upload step just doesn’t happen.

# launchd
launchctl unload ~/Library/LaunchAgents/com.unclez.quay.daemon.plist
# systemd
systemctl --user stop quay-daemon

The daemon picks up the SIGTERM on the next tick boundary and exits cleanly — the in-flight tick (if any) finishes first.

Common symptoms, likely causes, and fixes:

  • Daemon starts then immediately exits. Likely cause: schedules.json is malformed JSON. Fix: open and repair it; the daemon refuses to load partial state.
  • Schedule shows failed: connection X not found. Likely cause: the connection profile was renamed or deleted in the desktop app. Fix: re-link the schedule to the new connection_id.
  • Schedule shows failed: <db connect error>. Likely cause: network or credentials. Fix: test the connection from the desktop app first.
  • Same schedule fires twice. Likely cause: the daemon was killed mid-fire and restarted, and the tick logic re-fires the missed minute. This is acceptable: the dump is idempotent because the output path is timestamped.
  • Schedule never fires. Likely cause: the cron expression is written in your local timezone, not UTC. Fix: convert it to UTC.