Snowflake · BigQuery · Databricks
Three cloud warehouses, all Pro tier. Connecting is more involved than for the SQL family — these lean on cloud-style credentials (service-account keys, access tokens, key pairs) rather than plain user/password.
Snowflake
| Field | Notes |
|---|---|
| Account locator | xy12345 or xy12345.us-east-1 |
| User · Password | SQL auth (key-pair auth supported via private_key field) |
| Warehouse | Optional — the default warehouse to USE on every query |
| Role | Optional — the default role |
| Database · Schema | Default catalog for the session |
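For orientation, these fields map one-to-one onto a snowflake-connector-python connection. A minimal sketch, where every value is a placeholder rather than anything Quay ships:

```python
import snowflake.connector

# All values below are placeholders for illustration.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",  # account locator
    user="QUAY_USER",
    password="...",               # or private_key=... for key-pair auth
    warehouse="COMPUTE_WH",       # optional: default warehouse to USE
    role="ANALYST",               # optional: default role
    database="DEMO_DB",           # default catalog for the session
    schema="PUBLIC",
)
```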
Warehouse suspend/resume (Pro deep panel): see the Snowflake suspend/resume guide, under the engine's deep features. The panel lists every warehouse with its STATE and SIZE; one-click Suspend stops a warehouse consuming credits immediately, and one-click Resume warms it back up.
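Under the hood these panel actions are plain Snowflake SQL. A sketch of the equivalent statements, with hypothetical connection values and warehouse name:

```python
import snowflake.connector

conn = snowflake.connector.connect(account="xy12345", user="QUAY_USER", password="...")
cur = conn.cursor()

# One row per warehouse; the result set includes name, state, and size columns.
cur.execute("SHOW WAREHOUSES")
print(cur.fetchall())

# Suspend stops credit consumption immediately; Resume warms the warehouse back up.
cur.execute("ALTER WAREHOUSE COMPUTE_WH SUSPEND")
cur.execute("ALTER WAREHOUSE COMPUTE_WH RESUME IF SUSPENDED")
```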
Native dump: SQL dump grammar adjusted to Snowflake's CREATE OR REPLACE TABLE … syntax plus Snowflake-specific data types (VARIANT, OBJECT, ARRAY). The dump is restorable on Snowflake or, with a --dialect=snowflake-to-postgres flag, translated for restore on PG.
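To make the dialect concrete, here is the shape of DDL such a dump might contain, replayed through the Python connector. The table and columns are invented for illustration:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="QUAY_USER", password="...",
    database="DEMO_DB", schema="PUBLIC",
)

# Hypothetical dump output: CREATE OR REPLACE plus semi-structured types.
conn.cursor().execute("""
    CREATE OR REPLACE TABLE events (
        id      NUMBER,
        payload VARIANT,  -- semi-structured document
        tags    ARRAY,
        attrs   OBJECT
    )
""")
```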
BigQuery
| Field | Notes |
|---|---|
| Project ID | GCP project the dataset lives in |
| Service account JSON | The full JSON key blob (paste it; stored in connections.json like any other credential) |
| Default dataset | Optional |
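The equivalent google-cloud-bigquery client setup, as a sketch; the key-file path is a placeholder for the pasted JSON blob:

```python
from google.cloud import bigquery

# Placeholder path — in Quay the blob lives in connections.json instead.
client = bigquery.Client.from_service_account_json("service-account.json")
print(client.project)  # the GCP project the datasets live in
```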
Dry-run cost estimator (Pro deep panel): paste a query in the panel, click Estimate. BigQuery's dry-run API returns total_bytes_processed; Quay computes the on-demand cost ($5 / TiB scanned) and shows it as a card. Useful for knowing whether a query is going to cost $0.04 or $400 before you click Run.
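The estimate boils down to a single dry-run call plus arithmetic. A minimal sketch, using the $5 / TiB rate quoted above (check current pricing):

```python
from google.cloud import bigquery

def estimate_cost_usd(client: bigquery.Client, query: str) -> float:
    """Dry-run a query and convert bytes scanned to on-demand dollars."""
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(query, job_config=job_config)  # returns immediately
    tib_scanned = job.total_bytes_processed / 2**40
    return tib_scanned * 5.0  # $5 per TiB scanned, per the rate above

# e.g. estimate_cost_usd(client, "SELECT * FROM my_dataset.big_table")
```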
Native dump: BigQuery doesn't support INSERT … VALUES for large datasets (insertion goes through DML or batch loads), so the SQL dump path emits a CREATE TABLE statement paired with a bq load-equivalent JSONL data file. Restore preview validates both.
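Restoring such a pair by hand looks roughly like the sketch below; the table ID and file name are hypothetical, and Quay's restore path is only equivalent in spirit:

```python
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("service-account.json")

# Assumes the CREATE TABLE half of the pair has already been applied.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
)
with open("events.jsonl", "rb") as f:
    load_job = client.load_table_from_file(
        f, "my-project.my_dataset.events", job_config=job_config
    )
load_job.result()  # block until the batch load completes
```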
Databricks
| Field | Notes |
|---|---|
| Workspace URL | dbc-xxxxxxxx-xxxx.cloud.databricks.com |
| Token | Personal access token (PAT) |
| HTTP path | The SQL warehouse’s HTTP path, from the Databricks UI |
| Default catalog · schema | Optional |
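The same three fields drive the databricks-sql-connector. A sketch with placeholder values:

```python
from databricks import sql

conn = sql.connect(
    server_hostname="dbc-xxxxxxxx-xxxx.cloud.databricks.com",  # workspace URL
    http_path="/sql/1.0/warehouses/abcdef1234567890",          # from the Databricks UI
    access_token="dapi...",                                    # personal access token
)
with conn.cursor() as cursor:
    cursor.execute("SELECT current_catalog(), current_database()")
    print(cursor.fetchone())
```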
Quay supports Databricks SQL Warehouses (the serverless / pro clusters) — not the all-purpose clusters that need Spark sessions. The latter are out of scope; use the Databricks notebook UI.
Native dump: Spark SQL dialect with Delta-table-aware CREATE TABLE … USING DELTA TBLPROPERTIES (...). Restore preview flags Delta-specific clauses if the target isn't Delta-capable.
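For illustration, the kind of statement the dump emits, replayed through the connector; the catalog, table, and property are invented:

```python
from databricks import sql

conn = sql.connect(
    server_hostname="dbc-xxxxxxxx-xxxx.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/abcdef1234567890",
    access_token="dapi...",
)
# Hypothetical dump output: Delta-aware Spark SQL DDL.
with conn.cursor() as cursor:
    cursor.execute("""
        CREATE TABLE main.analytics.events (
            id      BIGINT,
            payload STRING
        )
        USING DELTA
        TBLPROPERTIES ('delta.appendOnly' = 'true')
    """)
```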
Common patterns
All three engines share these surfaces:
- No FK enforcement — these are warehouses, not OLTP. Quay's FK-ordering logic still runs (informational warnings about referential integrity), but the dump doesn't try to ALTER TABLE … FK either way.
- Schema diff — works between warehouses (e.g. Snowflake → BigQuery) and gives a translation hint per type (VARIANT → JSON, STRING → STRING, etc.).
- Long-running queries — warehouse engines can run multi-minute queries; Quay's run button shows elapsed time plus a Cancel button that issues the engine's cancel command (see the sketch after this list).
- Cost / billing exposure — every grid shows bytes-processed (BigQuery) or warehouse-credits (Snowflake) for the executed query, so you can spot expensive queries before they show up on the cloud bill.
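What "issues the engine's cancel command" means differs per engine; for BigQuery it is a job-cancel call. A sketch (not Quay's actual code), with placeholder names:

```python
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("service-account.json")

# Kick off a potentially multi-minute query without blocking on results.
job = client.query("SELECT COUNT(*) FROM `my-project.my_dataset.big_table`")

# ...user clicks Cancel while the elapsed-time counter ticks...
job.cancel()  # best-effort cancellation request sent to the engine
```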
What’s deferred
- AWS Redshift is reached via the Postgres dialect's Redshift preset, not as a separate cloud-warehouse page.
- Athena / Glue — planned for v0.4.
- CockroachDB Cloud — uses the Postgres-CockroachDB preset.