Snowflake · BigQuery · Databricks

Three cloud warehouses, all Pro tier. Connecting is more involved than for the SQL family — these engines lean on cloud-style credentials (account locators, service-account key blobs, personal access tokens) rather than a plain host + user/password pair.

Snowflake

Field               Notes
Account locator     xy12345 or xy12345.us-east-1
User · Password     SQL auth (key-pair auth supported via private_key field)
Warehouse           Optional — the default warehouse to USE on every query
Role                Optional — the default role
Database · Schema   Default catalog for the session
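The same fields map one-to-one onto the official Python connector. A minimal sketch, assuming snowflake-connector-python; every value is a placeholder:

```python
import snowflake.connector

# Password-based SQL auth shown; key-pair auth would pass
# private_key=<DER-encoded key bytes> instead of password.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",  # account locator
    user="ANALYST",
    password="...",
    warehouse="COMPUTE_WH",       # optional: default warehouse to USE
    role="ANALYST_ROLE",          # optional: default role
    database="PROD",              # optional: default catalog
    schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("SELECT CURRENT_WAREHOUSE(), CURRENT_ROLE()")
print(cur.fetchone())
```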

Warehouse suspend/resume (Pro deep panel): documented in the Snowflake suspend/resume guide, under the engine deep features. The panel lists every warehouse with its STATE and SIZE; one-click Suspend stops the warehouse consuming credits immediately, and one-click Resume warms it back up.
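Under the hood these are ordinary Snowflake SQL commands, so the panel's actions can be reproduced from any session. A sketch (the warehouse name is a placeholder):

```python
import snowflake.connector

conn = snowflake.connector.connect(account="xy12345", user="ADMIN", password="...")
cur = conn.cursor()

# SHOW WAREHOUSES returns one row per warehouse; name and state
# ("STARTED" / "SUSPENDED") are the first two columns, size follows.
for row in cur.execute("SHOW WAREHOUSES"):
    print(row[0], row[1])

# The one-click actions map to ALTER WAREHOUSE:
cur.execute("ALTER WAREHOUSE COMPUTE_WH SUSPEND")  # stop credit consumption
cur.execute("ALTER WAREHOUSE COMPUTE_WH RESUME")   # warm it back up
```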

Native dump: the SQL dump grammar is adjusted to Snowflake’s CREATE OR REPLACE TABLE … syntax and Snowflake-specific data types (VARIANT, OBJECT, ARRAY). The dump is restorable on Snowflake or, with the --dialect=snowflake-to-postgres flag, translated for restore on PG.
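The dialect translation is, at its core, a type-mapping pass. A toy sketch of the kind of mapping a snowflake-to-postgres translation has to apply (the table and function here are illustrative, not Quay's actual internals):

```python
# Illustrative Snowflake → Postgres column-type mapping.
SNOWFLAKE_TO_PG = {
    "VARIANT": "JSONB",        # semi-structured values → binary JSON
    "OBJECT": "JSONB",
    "ARRAY": "JSONB",
    "NUMBER": "NUMERIC",
    "STRING": "TEXT",
    "TIMESTAMP_NTZ": "TIMESTAMP",
}

def translate_column(sf_type: str) -> str:
    """Map a Snowflake column type to its closest Postgres equivalent,
    passing unknown types through unchanged."""
    return SNOWFLAKE_TO_PG.get(sf_type.upper(), sf_type)

assert translate_column("variant") == "JSONB"
```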

BigQuery

Field                  Notes
Project ID             GCP project the dataset lives in
Service account JSON   The full JSON key blob (paste it; stored in connections.json like any other credential)
Default dataset        Optional
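Those fields are exactly what the official client needs. A minimal sketch; the project ID and key path are placeholders, and the key blob is loaded from disk here only for the example:

```python
from google.cloud import bigquery
from google.oauth2 import service_account

# The same JSON key blob you'd paste into the connection form.
creds = service_account.Credentials.from_service_account_file("key.json")
client = bigquery.Client(project="my-gcp-project", credentials=creds)

# Default dataset is optional; fully qualified table names always work.
for row in client.query("SELECT 1 AS ok").result():
    print(row.ok)
```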

Dry-run cost estimator (Pro deep panel): paste a query in the panel, click Estimate. BigQuery’s dry-run API returns total_bytes_processed; Quay computes the on-demand cost ($5 / TiB scanned) and shows it as a card. Useful for answering “is this query going to cost $0.04 or $400?” before you click Run.
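The same estimate is a few lines against the API directly. A sketch, with a placeholder table name and the panel's assumed $5 / TiB on-demand rate:

```python
from google.cloud import bigquery

client = bigquery.Client()

# dry_run plans the query without executing it; disabling the cache
# makes the byte count reflect what a real scan would read.
cfg = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query("SELECT * FROM `my-gcp-project.sales.orders`", job_config=cfg)

tib = job.total_bytes_processed / 2**40
print(f"{job.total_bytes_processed:,} bytes ≈ ${tib * 5:.2f} on-demand")
```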

Native dump: BigQuery can’t practically be populated with INSERT … VALUES at scale (ingestion goes through DML or batch loads), so the SQL dump path emits a CREATE TABLE statement paired with a bq load-equivalent JSONL data file. Restore preview validates both.
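On restore, the data half of the pair goes through a batch load rather than row-by-row inserts. The equivalent client call, sketched with placeholder file and table names:

```python
from google.cloud import bigquery

client = bigquery.Client()
cfg = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
)

# Batch-load the dump's JSONL data file into the re-created table.
with open("orders.jsonl", "rb") as f:
    job = client.load_table_from_file(f, "sales.orders", job_config=cfg)
job.result()  # block until the load job finishes
```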

Databricks

Field                      Notes
Workspace URL              dbc-xxxxxxxx-xxxx.cloud.databricks.com
Token                      Personal access token (PAT)
HTTP path                  The SQL warehouse’s HTTP path, from the Databricks UI
Default catalog · schema   Optional

Quay supports Databricks SQL Warehouses (the serverless / pro clusters) — not the all-purpose clusters that need Spark sessions. The latter are out of scope; use the Databricks notebook UI.
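The fields map directly onto the SQL-warehouse connector. A minimal sketch, assuming databricks-sql-connector; every value is a placeholder:

```python
from databricks import sql  # pip install databricks-sql-connector

conn = sql.connect(
    server_hostname="dbc-xxxxxxxx-xxxx.cloud.databricks.com",  # workspace URL
    http_path="/sql/1.0/warehouses/abc123",  # SQL warehouse's HTTP path
    access_token="dapi...",                  # personal access token (PAT)
    catalog="main",                          # optional default catalog
    schema="default",                        # optional default schema
)
with conn.cursor() as cur:
    cur.execute("SELECT current_catalog(), current_schema()")
    print(cur.fetchone())
```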

Native dump: Spark SQL dialect with Delta-table-aware CREATE TABLE … USING DELTA TBLPROPERTIES (...). Restore preview flags Delta-specific clauses if the target isn’t Delta-capable.
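For a feel of that grammar, here is a hand-written Delta DDL of the shape the dump emits, executed through the same connector (the table and property are made up for the example):

```python
from databricks import sql

conn = sql.connect(
    server_hostname="dbc-xxxxxxxx-xxxx.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/abc123",
    access_token="dapi...",
)
with conn.cursor() as cur:
    # Delta-table-aware CREATE TABLE with table properties.
    cur.execute("""
        CREATE TABLE events (
            id      BIGINT,
            payload STRING,
            ts      TIMESTAMP
        )
        USING DELTA
        TBLPROPERTIES ('delta.appendOnly' = 'false')
    """)
```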

All three engines share these surfaces:

  • No FK enforcement — these are warehouses, not OLTP. Quay’s FK-ordering logic still runs (informational warnings about referential integrity), but the dump doesn’t try to ALTER TABLE … FK either way.
  • Schema diff — works between warehouses (e.g. Snowflake → BigQuery) and gives a translation hint per type (VARIANT → JSON, STRING → STRING, etc.).
  • Long-running queries — warehouse engines can run multi-minute queries; Quay’s run button shows elapsed time plus a Cancel button that issues the engine’s cancel command (see the sketch after this list).
  • Cost / billing exposure — every grid shows bytes-processed (BigQuery) or warehouse-credits (Snowflake) for the executed query, so you can spot expensive queries before they show up on the cloud bill.
  • AWS Redshift is reached via the Postgres dialect’s Redshift preset, not as a separate cloud-warehouse page.
  • Athena / Glue — planned for v0.4.
  • CockroachDB Cloud — uses the Postgres-CockroachDB preset.
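The cancel path can be exercised directly against BigQuery, where it is a plain job-cancel request (a sketch; the table name is a placeholder, and the other engines expose equivalent cancel commands):

```python
import time
from google.cloud import bigquery

client = bigquery.Client()

# client.query() returns immediately; the job keeps running server-side.
job = client.query("SELECT COUNT(*) FROM `my-gcp-project.sales.orders`")

time.sleep(5)
if not job.done():
    job.cancel()  # the same cancel request the Cancel button issues for BigQuery
```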