Vector DBs (Qdrant · Weaviate · Pinecone)

Pro-tier engines. Each has a connection, a collection browser, and a shared Vector similarity search panel that drives the engine’s native KNN endpoint.

Qdrant
  Field      Notes
  URL        https://your-cluster.qdrant.io:6333 or http://localhost:6333
  API key    Optional (Qdrant Cloud; bare Qdrant has none)

Weaviate
  Field      Notes
  URL        https://your-cluster.weaviate.network
  API key    The Authorization: Bearer <key> token

Pinecone
  Field        Notes
  Environment  e.g. us-east-1-aws
  API key      The Pinecone API key
  Index        The Pinecone index name

Each engine names its top-level vector container differently — Quay normalises to “collection” for consistency:

  • Qdrant: collections + their points
  • Weaviate: classes + their objects
  • Pinecone: indexes + their vectors

The rail shows collections; clicking one opens an info panel with its vector dimension, distance metric (cosine / euclidean / dot), point count, and any associated payload schema.
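For illustration, a normalised per-collection info shape along these lines would cover all three engines (a sketch; the type and field names are assumptions, not Quay’s actual code):

// Hypothetical normalised shape, illustrative only.
type DistanceMetric = "cosine" | "euclidean" | "dot";

interface CollectionInfo {
  name: string;                            // Qdrant collection / Weaviate class / Pinecone index
  dimension: number;                       // vector dimension, e.g. 1536
  metric: DistanceMetric;                  // distance metric reported by the engine
  pointCount: number;                      // points / objects / vectors
  payloadSchema?: Record<string, string>;  // payload field -> type, when the engine exposes one
}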

Settings → Pro → Vector similarity search (deep panel). Same UI across all three engines — Quay translates to the engine’s REST shape internally:

  • Collection field — pick from the rail or paste a name
  • Top-K — defaults to 10
  • Query vector — paste a JSON array of floats ([0.1, 0.2, …]). Quay validates that its dimension matches the collection’s.
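That validation amounts to parsing the pasted JSON and comparing lengths, roughly like this (a sketch, not Quay’s actual code):

// Parse the pasted text and check it against the collection's dimension.
function parseQueryVector(text: string, expectedDim: number): number[] {
  const parsed = JSON.parse(text);
  if (!Array.isArray(parsed) || !parsed.every((x) => typeof x === "number" && Number.isFinite(x))) {
    throw new Error("Query vector must be a JSON array of finite numbers");
  }
  if (parsed.length !== expectedDim) {
    throw new Error(`Dimension mismatch: vector has ${parsed.length} values, collection expects ${expectedDim}`);
  }
  return parsed;
}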

Results render in the standard grid: top-K matches with their scores + payload preview. Click a result to jump to the underlying point/object in the browser.

The frontend dispatch is shared across all three engines:

// Qdrant
POST /collections/<name>/points/search
{ "vector": [...], "limit": K, "with_payload": true }
// Weaviate
POST /v1/graphql
{ "query": "{ Get { <Class>(nearVector: { vector: [...] }, limit: K) { _additional { id distance } } } }" }
// Pinecone
POST /query
{ "vector": [...], "topK": K, "includeMetadata": true }

Quay’s vector search is a query feature, not an AI feature. You bring the vector (or paste it). There’s no embedding model involved on Quay’s side — embedding text → vector is something you do in your application code, with whichever model suits your workload (OpenAI text-embedding-3, Sentence-Transformers, Cohere Embed, etc.). Quay’s job is “vector in, top-K out”.
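For example, producing a pasteable vector in application code might look like this (a sketch using OpenAI’s embeddings endpoint; any embedding provider works the same way):

// Application-side embedding: this happens outside Quay.
// The model and endpoint are examples; swap in whichever provider you use.
async function embedForQuay(text: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  const json = await res.json();
  const vector: number[] = json.data[0].embedding;
  // A JSON array of floats, ready to paste into the Query vector field.
  return JSON.stringify(vector);
}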

This keeps the no-AI-dependency rule intact: vector similarity search is fully deterministic + works against vectors you’ve computed yourself, with no LLM call from Quay.

These engines don’t have a “SQL dump”-equivalent shape. Quay’s backup path emits the collection schema + a JSONL of every point ({"id": …, "vector": […], "payload": {…}}). Restore preview validates dimension consistency and payload-schema alignment with the target collection.

The dumps are large — a 1M-point 1536-dim collection is ~8 GB JSONL — so Quay streams the dump line-by-line to disk + shows live progress on bytes/points.
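As a rough picture of the Qdrant case, streaming a collection to JSONL via Qdrant’s scroll endpoint looks something like this (a sketch, not Quay’s backup implementation; real code would also handle backpressure and report progress):

// Stream a Qdrant collection to JSONL, one point per line, without
// holding the whole collection in memory. Illustrative only.
import { createWriteStream } from "node:fs";

async function dumpCollection(baseUrl: string, collection: string, outPath: string): Promise<number> {
  const out = createWriteStream(outPath);
  let offset: unknown = null;
  let points = 0;
  do {
    const res = await fetch(`${baseUrl}/collections/${collection}/points/scroll`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        limit: 256,
        with_payload: true,
        with_vector: true,
        ...(offset != null ? { offset } : {}),
      }),
    });
    const { result } = await res.json();
    for (const p of result.points) {
      // Same line shape as described above: {"id": …, "vector": […], "payload": {…}}
      out.write(JSON.stringify({ id: p.id, vector: p.vector, payload: p.payload }) + "\n");
      points += 1;
    }
    offset = result.next_page_offset; // null once the scroll is exhausted
  } while (offset != null);
  out.end();
  return points;
}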

Not everything is covered yet:

  • Sparse vectors (Qdrant 1.7+, Pinecone hybrid search) — the payload structure is recognised, but the search panel doesn’t yet let you provide a sparse vector alongside the dense one. Use the raw command runner for now.
  • Cluster topology / replica info — surfaced read-only.
  • Quantisation tuning — Qdrant supports per-collection quantisation; Quay shows the current setting + lets you edit it, but doesn’t yet provide the per-quantisation-mode performance chart the Qdrant UI does.

Qdrant, Weaviate, and Pinecone cover the vast majority of the production vector-DB market. Milvus and Chroma are on the roadmap (Milvus mainly waiting on a stable Rust client; Chroma’s embedded-mode quirks need a careful integration). pgvector works through the Postgres dialect — <-> operator queries run fine through the standard SQL editor.