Launching on Product Hunt — May 1. Get notified & support the launch →

Replace 5 databases with 1 universal engine.

Stop wrestling with fragmented data silos. NodeDB natively combines relational, vector (AI), graph, document, columnar, and scientific array data into a hyper-efficient Rust architecture. Your existing Postgres client just works.

Star on GitHub · Get Early Access · Join Discord
GraphRAG · vector search + graph expansion · one query
-- Semantic retrieval + graph context for your LLM. One statement.
GRAPH RAG FUSION ON entities
  QUERY           $1
  VECTOR_TOP_K    50
  EXPANSION_DEPTH 2
  EDGE_LABEL      'related_to'
  DIRECTION       both
  FINAL_TOP_K     10
  RRF_K           (60.0, 35.0)
  MAX_VISITED     1000;

Vector DB + graph DB + ranker, fused in one query. No pipelines. No Python glue. This is GraphRAG at the database layer.
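The fusion step has a simple core. As a rough mental model only (illustrative Python, not NodeDB internals), reciprocal-rank fusion scores each document by its summed reciprocal ranks across the two result lists; here we assume the RRF_K pair above supplies a per-source k constant:

```python
def rrf_fuse(vector_hits, graph_hits, k_vector=60.0, k_graph=35.0, top_k=10):
    """Reciprocal-rank fusion: score(doc) = sum over sources of 1 / (k + rank)."""
    scores = {}
    for k, hits in ((k_vector, vector_hits), (k_graph, graph_hits)):
        for rank, doc in enumerate(hits, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    ranked = sorted(scores, key=lambda d: -scores[d])
    return ranked[:top_k]

# Docs found by BOTH sources ("c", "a") rise above single-source hits.
vector_hits = ["a", "b", "c", "d"]   # ANN results, best first
graph_hits = ["c", "e", "a"]         # graph-expansion results, best first
print(rrf_fuse(vector_hits, graph_hits))  # -> ['c', 'a', 'e', 'b', 'd']
```

A smaller k amplifies a source's influence, which is why the two constants are tunable per source.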

8 data engines · 1 Rust binary · pgwire — any Postgres client works · 4.5MB embedded (WASM)

The stack you were going to build

Most AI product teams ship with Postgres + a vector DB + a graph DB + a cache + a search engine — and scientific teams add an array store on top. That's five or six systems to deploy, as many bills and failure modes, and a mountain of glue code. NodeDB is one.

Before — six services, your ops team, 3 a.m. pages
  • Postgres for users, orders, billing
  • Pinecone / Weaviate for embeddings
  • Neo4j for social graph & recommendations
  • Redis for sessions & cache
  • Elasticsearch for product search
  • TileDB / Zarr for scientific arrays (genomics, climate, earth obs)
  • Glue code, dual writes, sync drift, cross-service joins in your app layer.
After — one connection string
  • NodeDB — relational tables
  • NodeDB — vector index with HNSW + PQ
  • NodeDB — property graph with 13 algorithms
  • NodeDB — key-value with O(1) hash lookups
  • NodeDB — full-text search with BM25
  • NodeDB — ND sparse arrays for genomics, climate, earth obs
  • Bitemporal on the engines that need it — audit, time-travel, GDPR-safe erasure built in
  • RLS + RBAC + audit log + tenant isolation — multi-tenant SaaS without writing row filters
  • One SQL planner joins them natively. Ship the app, not the plumbing.
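For a sense of what the full-text engine's BM25 ranking means, here is the textbook scoring formula in a few lines of Python — a sketch, not NodeDB's implementation; the corpus and parameters are made up for illustration:

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Textbook BM25: rewards term frequency, penalizes common terms and long docs."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)   # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        tf = doc.count(term)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [["rust", "database"], ["graph", "rag", "database"], ["vector", "search"]]
scores = [bm25_score(["database"], d, corpus) for d in corpus]
print(scores.index(max(scores)))  # the shorter matching doc ranks first
```

The length normalization (the `b` term) is why a short doc containing the term outranks a longer one with the same term count.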

The questions you're about to ask

Isn't cramming everything into one process a monolith?
Yes — deliberately. Sharing one process means cross-engine queries run in the same address space with zero network hops. The alternative (five services) is a distributed monolith pretending not to be one, with worse latency and worse consistency. We scale horizontally with auto-rebalancing vShards, not by splitting engines.
How is this different from Postgres with pgvector?
Postgres is our wire protocol — that's why your existing client works unchanged. But pgvector bolts a vector index onto a row-store planner that doesn't understand hybrid ranking. We built the planner from scratch to route each sub-query to the engine that fits: graph traversals use CSR, vector search uses HNSW, analytics use columnar.
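CSR (compressed sparse row) is why graph traversals stay cache-friendly: all of a node's neighbors sit in one contiguous slice of a flat array. A minimal illustration, not NodeDB's actual layout:

```python
# CSR adjacency for the graph 0->1, 0->2, 1->2, 2->0 (3 nodes, 4 edges).
# neighbors[offsets[v] : offsets[v + 1]] are exactly node v's out-neighbors.
offsets = [0, 2, 3, 4]      # length = num_nodes + 1
neighbors = [1, 2, 2, 0]    # edge targets, grouped by source node

def out_neighbors(v):
    """One contiguous slice per node — no pointer chasing."""
    return neighbors[offsets[v]:offsets[v + 1]]

print(out_neighbors(0))  # -> [1, 2]
```

Two flat arrays instead of per-node pointer lists is what makes breadth-first expansion fast on modern CPUs.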
What about security and multi-tenancy?
All first-class. Row-Level Security via CREATE RLS POLICY with $auth.id / $auth.role context — applied transparently to every engine, no app changes. Audit logging captures who did what and when, tamper-evident. Tenant isolation, RBAC, and RLS compose so you can run real multi-tenant SaaS on one cluster without writing your own row filters.
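Conceptually, an RLS policy is a predicate evaluated with the caller's auth context against every candidate row before results leave the engine. A toy Python sketch — the policy shape, field names, and data are hypothetical:

```python
# Hypothetical rows and a hypothetical policy, for illustration only.
rows = [
    {"id": 1, "tenant": "acme", "owner": "alice"},
    {"id": 2, "tenant": "acme", "owner": "bob"},
    {"id": 3, "tenant": "globex", "owner": "carol"},
]

def policy(row, auth):
    """Stand-in for a policy like: tenant = $auth.tenant AND owner = $auth.id."""
    return row["tenant"] == auth["tenant"] and row["owner"] == auth["id"]

auth = {"id": "alice", "tenant": "acme", "role": "member"}
visible = [r for r in rows if policy(r, auth)]
print([r["id"] for r in visible])  # -> [1]
```

The point of doing this in the engine rather than the app: every query path, across every data model, passes through the same filter.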
Does it support time-travel?
Yes — for the engines where it matters. Bitemporal covers graph, strict documents, columnar (plain + timeseries), arrays, and CRDT sync. Every write there records both valid time (when the fact was true in the real world) and system time (when the database learned it). You can run any query AS OF a past moment, audit a row's full history, or do GDPR-compliant erasure that preserves the audit trail. Vector, FTS, KV, and spatial indexes stay current-state-only by design — that's where you want speed, not history.
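The two-timeline bookkeeping can be sketched in a few lines. Illustrative only — integer timestamps and the version-tuple layout are invented for the example:

```python
# Each version records valid time (true in the world) and system time (known to DB).
versions = [
    # (value, valid_from, valid_to, system_from, system_to); None = still open
    ("old_address", 0, 10, 0, None),
    ("new_address", 10, None, 12, None),  # change at t=10, learned at t=12
]

def as_of(valid_t, system_t):
    """Return the value valid at valid_t, as the database knew it at system_t."""
    for value, vf, vt, sf, st in versions:
        valid_ok = vf <= valid_t and (vt is None or valid_t < vt)
        known_ok = sf <= system_t and (st is None or system_t < st)
        if valid_ok and known_ok:
            return value
    return None

print(as_of(valid_t=11, system_t=11))  # -> None (the DB didn't know yet)
print(as_of(valid_t=11, system_t=15))  # -> new_address
```

The separation is what makes audit answers honest: "what was true" and "what we believed" can differ, and both are queryable.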
Is the array engine real, or a checkbox?
Real engine. ND coordinate-indexed sparse arrays with Hilbert/Z-order locality, per-tile MBR pruning, compression via our codec stack, WAL-durable, Raft-replicated. Built to replace TileDB / SciDB / Zarr for genomics (variant × sample × position), earth observation (lat × lon × band × time), and climate cubes. And yes — you can join an array slice against a vector ANN search in one SQL query.
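Z-order locality, one of the tile-layout tricks named above, just interleaves coordinate bits so spatially close cells sort close together on disk. A minimal 2-D sketch (Hilbert curves, which NodeDB also mentions, preserve locality even better but are longer to write down):

```python
def morton2(x, y, bits=16):
    """Interleave the bits of (x, y) into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bits land on even positions
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bits land on odd positions
    return key

# Sorting tiles by Z-order key keeps spatial neighbors adjacent in storage.
cells = [(2, 0), (0, 1), (1, 1), (0, 0), (1, 0)]
print(sorted(cells, key=lambda c: morton2(*c)))
# -> [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]
```

That ordering is what lets per-tile bounding-box (MBR) pruning skip most of the file for a small spatial slice.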
Does it run on-device?
Yes. NodeDB-Lite is a 4.5MB WASM + iOS/Android FFI build — the same 8 engines, embedded. Edge-to-cloud sync is CRDT-native via Loro, so offline writes merge deterministically when the device reconnects.
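Why do offline writes merge deterministically? Because CRDT merge is commutative: every replica computes the same winner no matter which order the syncs arrive. A last-writer-wins toy in Python — Loro's actual CRDTs are far richer than this:

```python
# LWW-map sketch: each entry is (timestamp, replica_id, value); the replica_id
# breaks timestamp ties so the outcome is total-ordered and deterministic.
def merge(a, b):
    """Per key, keep the write with the greatest (timestamp, replica_id)."""
    out = dict(a)
    for key, (ts, replica, value) in b.items():
        if key not in out or (ts, replica) > out[key][:2]:
            out[key] = (ts, replica, value)
    return out

phone = {"title": (5, "phone", "Draft v2")}
laptop = {"title": (7, "laptop", "Draft v3"), "tags": (3, "laptop", ["ai"])}

# Same result regardless of which device syncs first.
assert merge(phone, laptop) == merge(laptop, phone)
print(merge(phone, laptop)["title"][2])  # -> Draft v3
```

Commutativity (plus associativity and idempotence) is the whole trick: no coordination, no conflict dialogs, one converged state.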

Ship AI products, not plumbing.

Get early access to the launch. We'll send you the repo, the benchmarks, and the design-partner invite.