Use case · Greenplum migration
Your Greenplum deployment is stable, your team knows it inside out, and the SQL is all working. What you need is a modern engine underneath it, not a forklift to a new paradigm. SynxDB is the incremental next step — same lineage, same ops, cloud-native underneath.
SynxDB shares the Greenplum-to-Apache-Cloudberry lineage, so the migration is mostly connection-string work. Here's what doesn't change.
Greenplum and SynxDB share the same Postgres lineage. Standard DML, DDL, window functions, CTEs, external tables — all carry over. Most workloads need little to no rewrite.
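To make that concrete, here's a minimal sketch: a CTE plus a window function, sent unchanged through a standard Postgres driver. The table, columns, and hostname are hypothetical placeholders, not anything specific to SynxDB.

```python
# Sketch: the same CTE + window-function SQL runs unchanged on either
# engine. Table, columns, and DSN below are hypothetical placeholders.
import psycopg2

QUERY = """
WITH daily AS (
    SELECT order_date, SUM(amount) AS revenue
    FROM sales
    GROUP BY order_date
)
SELECT order_date,
       revenue,
       AVG(revenue) OVER (
           ORDER BY order_date
           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
       ) AS revenue_7d
FROM daily
ORDER BY order_date;
"""

with psycopg2.connect("host=synxdb.internal dbname=analytics") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for row in cur.fetchmany(5):
            print(row)
```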
Coordinator + segment topology, gpinitsystem, gpstate, gpbackup — the admin surface you already know. No new vocabulary, no new runbook.
dbt, your BI stack, your orchestration, your drivers. Anything that speaks Postgres wire protocol to Greenplum will speak it to SynxDB. Connection string change, not a rewrite.
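Here's what "connection string change" means in practice, as a hedged sketch; the hostnames, database, and user are placeholders for your own environment.

```python
# Sketch: repointing an existing job is a DSN swap. Hostnames, database,
# and credentials are placeholders.
import psycopg2

GREENPLUM_DSN = "postgresql://etl_user@greenplum.internal:5432/analytics"
SYNXDB_DSN = "postgresql://etl_user@synxdb.internal:5432/analytics"

def fetch_version(dsn: str) -> str:
    # Same driver, same wire protocol; only the DSN differs.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        return cur.fetchone()[0]

print(fetch_version(SYNXDB_DSN))  # flip GREENPLUM_DSN to SYNXDB_DSN and done
```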
gpbackup + gprestore still work. Parallel file transfer, partitioned table migration, and external-table patterns are all preserved. Terabytes-to-petabytes migration is a known quantity.
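A hedged sketch of that flow, driven from Python for illustration. The flags shown are standard gpbackup/gprestore options, but flag sets vary by version, so verify against --help on your cluster; paths and the timestamp key are placeholders.

```python
# Sketch of a backup-on-Greenplum, restore-on-SynxDB flow. Verify the
# flags against your installed gpbackup/gprestore version; the path and
# timestamp below are placeholders.
import subprocess

BACKUP_DIR = "/data/backups"   # storage reachable from both clusters
TIMESTAMP = "20250101120000"   # gpbackup prints this key on completion

# On the Greenplum side: parallel, segment-level backup.
subprocess.run(
    ["gpbackup", "--dbname", "analytics", "--backup-dir", BACKUP_DIR],
    check=True,
)

# On the SynxDB side: restore the same backup set by timestamp.
subprocess.run(
    ["gprestore", "--timestamp", TIMESTAMP, "--backup-dir", BACKUP_DIR],
    check=True,
)
```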
Greenplum's upstream open-source cadence has slowed; SynxDB tracks Apache Cloudberry and adds operational polish. Here's why to move.
Greenplum open-source development slowed after VMware acquired Pivotal and then Broadcom acquired VMware. SynxDB is built on Apache Cloudberry (Incubating) — actively developed, Apache 2.0, no acquisition risk.
PAX (hybrid row/column) tables, decoupled compute and storage, native cloud object-storage integration. Your cold data lives on S3; only the hot slice needs local disk.
SynxDB Cloud runs as a managed service on AWS. No more forklift upgrades, no more capacity planning by hand. SynxDB keeps the self-managed path for teams that need it.
Cost-based optimizer improvements from upstream Apache Cloudberry — better join reordering, better cardinality estimation, parallel-aware planning for modern CPUs.
Step 1 · Assess
We review your schema, top queries, and operational patterns. Output is a compatibility report — what moves cleanly, what needs attention, and a rough sizing for the target cluster.
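A minimal sketch of the inventory half of that review, using catalog views standard to the Postgres lineage; the DSN is a placeholder.

```python
# Sketch: list the 50 largest user tables on the source cluster to seed
# the compatibility report. pg_stat_user_tables is standard catalog
# surface in the Postgres lineage; the DSN is a placeholder.
import psycopg2

INVENTORY_SQL = """
SELECT schemaname,
       relname,
       pg_total_relation_size(relid) AS total_bytes
FROM pg_stat_user_tables
ORDER BY total_bytes DESC
LIMIT 50;
"""

with psycopg2.connect("host=greenplum.internal dbname=analytics") as conn:
    with conn.cursor() as cur:
        cur.execute(INVENTORY_SQL)
        for schema, table, size in cur.fetchall():
            print(f"{schema}.{table}: {size / 1e9:.1f} GB")
```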
Step 2 · Pilot
Stand up a SynxDB cluster (Cloud or self-managed) sized for a representative slice of your workload. Run your top 10–20 queries side-by-side against Greenplum. Compare correctness and performance.
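A minimal side-by-side harness might look like the sketch below: run one query against both clusters, compare row sets and wall-clock time. The DSNs and the sample query are placeholders.

```python
# Minimal side-by-side harness: same query on both clusters, compare
# results and timing. DSNs and the query are placeholders.
import time
import psycopg2

DSNS = {
    "greenplum": "host=greenplum.internal dbname=analytics",
    "synxdb": "host=synxdb.internal dbname=analytics",
}

def run(dsn: str, sql: str):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        start = time.perf_counter()
        cur.execute(sql)
        rows = cur.fetchall()
        return rows, time.perf_counter() - start

def compare(sql: str) -> None:
    results = {name: run(dsn, sql) for name, dsn in DSNS.items()}
    gp_rows, gp_time = results["greenplum"]
    sx_rows, sx_time = results["synxdb"]
    # Sort before comparing: without ORDER BY, row order is not guaranteed.
    match = sorted(gp_rows) == sorted(sx_rows)
    print(f"match={match}  greenplum={gp_time:.2f}s  synxdb={sx_time:.2f}s")

compare("SELECT region, COUNT(*) FROM sales GROUP BY region;")
```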
Step 3 · Migrate
Typical pattern: backfill historical data, dual-write fresh data for N weeks to validate consistency, then cut reads over on a defined switchover date. The rollback path stays open until you're confident.
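A hedged sketch of the dual-write window. Greenplum stays the system of record; SynxDB writes are best-effort and repaired by backfill if they fail. The table, columns, and DSNs are hypothetical.

```python
# Sketch of the dual-write window. Greenplum remains the system of
# record; SynxDB is the mirror. Table, columns, and DSNs are hypothetical.
import psycopg2

GP_DSN = "host=greenplum.internal dbname=analytics"
SX_DSN = "host=synxdb.internal dbname=analytics"
INSERT = "INSERT INTO events (id, ts, payload) VALUES (%s, %s, %s)"

def dual_write(rows: list[tuple]) -> None:
    # System of record first: a failure here aborts the batch outright.
    with psycopg2.connect(GP_DSN) as gp, gp.cursor() as cur:
        cur.executemany(INSERT, rows)
    # Mirror second, log-and-continue: a replica hiccup must never block
    # production writes. Gaps are squared up by the next backfill pass.
    try:
        with psycopg2.connect(SX_DSN) as sx, sx.cursor() as cur:
            cur.executemany(INSERT, rows)
    except psycopg2.Error as exc:
        print(f"dual-write to SynxDB failed; flagging for re-sync: {exc}")
```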
Step 4 · Decommission
Run Greenplum in parallel for a grace period — usually one full close cycle — then retire. By this point your team has been operating on SynxDB for weeks; decommission is an ops event, not a migration event.
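Before pulling the plug, a grace-period sanity check is cheap. A minimal sketch, assuming the dual-written tables from Step 3; the table list and DSNs are placeholders.

```python
# Sketch: before retiring Greenplum, confirm both clusters agree on row
# counts per table. Table list and DSNs are placeholders.
import psycopg2

DSNS = {
    "greenplum": "host=greenplum.internal dbname=analytics",
    "synxdb": "host=synxdb.internal dbname=analytics",
}
TABLES = ["sales", "events"]  # hypothetical; use your own inventory

def count(dsn: str, table: str) -> int:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table};")  # trusted names only
        return cur.fetchone()[0]

for table in TABLES:
    counts = {name: count(dsn, table) for name, dsn in DSNS.items()}
    status = "OK" if len(set(counts.values())) == 1 else "MISMATCH"
    print(f"{table}: {counts} {status}")
```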
Early access is open. A starter cluster is free while we're in preview — no credit card, no sales call.