Z-Order vs V-Order: Choosing the Right Data Ordering

Both Z-Order and V-Order optimize the physical layout of Delta tables for faster queries, but they work differently and serve different purposes. Understanding when to use each is key to getting the best performance from your workloads.

How They Work

Z-Order (Multi-dimensional Sorting)

Z-Order interleaves bits from the values of multiple columns (a Morton, or Z-curve, ordering) so that rows with similar value combinations sort near each other:

Column A: region     Column B: date
─────────────────────────────────────
USA = 01            2024-01 = 01
EUR = 10            2024-02 = 10
APAC = 11           2024-03 = 11

Z-Order interleaves: (region_bit, date_bit, region_bit, date_bit...)

USA + 2024-01 = 0011
USA + 2024-02 = 0110
EUR + 2024-01 = 1001
EUR + 2024-02 = 1100

Result: Similar combinations are stored together
┌─────────────────────────────────────────────────────────────┐
│  File 1: USA/2024-01, USA/2024-02 data clustered            │
│  File 2: EUR/2024-01, EUR/2024-02 data clustered            │
│  File 3: APAC data clustered                                 │
└─────────────────────────────────────────────────────────────┘
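
The interleaving above is a Morton (Z-curve) encoding. Here is a minimal Python sketch, purely illustrative and not the engine's implementation, that reproduces the four codes from the example:

def z_value(region: int, date: int, bits: int = 2) -> int:
    """Interleave the bits of two values (Morton/Z-curve encoding)."""
    result = 0
    for i in range(bits):
        result |= ((region >> i) & 1) << (2 * i + 1)  # region bits -> odd positions
        result |= ((date >> i) & 1) << (2 * i)        # date bits -> even positions
    return result

# USA = 0b01, EUR = 0b10; 2024-01 = 0b01, 2024-02 = 0b10
print(f"{z_value(0b01, 0b01):04b}")  # 0011 -> USA + 2024-01
print(f"{z_value(0b01, 0b10):04b}")  # 0110 -> USA + 2024-02
print(f"{z_value(0b10, 0b01):04b}")  # 1001 -> EUR + 2024-01
print(f"{z_value(0b10, 0b10):04b}")  # 1100 -> EUR + 2024-02

Sorting rows by this interleaved value keeps similar (region, date) combinations in the same files, so the engine can skip files whose min/max ranges don't overlap a query's predicates.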

V-Order (Value-based Sorting)

V-Order sorts and encodes data at write time to maximize compression and produce tighter per-row-group statistics:

Before V-Order (write order):
┌─────────────────────────────────────────────────────────────┐
│  USA, Product A, 100                                        │
│  EUR, Product B, 200                                        │
│  USA, Product A, 150                                        │
│  EUR, Product A, 100                                        │
└─────────────────────────────────────────────────────────────┘

After V-Order (optimized):
┌─────────────────────────────────────────────────────────────┐
│  EUR, Product A, 100                                        │
│  EUR, Product B, 200                                        │
│  USA, Product A, 100                                        │
│  USA, Product A, 150                                        │
└─────────────────────────────────────────────────────────────┘

Benefits:
- Better RLE compression (EUR,EUR... USA,USA...)
- Better min/max statistics per row group
- More efficient dictionary encoding
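
To see why sorted data yields better run-length encoding, here is a tiny illustrative sketch; V-Order's actual write-time algorithm (sorting, row-group distribution, dictionary encoding) is more sophisticated than a plain sort:

def rle_runs(values: list) -> int:
    """Count contiguous runs; fewer runs means better RLE compression."""
    if not values:
        return 0
    return 1 + sum(1 for prev, curr in zip(values, values[1:]) if curr != prev)

regions = ["USA", "EUR", "USA", "EUR"]  # write order from the example
print(rle_runs(regions))                # 4 runs
print(rle_runs(sorted(regions)))        # 2 runs: EUR, EUR, USA, USA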

Side-by-Side Comparison

comparison_matrix = {
    "characteristic": {
        "when_applied": {
            "Z-Order": "During OPTIMIZE operation (explicit)",
            "V-Order": "During write (automatic)"
        },
        "columns": {
            "Z-Order": "User specifies 1-4 columns",
            "V-Order": "Algorithm chooses based on all columns"
        },
        "optimization_target": {
            "Z-Order": "Data skipping for specific filters",
            "V-Order": "Compression and general query performance"
        },
        "maintenance": {
            "Z-Order": "Requires periodic OPTIMIZE ZORDER BY",
            "V-Order": "Automatic on every write"
        },
        "write_overhead": {
            "Z-Order": "High (rewrites entire table/partition)",
            "V-Order": "Low (applied during initial write)"
        },
        "best_for": {
            "Z-Order": "Known, specific query patterns",
            "V-Order": "General workloads, Direct Lake"
        }
    }
}

# Print comparison
for char, values in comparison_matrix["characteristic"].items():
    print(f"\n{char}:")
    print(f"  Z-Order: {values['Z-Order']}")
    print(f"  V-Order: {values['V-Order']}")

When to Use Each

Use V-Order When

vorder_use_cases = [
    {
        "scenario": "Power BI Direct Lake",
        "reason": "V-Order is specifically optimized for VertiPaq engine",
        "config": "Enable by default on all Direct Lake source tables"
    },
    {
        "scenario": "Unknown query patterns",
        "reason": "V-Order provides general optimization for any query",
        "config": "Enable during write for all tables"
    },
    {
        "scenario": "High write frequency",
        "reason": "V-Order applies automatically, no maintenance needed",
        "config": "Enable via table property or session config"
    },
    {
        "scenario": "Storage cost optimization",
        "reason": "V-Order improves compression ratios",
        "config": "Enable especially for large tables"
    }
]

# Enable V-Order for these cases
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")
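
As mentioned in the "High write frequency" item above, V-Order can also be pinned at the table level so it applies regardless of session configuration. A sketch using the property name from the Microsoft Fabric docs (verify the exact name against your runtime version):

# Pin V-Order at the table level so every writer applies it
spark.sql("""
    ALTER TABLE orders
    SET TBLPROPERTIES ('delta.parquet.vorder.enabled' = 'true')
""")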

Use Z-Order When

zorder_use_cases = [
    {
        "scenario": "Point lookups on specific columns",
        "reason": "Z-Order creates locality for specific column combinations",
        "example": "SELECT * FROM orders WHERE customer_id = 12345"
    },
    {
        "scenario": "Range queries on known dimensions",
        "reason": "Z-Order helps skip files for range predicates",
        "example": "SELECT * FROM events WHERE date BETWEEN '2024-01-01' AND '2024-01-31' AND region = 'US'"
    },
    {
        "scenario": "Dashboard with fixed filters",
        "reason": "Z-Order on filter columns minimizes data scan",
        "example": "Dashboard always filters by date and region"
    },
    {
        "scenario": "Tables with dominant query patterns",
        "reason": "Optimize for the 80% case",
        "example": "Most queries filter on order_date and status"
    }
]

# Apply Z-Order for specific patterns
spark.sql("""
    OPTIMIZE orders
    ZORDER BY (order_date, customer_id)
""")

Combining V-Order and Z-Order

Strategy 1: V-Order Always, Z-Order Selectively

from delta.tables import DeltaTable

class CombinedOrderingStrategy:
    """Use V-Order as baseline, add Z-Order for hot columns."""

    def __init__(self, spark):
        self.spark = spark
        # Enable V-Order globally
        self.spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

    def write_table(self, df, table_path: str):
        """Write with V-Order enabled."""
        df.write.format("delta") \
            .mode("overwrite") \
            .save(table_path)

    def add_zorder_optimization(
        self,
        table_path: str,
        columns: list[str],
        partition_filter: str = None
    ):
        """Add Z-Order on specific columns."""

        delta_table = DeltaTable.forPath(self.spark, table_path)

        if partition_filter:
            delta_table.optimize() \
                .where(partition_filter) \
                .executeZOrderBy(*columns)
        else:
            delta_table.optimize() \
                .executeZOrderBy(*columns)

    def full_optimization_workflow(
        self,
        df,
        table_path: str,
        zorder_columns: list[str] = None
    ):
        """Complete workflow: V-Order on write, Z-Order if needed."""

        # Step 1: Write with V-Order
        print("Writing with V-Order...")
        self.write_table(df, table_path)

        # Step 2: Apply Z-Order if specific columns matter
        if zorder_columns:
            print(f"Applying Z-Order on {zorder_columns}...")
            self.add_zorder_optimization(table_path, zorder_columns)

        print("Optimization complete")

# Usage
strategy = CombinedOrderingStrategy(spark)

# Simple case: V-Order only
strategy.write_table(reference_data, "Tables/products")

# Complex case: V-Order + Z-Order
strategy.full_optimization_workflow(
    df=transactions_df,
    table_path="Tables/transactions",
    zorder_columns=["transaction_date", "account_id"]
)

Strategy 2: Decision Framework

def choose_ordering_strategy(table_characteristics: dict) -> dict:
    """Decide on ordering strategy based on table characteristics."""

    query_patterns = table_characteristics.get("query_patterns", [])
    direct_lake = table_characteristics.get("direct_lake_source", False)
    write_frequency = table_characteristics.get("writes_per_day", 0)
    dominant_filters = table_characteristics.get("dominant_filter_columns", [])

    recommendation = {
        "vorder": True,  # Always recommend V-Order
        "zorder": False,
        "zorder_columns": [],
        "reasoning": []
    }

    # V-Order reasoning
    if direct_lake:
        recommendation["reasoning"].append(
            "V-Order recommended: Direct Lake source table"
        )
    else:
        recommendation["reasoning"].append(
            "V-Order recommended: General compression benefits"
        )

    # Z-Order decision
    if len(dominant_filters) > 0 and len(dominant_filters) <= 4:
        # Check if queries consistently filter on these columns
        filter_frequency = table_characteristics.get("filter_frequency", {})

        if any(filter_frequency.get(col, 0) > 0.5 for col in dominant_filters):
            recommendation["zorder"] = True
            recommendation["zorder_columns"] = dominant_filters[:4]
            recommendation["reasoning"].append(
                f"Z-Order recommended: {dominant_filters} used in >50% of queries"
            )

    # Warning about maintenance
    if recommendation["zorder"] and write_frequency > 100:
        recommendation["reasoning"].append(
            "WARNING: High write frequency. Consider Z-Order on recent partitions only"
        )

    return recommendation

# Example usage
strategy = choose_ordering_strategy({
    "query_patterns": ["aggregations", "filtered_lookups"],
    "direct_lake_source": True,
    "writes_per_day": 24,
    "dominant_filter_columns": ["order_date", "region"],
    "filter_frequency": {"order_date": 0.8, "region": 0.6}
})

print("Recommendations:")
for reason in strategy["reasoning"]:
    print(f"  - {reason}")

if strategy["zorder"]:
    print(f"  Z-Order columns: {strategy['zorder_columns']}")

Performance Comparison

Benchmark Setup

import time

from delta.tables import DeltaTable

class OrderingBenchmark:
    """Benchmark V-Order vs Z-Order performance."""

    def __init__(self, spark):
        self.spark = spark

    def create_test_tables(self, source_df, base_path: str):
        """Create tables with different ordering strategies."""

        # No optimization
        source_df.write.format("delta") \
            .mode("overwrite") \
            .save(f"{base_path}/no_optimization")

        # V-Order only
        self.spark.conf.set("spark.sql.parquet.vorder.enabled", "true")
        source_df.write.format("delta") \
            .mode("overwrite") \
            .save(f"{base_path}/vorder_only")

        # Z-Order only (on common filter columns)
        self.spark.conf.set("spark.sql.parquet.vorder.enabled", "false")
        source_df.write.format("delta") \
            .mode("overwrite") \
            .save(f"{base_path}/zorder_only")

        delta_table = DeltaTable.forPath(self.spark, f"{base_path}/zorder_only")
        delta_table.optimize().executeZOrderBy("order_date", "region")

        # Both V-Order and Z-Order
        self.spark.conf.set("spark.sql.parquet.vorder.enabled", "true")
        source_df.write.format("delta") \
            .mode("overwrite") \
            .save(f"{base_path}/both")

        delta_table = DeltaTable.forPath(self.spark, f"{base_path}/both")
        delta_table.optimize().executeZOrderBy("order_date", "region")

    def benchmark_query(self, table_path: str, query: str, runs: int = 5) -> float:
        """Benchmark average query time."""

        times = []
        for _ in range(runs):
            self.spark.catalog.clearCache()
            start = time.time()
            self.spark.sql(query.replace("{table}", table_path)).collect()
            times.append(time.time() - start)

        return sum(times[1:]) / (runs - 1)  # Exclude cold run

    def run_full_benchmark(
        self,
        source_df,
        base_path: str,
        test_queries: list[dict]
    ) -> dict:
        """Run full benchmark across all strategies."""

        # Create test tables
        self.create_test_tables(source_df, base_path)

        strategies = ["no_optimization", "vorder_only", "zorder_only", "both"]
        results = {s: {} for s in strategies}

        for query_info in test_queries:
            query_name = query_info["name"]
            query_sql = query_info["sql"]

            for strategy in strategies:
                table_path = f"{base_path}/{strategy}"
                avg_time = self.benchmark_query(
                    f"delta.`{table_path}`",
                    query_sql
                )
                results[strategy][query_name] = avg_time

        return results

# Run benchmark
benchmark = OrderingBenchmark(spark)

test_queries = [
    {
        "name": "full_scan",
        "sql": "SELECT COUNT(*), SUM(amount) FROM {table}"
    },
    {
        "name": "date_filter",
        "sql": "SELECT * FROM {table} WHERE order_date = '2024-01-15'"
    },
    {
        "name": "multi_filter",
        "sql": "SELECT * FROM {table} WHERE order_date BETWEEN '2024-01-01' AND '2024-01-31' AND region = 'US'"
    }
]

results = benchmark.run_full_benchmark(
    source_df=large_sales_df,
    base_path="Files/ordering_benchmark",
    test_queries=test_queries
)

# Print results
print("\nBenchmark Results (seconds):")
print("-" * 60)
for strategy, queries in results.items():
    print(f"\n{strategy}:")
    for query_name, elapsed in queries.items():
        print(f"  {query_name}: {elapsed:.3f}s")

Maintenance Considerations

V-Order Maintenance

# V-Order requires no special maintenance
# It's applied automatically on every write

# However, if you need to reapply to existing data:
def reapply_vorder(table_path: str):
    """Rewrite table with V-Order."""

    spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

    df = spark.read.format("delta").load(table_path)

    df.write.format("delta") \
        .mode("overwrite") \
        .option("dataChange", "false") \
        .save(table_path)

Z-Order Maintenance

from delta.tables import DeltaTable

class ZOrderMaintenance:
    """Maintain Z-Order optimization over time."""

    def __init__(self, spark):
        self.spark = spark

    def needs_reoptimization(
        self,
        table_path: str,
        threshold_files: int = 100
    ) -> bool:
        """Check if table needs Z-Order reapplication."""

        detail = self.spark.sql(f"DESCRIBE DETAIL delta.`{table_path}`").first()
        history = self.spark.sql(
            f"DESCRIBE HISTORY delta.`{table_path}` LIMIT 10"
        )

        # Check if recent OPTIMIZE exists
        recent_optimize = history.filter("operation = 'OPTIMIZE'").count()

        return detail.numFiles > threshold_files or recent_optimize == 0

    def schedule_zorder(
        self,
        table_path: str,
        columns: list[str],
        partition_filter: str = None
    ):
        """Apply Z-Order optimization."""

        delta_table = DeltaTable.forPath(self.spark, table_path)

        if partition_filter:
            # Only Z-Order recent partitions for efficiency
            delta_table.optimize() \
                .where(partition_filter) \
                .executeZOrderBy(*columns)
        else:
            delta_table.optimize() \
                .executeZOrderBy(*columns)

    def maintain_table(
        self,
        table_path: str,
        zorder_columns: list[str],
        recent_partition_filter: str = None
    ):
        """Full maintenance routine."""

        if self.needs_reoptimization(table_path):
            print(f"Reoptimizing {table_path}...")
            self.schedule_zorder(
                table_path,
                zorder_columns,
                recent_partition_filter
            )
        else:
            print(f"{table_path} is well optimized")

# Usage
maintenance = ZOrderMaintenance(spark)

# Maintain with Z-Order on recent data only
maintenance.maintain_table(
    table_path="Tables/transactions",
    zorder_columns=["transaction_date", "customer_id"],
    recent_partition_filter="transaction_date >= date_sub(current_date(), 30)"
)

Best Practices

  1. Always enable V-Order: Low overhead, universal benefits
  2. Add Z-Order for specific needs: Known filter patterns
  3. Limit Z-Order columns: Maximum 3-4 columns
  4. Z-Order recent partitions: Avoid full table rewrites
  5. Monitor query patterns: Adjust Z-Order columns based on actual usage (see the audit sketch below)
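
For the last point, Delta's transaction log records which columns each OPTIMIZE used, which makes a lightweight audit possible. A sketch, assuming the transactions table from earlier:

# List recent OPTIMIZE runs and their parameters (including zOrderBy)
# so they can be compared against the filters queries actually use
history = spark.sql("DESCRIBE HISTORY delta.`Tables/transactions` LIMIT 20")

history.filter("operation = 'OPTIMIZE'") \
    .select("timestamp", "operationParameters") \
    .show(truncate=False)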

Conclusion

V-Order and Z-Order serve different purposes and work well together. Use V-Order as your baseline optimization for all tables, then layer Z-Order on top for tables with specific, known filter patterns.

The combination provides both general compression benefits (V-Order) and targeted data skipping (Z-Order) for optimal query performance across all workloads.

Michael John Peña

Senior Data Engineer based in Sydney. Writing about data, cloud, and technology.