
Achieve 2x faster data lake query performance with Apache Iceberg on Amazon Redshift


With the growing adoption of open table formats like Apache Iceberg, Amazon Redshift continues to advance its capabilities for open format data lakes. In 2025, Amazon Redshift delivered several performance optimizations that improved query performance more than twofold for Iceberg workloads on Amazon Redshift Serverless, delivering exceptional performance and cost-effectiveness for your data lake workloads.

In this post, we describe some of the optimizations that led to these performance gains. Data lakes have become a foundation of modern analytics, helping organizations store vast amounts of structured and semi-structured data in cost-effective data formats like Apache Parquet while maintaining flexibility through open table formats. This architecture creates unique performance optimization opportunities across the entire query processing pipeline.

Performance improvements

Our latest improvements span several areas of the Amazon Redshift SQL query processing engine, including vectorized scanners that accelerate execution, optimal query plans powered by just-in-time (JIT) runtime statistics, distributed Bloom filters, and new decorrelation rules.

The following chart summarizes the performance improvements achieved so far in 2025, as measured by industry-standard 10 TB TPC-DS and TPC-H benchmarks run on Iceberg tables on an 88 RPU Redshift Serverless endpoint.

Find the best performance for your workloads

The performance results presented in this post are based on benchmarks derived from the industry-standard TPC-DS and TPC-H benchmarks, and have the following characteristics:

  • The schema and data of Iceberg tables are used unmodified from TPC-DS. Tables are partitioned to reflect real-world data organization patterns.
  • The queries are generated using the official TPC-DS and TPC-H kits, with query parameters generated using the default random seed of the kits.
  • The TPC-DS test includes all 99 TPC-DS SELECT queries. It does not include maintenance and throughput steps. The TPC-H test includes all 22 TPC-H SELECT queries.
  • Benchmarks are run out of the box: no manual tuning or statistics collection is done for the workloads.

In the following sections, we discuss key performance improvements delivered in 2025.

Faster data lake scans

To improve data lake read performance, the Amazon Redshift team built an entirely new scan layer designed from the ground up for data lakes. This new scan layer features a purpose-built I/O subsystem, incorporating smart prefetch capabilities to reduce data latency. Additionally, the new scan layer is optimized for processing Apache Parquet files, the most commonly used file format for Iceberg, through fast vectorized scans.

This new scan layer also includes sophisticated data pruning mechanisms that operate at both the partition and file levels, dramatically reducing the volume of data that needs to be scanned. This pruning capability works in harmony with the smart prefetch system, creating a coordinated approach that maximizes efficiency throughout the entire data retrieval process.
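To make file-level pruning concrete, here is a minimal sketch of min/max pruning driven by file metadata, the kind of filtering an Iceberg-aware scan layer can perform before reading any Parquet data. The `FileStats` shape and the range predicate are hypothetical simplifications for illustration, not the actual Redshift or Iceberg internals.

```python
# Illustrative sketch: skip data files whose column min/max ranges
# cannot possibly satisfy the query predicate.
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    min_value: int  # column minimum recorded in file metadata
    max_value: int  # column maximum recorded in file metadata

def prune_files(files, lo, hi):
    """Keep only files whose [min, max] range can overlap lo <= col <= hi."""
    return [f for f in files if f.max_value >= lo and f.min_value <= hi]

files = [
    FileStats("part-000.parquet", 1, 100),
    FileStats("part-001.parquet", 101, 200),
    FileStats("part-002.parquet", 201, 300),
]

# A query filtering on col BETWEEN 120 AND 180 only needs the second file;
# the other two are skipped without any I/O.
survivors = prune_files(files, 120, 180)
print([f.path for f in survivors])
```

Because the decision uses only metadata, pruned files incur no read cost at all, which is why this kind of filtering compounds so well with prefetching.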

JIT ANALYZE for Iceberg tables

Unlike traditional data warehouses, data lakes often lack comprehensive table- and column-level statistics about the underlying data, making it challenging for the planner and optimizer in the query engine to choose up front which execution plan will be most optimal. Sub-optimal plans can lead to slower and less predictable performance.

JIT ANALYZE is a new Amazon Redshift feature that automatically collects and uses statistics for Iceberg tables during query execution, minimizing manual statistics collection while giving the planner and optimizer in the query engine the information they need to generate optimal query plans. The system uses intelligent heuristics to identify queries that can benefit from statistics, performs fast file-level sampling using Iceberg metadata, and extrapolates population statistics using advanced techniques.
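The sample-then-extrapolate idea can be sketched in a few lines. The sampling strategy and estimator below are illustrative assumptions in the spirit of what the post describes, not Redshift's actual implementation; the per-file row counts stand in for what Iceberg metadata already records.

```python
# Illustrative sketch: sample a fraction of a table's data files and
# scale the observed totals up to an estimate for the whole table.
import random

def estimate_row_count(file_row_counts, sample_fraction=0.1, seed=42):
    """Sample a fraction of files and extrapolate the table row count."""
    rng = random.Random(seed)
    k = max(1, int(len(file_row_counts) * sample_fraction))
    sample = rng.sample(file_row_counts, k)
    # Scale the sample mean by the total number of files.
    return int(sum(sample) / k * len(file_row_counts))

# 1,000 files of 10,000 rows each: the estimate lands on 10 million
# after inspecting only 100 files' worth of metadata.
counts = [10_000] * 1000
print(estimate_row_count(counts))
```

The appeal is that the cost is proportional to the sample, not the table, which is what makes collecting statistics at query time affordable.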

JIT ANALYZE delivers out-of-the-box performance nearly equal to queries that have pre-calculated statistics, while providing the foundation for many other performance optimizations. Some TPC-DS queries ran 50 times faster with these statistics.

Query optimizations

For correlated subqueries such as those that contain EXISTS/IN clauses, Amazon Redshift uses decorrelation rules to rewrite the queries. In many cases, these decorrelation rules were not producing optimal plans, resulting in query execution performance regressions. To address this, we introduced a new internal join type, SEMI JOIN, and a new decorrelation rule based on this join type. This decorrelation rule helps produce the most optimal plans, thereby improving execution performance. For instance, one of the TPC-DS queries that contains an EXISTS clause ran 7 times faster with this optimization.
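The semantics of a semi join can be illustrated with a small sketch: each outer row is emitted at most once if any matching inner row exists, which is exactly what an EXISTS predicate asks for. This is a conceptual illustration of the operator's behavior, not the engine's internal implementation; the customer/order tables are made up for the example.

```python
# Illustrative sketch: an EXISTS subquery executed as a hash semi join.
def semi_join(outer_rows, inner_keys, key):
    """Return outer rows whose key appears on the inner side. Unlike an
    inner join, a row is never duplicated when there are multiple matches."""
    inner_set = set(inner_keys)  # build side: hash table of inner join keys
    return [row for row in outer_rows if row[key] in inner_set]

customers = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
    {"id": 3, "name": "carol"},
]
# Customer 1 placed two orders but appears only once in the result,
# matching SELECT ... WHERE EXISTS (SELECT 1 FROM orders o WHERE o.id = c.id).
order_customer_ids = [1, 1, 3]

print([c["name"] for c in semi_join(customers, order_customer_ids, "id")])
```

Because the semi join can stop probing after the first match and never inflates the outer side, it avoids the duplicate-then-deduplicate work that a naive decorrelation into an inner join would require.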

We introduced a distributed Bloom filter optimization for data lake workloads. Distributed Bloom filters are built locally on each compute node and then distributed to every other node. Distributing Bloom filters can significantly reduce the amount of data that needs to be sent over the network for the join by filtering out tuples earlier. This provides strong performance gains for large, complex data lake queries that process and join large amounts of data.
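A toy Bloom filter shows why this works: the build side of a join can summarize its keys in a compact bit array, and probe-side tuples that fail the filter can be dropped before they are shuffled over the network. The bit-array size, hash construction, and key values below are illustrative choices, not what Redshift uses.

```python
# Illustrative sketch: a Bloom filter pruning probe-side join tuples.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # the whole filter is a single compact bitset

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # False means definitely absent; True means possibly present.
        return all(self.bits & (1 << pos) for pos in self._positions(item))

# Build side: join keys from the smaller table, summarized once
# and cheap enough to broadcast to every node.
bf = BloomFilter()
for key in [10, 20, 30]:
    bf.add(key)

# Probe side: tuples whose keys fail the filter are dropped locally,
# before any network shuffle for the join.
probe_keys = [10, 15, 20, 99]
shuffled = [k for k in probe_keys if bf.might_contain(k)]
print(shuffled)
```

Bloom filters never produce false negatives, so no matching tuple is ever lost; occasional false positives merely pass a few extra tuples through to the join, which then discards them.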

Conclusion

These performance improvements for Iceberg workloads represent a major leap forward in Redshift data lake capabilities. By focusing on out-of-the-box performance, we've made it easy to achieve exceptional query performance without complex tuning or optimization.

These improvements demonstrate the power of deep technical innovation combined with a practical customer focus. JIT ANALYZE reduces the operational burden of statistics management while providing optimal query planning information. The new Redshift data lake query engine on Redshift Serverless was rewritten from the ground up for best-in-class scan performance, and lays the groundwork for more advanced performance optimizations. Semi-join optimizations address some of the most challenging query patterns in analytical workloads. You can run complex analytical workloads on your Iceberg data and get fast, predictable query performance.

Amazon Redshift is committed to being the best analytics engine for data lake workloads, and these performance optimizations represent our continued investment in that goal.

To learn more about Amazon Redshift and its performance capabilities, visit the Amazon Redshift product page. To get started with Redshift, you can try Amazon Redshift Serverless and start querying data in minutes without having to set up and manage data warehouse infrastructure. For more details on performance best practices, see the Amazon Redshift Database Developer Guide. To stay up to date with the latest developments in Amazon Redshift, subscribe to the What's New in Amazon Redshift RSS feed.


Special thanks to this post's contributors: Martin Milenkoski, Gerard Louw, Konrad Werblinski, Mengchu Cai, Mehmet Bulut, Mohammed Alkateb, and Sanket Hase
