The Amazon EMR runtime for Apache Spark is a performance-optimized runtime for Apache Spark that is 100% API compatible with open source Apache Spark. With Amazon EMR release 7.9.0, the EMR runtime for Apache Spark introduces significant performance improvements for encrypted workloads, supporting Spark version 3.5.5.
For compliance and security requirements, many customers need to enable Apache Spark's native storage encryption (spark.io.encryption.enabled = true) in addition to Amazon Simple Storage Service (Amazon S3) encryption (such as server-side encryption (SSE) or AWS Key Management Service (AWS KMS)). This feature encrypts shuffle files, cached data, and other intermediate data written to local disk during Spark operations, protecting sensitive data at rest on Amazon EMR cluster instances.
Industries subject to regulations such as the Health Insurance Portability and Accountability Act (HIPAA) for healthcare, the Payment Card Industry Data Security Standard (PCI DSS) for financial services, the General Data Protection Regulation (GDPR) for personal data, and the Federal Risk and Authorization Management Program (FedRAMP) for government often require encryption of all data at rest, including temporary files on local storage. While Amazon S3 encryption protects data in object storage, Spark's I/O encryption secures the intermediate shuffle and spill data that Spark writes to local disk during distributed processing: data that never reaches Amazon S3 but can contain sensitive information extracted from source datasets. Generally, encrypted operations require additional computational overhead that can affect overall job performance.
With the built-in encryption optimizations of Amazon EMR 7.9.0, customers can see significant performance improvements in their Apache Spark applications without requiring any application changes. In our performance benchmark tests, derived from TPC-DS performance tests at 3 TB scale, we observed up to 20% faster performance with the EMR 7.9 optimized Spark runtime compared to Spark without these optimizations. Individual results may vary depending on specific workloads and configurations.
In this post, we analyze the results from our benchmark tests comparing the Amazon EMR 7.9 optimized Spark runtime against Spark 3.5.5 without encryption optimizations. We walk through a detailed cost analysis and provide step-by-step instructions to reproduce the benchmark.
Results observed
To evaluate the performance improvements, we used an open source Spark performance test application derived from the TPC-DS performance test toolkit. We ran the tests on two nine-node (eight core nodes and one primary node) r5d.4xlarge Amazon EMR 7.9.0 clusters, comparing two configurations:
- Baseline: EMR 7.9.0 cluster with a bootstrap action installing Spark 3.5.5 without encryption optimizations
- Optimized: EMR 7.9.0 cluster using the EMR Spark 3.5.5 runtime with encryption optimizations
Both tests used data stored in Amazon Simple Storage Service (Amazon S3). All data processing was configured identically except for the Spark runtime version.
To ensure a fair, consistent comparison, we disabled Dynamic Resource Allocation (DRA) in both test configurations. This approach eliminates variability from dynamic scaling so that we can measure pure computational performance improvements.
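For reference, both of these settings map to standard Spark properties that can be supplied through the EMR configurations API at cluster creation. The snippet below is a minimal sketch reflecting this benchmark's setup (the `spark-defaults` classification and both property keys are standard):

```json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.io.encryption.enabled": "true",
      "spark.dynamicAllocation.enabled": "false"
    }
  }
]
```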
The following table shows the total job runtime for all queries (in seconds) in the 3 TB query dataset between the baseline and Amazon EMR 7.9 optimized configurations:
| Configuration | Total runtime (seconds) | Geometric mean (seconds) | Performance improvement |
| --- | --- | --- | --- |
| Baseline (Spark 3.5.5 without optimization) | 1,485 | 10.24 | |
| EMR 7.9 (with encryption optimization) | 1,176 | 8.15 | 20% faster |
We observed that our TPC-DS tests with the Amazon EMR 7.9 optimized Spark runtime completed about 20% faster based on total runtime and 20% faster based on geometric mean compared to the baseline configuration.
The encryption optimizations in Amazon EMR 7.9 deliver performance benefits through:
- Improved shuffle encryption and decryption operations that reduce overhead during data exchange without compromising security
- Better memory management for intermediate results
Cost analysis
The performance improvements of the Amazon EMR 7.9 optimized Spark runtime directly translate to lower costs. We realized approximately 20% cost savings running the benchmark application with encryption optimizations compared to the baseline configuration, thanks to reduced hours of Amazon EMR, Amazon Elastic Compute Cloud (Amazon EC2), and Amazon Elastic Block Store (Amazon EBS) using General Purpose SSD (gp2).
The following table summarizes the cost comparison in the us-east-1 AWS Region:
| Configuration | Runtime (hours) | Estimated cost | Total EC2 instances | Total vCPU | Total memory (GiB) | Root device (EBS) |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline: Spark 3.5.5 without optimization, 1 primary and 8 core nodes | 0.41 | $5.28 | 9 | 144 | 1152 | 64 GiB gp2 |
| Amazon EMR 7.9 with optimization, 1 primary and 8 core nodes | 0.33 | $4.25 | 9 | 144 | 1152 | 64 GiB gp2 |
Cost breakdown
Formulas used:
- Amazon EMR cost: Number of instances × EMR hourly rate × Runtime hours
- Amazon EC2 cost: Number of instances × EC2 hourly rate × Runtime hours
- Amazon EBS cost: (EBS cost per GB per month ÷ hours in a month) × EBS volume size × number of instances × runtime hours
Note: EBS is priced monthly ($0.10 per GB per month), so we divide by 730 hours to convert to an hourly rate. EMR and EC2 are already priced hourly, so no conversion is needed.
Baseline configuration (0.41 hours):
- Amazon EMR cost: 9 × $0.27 × 0.41 = $1.00
- Amazon EC2 cost: 9 × $1.152 × 0.41 = $4.25
- Amazon EBS cost: ($0.10/730 × 64 × 9 × 0.41) = $0.032
- Total cost: $5.28
EMR 7.9 optimized configuration (0.33 hours):
- Amazon EMR cost: (9 × $0.27 × 0.33) = $0.80
- Amazon EC2 cost: (9 × $1.152 × 0.33) = $3.42
- Amazon EBS cost: ($0.10/730 × 64 × 9 × 0.33) = $0.026
- Total cost: $4.25
Total cost savings: 20% per benchmark run, which scales linearly with your production workload frequency.
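As a sanity check, the formulas above can be replayed in a few lines of awk. The rates are the us-east-1 prices quoted in the breakdown; this is only a sketch of the arithmetic, not a pricing tool:

```shell
# Reproduce the cost breakdown: 9 r5d.4xlarge nodes, us-east-1 rates from above
awk 'BEGIN {
  nodes = 9; emr_rate = 0.27; ec2_rate = 1.152   # $/instance-hour (EMR fee, EC2 on-demand)
  ebs_gb_month = 0.10; ebs_gb = 64; hrs = 730    # gp2 $/GB-month, volume size, hours/month
  base_h = 0.41; opt_h = 0.33                    # measured runtimes
  base = nodes * (emr_rate + ec2_rate) * base_h + ebs_gb_month / hrs * ebs_gb * nodes * base_h
  opt  = nodes * (emr_rate + ec2_rate) * opt_h  + ebs_gb_month / hrs * ebs_gb * nodes * opt_h
  printf "baseline: $%.2f  optimized: $%.2f  savings: %.0f%%\n", base, opt, (base - opt) / base * 100
}'
# baseline: $5.28  optimized: $4.25  savings: 20%
```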
Set up EMR benchmarking
For detailed instructions and scripts, see the companion GitHub repository.
Prerequisites
To set up Amazon EMR benchmarking, first complete the following prerequisite steps:
- Configure your AWS Command Line Interface (AWS CLI) by running `aws configure` to point to your benchmarking account.
- Create an S3 bucket for test data and results.
- Copy the TPC-DS 3 TB source data from a publicly available dataset to your S3 bucket. Replace `<YOUR-BUCKET-NAME>` with the name of the S3 bucket you created in step 2.
- Build or download the benchmark application JAR file (spark-benchmark-assembly-3.3.0.jar).
- Ensure you have appropriate AWS Identity and Access Management (IAM) roles for EMR cluster creation and Amazon S3 access.
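A copy along the following lines stages the dataset. The public source location below is a placeholder (take the actual path from the companion GitHub repository), and `<YOUR-BUCKET-NAME>` is the bucket from step 2:

```shell
# Sketch: stage the TPC-DS 3 TB source data into your own bucket.
# <PUBLIC-SOURCE-BUCKET> is a placeholder for the public dataset location.
aws s3 sync s3://<PUBLIC-SOURCE-BUCKET>/BLOG_TPCDS-TEST-3T-partitioned/ \
  s3://<YOUR-BUCKET-NAME>/blog/BLOG_TPCDS-TEST-3T-partitioned/
```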
Deploy the baseline EMR cluster (without optimization)
Step 1: Launch the EMR 7.9.0 cluster with a bootstrap action
The baseline configuration uses a bootstrap action to install Spark 3.5.5 without encryption optimizations. We have made the bootstrap script publicly available in an S3 bucket for your convenience.
Create the default Amazon EMR roles:
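This is a single AWS CLI call (it creates `EMR_DefaultRole` and `EMR_EC2_DefaultRole` if they don't already exist):

```shell
aws emr create-default-roles
```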
Now create the cluster:
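A `create-cluster` call along the following lines matches the configuration described in this post; the subnet ID and log bucket are placeholders for your environment:

```shell
aws emr create-cluster \
  --name "EMR-7.9-Baseline-Spark-3.5.5" \
  --release-label emr-7.9.0 \
  --applications Name=Spark \
  --instance-groups \
      InstanceGroupType=MASTER,InstanceCount=1,InstanceType=r5d.4xlarge \
      InstanceGroupType=CORE,InstanceCount=8,InstanceType=r5d.4xlarge \
  --bootstrap-actions Path=s3://spark-ba/install-spark-3-5-5-no-encryption.sh \
  --use-default-roles \
  --ec2-attributes SubnetId=<YOUR-SUBNET-ID> \
  --log-uri s3://<YOUR-BUCKET-NAME>/logs/
```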
Note: The bootstrap script is available in a public S3 bucket at s3://spark-ba/install-spark-3-5-5-no-encryption.sh. This script installs Apache Spark 3.5.5 without the encryption optimizations present in the Amazon EMR runtime.
Step 2: Submit the benchmark job to the baseline cluster
Next, submit the Spark job using the following commands:
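One way to submit the step is with a JSON step file, sketched below from the parameter list explained later in this post. The cluster ID is a placeholder, and the elided query list (`...`) stands in for the full set of 104 queries:

```shell
# Write the step definition, then submit it to the baseline cluster
cat > baseline-step.json <<'EOF'
[
  {
    "Type": "Spark",
    "Name": "EMR-7.9-Baseline-Spark-3.5.5",
    "ActionOnFailure": "CONTINUE",
    "Args": [
      "--deploy-mode", "client",
      "--class", "com.amazonaws.eks.tpcds.BenchmarkSQL",
      "s3://<YOUR-BUCKET-NAME>/jar/spark-benchmark-assembly-3.3.0.jar",
      "s3://<YOUR-BUCKET-NAME>/blog/BLOG_TPCDS-TEST-3T-partitioned",
      "s3://<YOUR-BUCKET-NAME>/blog/BASELINE_TPCDS-TEST-3T-RESULT",
      "/opt/tpcds-kit/tools",
      "parquet", "3000", "3", "false",
      "q1-v2.4,q10-v2.4,...,ss_max-v2.4",
      "true"
    ]
  }
]
EOF
aws emr add-steps --cluster-id <BASELINE-CLUSTER-ID> --steps file://baseline-step.json
```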
Deploy the optimized EMR cluster (with encryption optimization)
Step 1: Launch the EMR 7.9.0 cluster with the Spark runtime
The optimized configuration uses the EMR 7.9.0 Spark runtime without any bootstrap actions:
Example:
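A sketch of the equivalent `create-cluster` call; it is identical to the baseline launch except that no bootstrap action is supplied, so the cluster runs the stock EMR 7.9.0 Spark runtime (subnet ID and log bucket are placeholders):

```shell
aws emr create-cluster \
  --name "EMR-7.9-Optimized" \
  --release-label emr-7.9.0 \
  --applications Name=Spark \
  --instance-groups \
      InstanceGroupType=MASTER,InstanceCount=1,InstanceType=r5d.4xlarge \
      InstanceGroupType=CORE,InstanceCount=8,InstanceType=r5d.4xlarge \
  --use-default-roles \
  --ec2-attributes SubnetId=<YOUR-SUBNET-ID> \
  --log-uri s3://<YOUR-BUCKET-NAME>/logs/
```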
Step 2: Submit the benchmark job to the optimized cluster
Next, submit the Spark job using the following commands:
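The submission mirrors the baseline step, pointed at the optimized cluster. The `OPTIMIZED_TPCDS-TEST-3T-RESULT` output prefix is an assumed name for illustration (only the baseline result path is named in this post):

```shell
cat > optimized-step.json <<'EOF'
[
  {
    "Type": "Spark",
    "Name": "EMR-7.9-Optimized",
    "ActionOnFailure": "CONTINUE",
    "Args": [
      "--deploy-mode", "client",
      "--class", "com.amazonaws.eks.tpcds.BenchmarkSQL",
      "s3://<YOUR-BUCKET-NAME>/jar/spark-benchmark-assembly-3.3.0.jar",
      "s3://<YOUR-BUCKET-NAME>/blog/BLOG_TPCDS-TEST-3T-partitioned",
      "s3://<YOUR-BUCKET-NAME>/blog/OPTIMIZED_TPCDS-TEST-3T-RESULT",
      "/opt/tpcds-kit/tools",
      "parquet", "3000", "3", "false",
      "q1-v2.4,q10-v2.4,...,ss_max-v2.4",
      "true"
    ]
  }
]
EOF
aws emr add-steps --cluster-id <OPTIMIZED-CLUSTER-ID> --steps file://optimized-step.json
```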
Benchmark command parameters explained
The Amazon EMR Spark step uses the following parameters:
- EMR step configuration:
  - Type=Spark: Specifies this is a Spark application step
  - Name="EMR-7.9-Baseline-Spark-3.5.5": Human-readable name for the step
  - ActionOnFailure=CONTINUE: Continue with other steps if this one fails
- Spark submit arguments:
  - --deploy-mode client: Run the driver on the primary node (not cluster mode)
  - --class com.amazonaws.eks.tpcds.BenchmarkSQL: Main class for the TPC-DS benchmark
- Application parameters:
  - JAR file: s3://<YOUR-BUCKET-NAME>/jar/spark-benchmark-assembly-3.3.0.jar
  - Input data: s3://<YOUR-BUCKET-NAME>/blog/BLOG_TPCDS-TEST-3T-partitioned (3 TB TPC-DS dataset)
  - Output location: s3://<YOUR-BUCKET-NAME>/blog/BASELINE_TPCDS-TEST-3T-RESULT (S3 path for results)
  - TPC-DS tools path: /opt/tpcds-kit/tools (local path on EMR nodes)
  - Format: parquet (output format)
  - Scale factor: 3000 (3 TB dataset size)
  - Iterations: 3 (run each query 3 times for averaging)
  - Collect results: false (don't collect results to the driver)
  - Query list: "q1-v2.4,q10-v2.4,...,ss_max-v2.4" (all 104 TPC-DS queries)
  - Final parameter: true (enable detailed logging and metrics)
- Query coverage:
  - All 104 standard TPC-DS benchmark queries (q1-v2.4 through q99-v2.4)
  - Plus the ss_max-v2.4 query for additional testing
  - Each query runs 3 times to calculate average performance
Summarize the results
- Download the test result files from both output S3 locations.
- The CSV files contain four columns (without headers):
  - Query name
  - Median time (seconds)
  - Minimum time (seconds)
  - Maximum time (seconds)
- Calculate performance metrics for comparison:
  - Average time per query: AVERAGE(median, min, max) for each query
  - Total runtime: Sum of all median times
  - Geometric mean: GEOMEAN(average times) across all queries
  - Speedup: Calculate the ratio between baseline and optimized for each query
- Create a comparison analysis:
Speedup = (Baseline Time - Optimized Time) / Baseline Time * 100%
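Applied to the geometric means from the results table (10.24 s baseline, 8.15 s optimized), the formula gives roughly the 20% figure reported above:

```shell
# Speedup computed from the geometric means reported earlier in this post
awk 'BEGIN { printf "speedup: %.1f%%\n", (10.24 - 8.15) / 10.24 * 100 }'
# speedup: 20.4%
```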
Testing configuration details
The following table summarizes the test environment used for this post:
| Parameter | Value |
| --- | --- |
| EMR release | emr-7.9.0 (both configurations) |
| Baseline Spark version | 3.5.5 (installed via bootstrap action) |
| Baseline bootstrap script | s3://spark-ba/install-spark-3-5-5-no-encryption.sh (public) |
| Optimized Spark version | Amazon EMR Spark runtime |
| Cluster size | 9 nodes (1 primary and 8 core) |
| Instance type | r5d.4xlarge |
| vCPUs per node | 16 |
| Memory per node | 128 GB |
| Instance storage | 600 GB SSD |
| EBS volume | 64 GB gp2 (2 volumes per instance) |
| Total vCPUs | 144 (9 × 16) |
| Total memory | 1152 GB (9 × 128) |
| Dataset | TPC-DS 3 TB (Parquet format) |
| Queries | 104 queries (TPC-DS v2.4) |
| Iterations | 3 runs per query |
| DRA | Disabled for consistent benchmarking |
Clean up
To avoid incurring future charges, delete the resources you created:
- Terminate both EMR clusters.
- Delete the S3 test results if they are no longer needed.
- Remove IAM roles if they were created specifically for testing.
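The first two cleanup items can be done from the AWS CLI; cluster IDs and the bucket name are placeholders:

```shell
# Terminate both benchmark clusters
aws emr terminate-clusters --cluster-ids <BASELINE-CLUSTER-ID> <OPTIMIZED-CLUSTER-ID>

# Remove the baseline benchmark results once they're no longer needed
aws s3 rm s3://<YOUR-BUCKET-NAME>/blog/BASELINE_TPCDS-TEST-3T-RESULT/ --recursive
```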
Key findings
- Up to 20% performance improvement using the Amazon EMR 7.9 Spark runtime, with no code changes required
- 20% cost savings because of reduced runtime
- Significant gains for shuffle-heavy, join-intensive workloads
- 100% API compatibility with open source Apache Spark
- Simple migration from custom Spark builds to the EMR runtime
- Easy benchmarking using publicly available bootstrap scripts
Conclusion
You can run your Apache Spark workloads up to 20% faster and at lower cost, without making any changes to your applications, by using the Amazon EMR 7.9.0 optimized Spark runtime. This improvement is achieved through numerous optimizations in the EMR Spark runtime, including enhanced encryption handling, improved data serialization, and optimized shuffle operations.
To learn more about Amazon EMR 7.9 and best practices, see the EMR documentation. For configuration guidance and tuning advice, subscribe to the AWS Big Data Blog.
Related resources:
If you're running Spark workloads on Amazon EMR today, we encourage you to test the EMR 7.9 Spark runtime with your production workloads and measure the improvements specific to your use case.
