
Introducing the Apache Spark troubleshooting agent for Amazon EMR and AWS Glue


The newly launched Apache Spark troubleshooting agent can eliminate hours of manual investigation for data engineers and scientists working with Amazon EMR or AWS Glue. Instead of navigating multiple consoles, sifting through extensive log files, and manually analyzing performance metrics, you can now diagnose Spark failures using simple natural language prompts. The agent automatically analyzes your workloads and delivers actionable recommendations, transforming a time-consuming troubleshooting process into a streamlined, efficient experience.

In this post, we show you how the Apache Spark troubleshooting agent helps analyze Apache Spark issues by providing detailed root causes and actionable recommendations. You'll learn how to streamline your troubleshooting workflow by integrating this agent with your existing monitoring solutions across Amazon EMR and AWS Glue.

Apache Spark powers critical ETL pipelines, real-time analytics, and machine learning workloads across thousands of organizations. However, building and maintaining Spark applications remains an iterative process in which developers spend significant time troubleshooting. Spark application developers encounter operational challenges for a few different reasons:

  • Complex connectivity and configuration options for a wide variety of data sources – Although this flexibility makes Spark a popular data processing platform, it often makes it challenging to find the root cause of inefficiencies or failures when Spark configurations aren't set optimally or correctly.
  • Spark's in-memory processing model and distributed partitioning of datasets across its workers – Although good for parallelism, this often makes it difficult for users to identify inefficiencies that result in slow application execution, or to find the root cause of failures caused by resource exhaustion issues such as out-of-memory and disk exceptions.
  • Lazy evaluation of Spark transformations – Although lazy evaluation optimizes performance, it makes it challenging to accurately and quickly identify the application code and logic that triggered a failure from the distributed logs and metrics emitted by different executors.

Apache Spark troubleshooting agent architecture

This section describes the components of the troubleshooting agent and how they connect to your development environment. The troubleshooting agent provides a single conversational entry point for your Spark applications across Amazon EMR, AWS Glue, and Amazon SageMaker Notebooks. Instead of navigating different consoles, APIs, and log locations for each service, you interact with one Model Context Protocol (MCP) server through natural language, using any MCP-compatible AI assistant of your choice, including custom agents you develop with frameworks such as Strands Agents.

Running as a fully managed, cloud-hosted MCP server, the agent removes the need to maintain local servers while keeping your data and code isolated and secure in a single-tenant design. Operations are read-only and backed by AWS Identity and Access Management (IAM) permissions; the agent only has access to the resources and actions your IAM role grants. Additionally, tool calls are automatically logged to AWS CloudTrail, providing full auditability and compliance visibility. This combination of managed infrastructure, granular IAM controls, and CloudTrail integration keeps your Spark diagnostic workflows secure, compliant, and fully auditable.

The agent builds on years of AWS experience running millions of Spark applications at scale. It automatically analyzes Spark History Server data, distributed executor logs, configuration patterns, and error stack traces, extracting relevant features and signals to surface insights that would otherwise require manual correlation across multiple data sources and a deep understanding of Spark and service internals.

Getting started

Complete the following steps to get started with the Apache Spark troubleshooting agent.

Prerequisites

Verify that you meet the following prerequisites.

System requirements:

  • Python 3.10 or higher
  • The uv package manager. For instructions, see Installing uv.
  • AWS Command Line Interface (AWS CLI) version 2.30.0 or later, installed and configured with appropriate credentials.

IAM permissions: Your AWS IAM profile needs permissions to invoke the MCP server and access your Spark workload resources. The AWS CloudFormation template in the setup documentation creates an IAM role with the required permissions. You can also add the required IAM permissions manually.

Set up using AWS CloudFormation

First, deploy the AWS CloudFormation template provided in the setup documentation. This template automatically creates the IAM roles with the permissions required to invoke the MCP server.

  1. Deploy the template in the same AWS Region you run your workloads in. For this post, we use us-east-1.
  2. From the AWS CloudFormation Outputs tab, copy and run the environment variable command:
    export SMUS_MCP_REGION=us-east-1 && export IAM_ROLE=arn:aws:iam::111122223333:role/spark-troubleshooting-role-xxxxxx

  3. Configure your AWS CLI profile:
    aws configure set profile.smus-mcp-profile.role_arn ${IAM_ROLE}
    aws configure set profile.smus-mcp-profile.source_profile default
    aws configure set profile.smus-mcp-profile.region ${SMUS_MCP_REGION}
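The three `aws configure set` commands above write an assume-role profile into `~/.aws/config`. As a quick sanity check, the following sketch (account ID and role-name suffix are illustrative placeholders, not real values) parses the stanza those commands produce and confirms the expected keys are present:

```python
import configparser

# The stanza the commands above end up writing to ~/.aws/config
# (account ID and role suffix below are illustrative placeholders).
stanza = """
[profile smus-mcp-profile]
role_arn = arn:aws:iam::111122223333:role/spark-troubleshooting-role-xxxxxx
source_profile = default
region = us-east-1
"""

cfg = configparser.ConfigParser()
cfg.read_string(stanza)
profile = cfg["profile smus-mcp-profile"]

# The MCP proxy assumes role_arn via source_profile, in the given region.
assert profile["source_profile"] == "default"
assert profile["role_arn"].startswith("arn:aws:iam::")
print(profile["region"])  # us-east-1
```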

Set up using Kiro CLI

You can use Kiro CLI to interact with the Apache Spark troubleshooting agent directly from your terminal.

Installation and configuration:

  1. Install Kiro CLI.
  2. Add both MCP servers, using the environment variables from the previous Set up using AWS CloudFormation section:
    # Add Spark Troubleshooting MCP Server
    kiro-cli-chat mcp add \
        --name "sagemaker-unified-studio-mcp-troubleshooting" \
        --command "uvx" \
        --args "[\"mcp-proxy-for-aws@latest\",\"https://sagemaker-unified-studio-mcp.${SMUS_MCP_REGION}.api.aws/spark-troubleshooting/mcp\", \"--service\", \"sagemaker-unified-studio-mcp\", \"--profile\", \"smus-mcp-profile\", \"--region\", \"${SMUS_MCP_REGION}\", \"--read-timeout\", \"180\"]" \
        --timeout 180000 \
        --scope global
    # Add Spark Code Recommendation MCP Server
    kiro-cli-chat mcp add \
        --name "sagemaker-unified-studio-mcp-code-rec" \
        --command "uvx" \
        --args "[\"mcp-proxy-for-aws@latest\",\"https://sagemaker-unified-studio-mcp.${SMUS_MCP_REGION}.api.aws/spark-code-recommendation/mcp\", \"--service\", \"sagemaker-unified-studio-mcp\", \"--profile\", \"smus-mcp-profile\", \"--region\", \"${SMUS_MCP_REGION}\", \"--read-timeout\", \"180\"]" \
        --timeout 180000 \
        --scope global

  3. Verify your setup by running the /tools command in Kiro CLI to see the available Apache Spark troubleshooting tools.

Set up using Kiro IDE

Kiro IDE provides a visual development environment with built-in AI assistance for interacting with the Apache Spark troubleshooting agent.

Installation and configuration:

  1. Install Kiro IDE.
  2. MCP configuration is shared between Kiro CLI and Kiro IDE. Open the command palette using Ctrl + Shift + P (Windows/Linux) or Cmd + Shift + P (macOS) and search for Kiro: Open MCP Config.
  3. Verify that the contents of your mcp.json match the Set up using Kiro CLI section.
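After registration, mcp.json should contain entries equivalent to the two `kiro-cli-chat mcp add` commands. The following is a sketch of the expected shape for the troubleshooting server only, with the region from the earlier export substituted in; the exact field names may differ slightly across Kiro versions, so treat this as illustrative rather than authoritative:

```json
{
  "mcpServers": {
    "sagemaker-unified-studio-mcp-troubleshooting": {
      "command": "uvx",
      "args": [
        "mcp-proxy-for-aws@latest",
        "https://sagemaker-unified-studio-mcp.us-east-1.api.aws/spark-troubleshooting/mcp",
        "--service", "sagemaker-unified-studio-mcp",
        "--profile", "smus-mcp-profile",
        "--region", "us-east-1",
        "--read-timeout", "180"
      ],
      "timeout": 180000
    }
  }
}
```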

Using the troubleshooting agent

Next, we provide three reference architectures for using the troubleshooting agent within your existing workflows. We also provide the reference code and AWS CloudFormation templates for these architectures in the Amazon EMR Utilities GitHub repository.

Solution 1 – Conversational troubleshooting: Troubleshoot failed Apache Spark applications with Kiro CLI

When Spark applications fail across your data platform, debugging typically involves navigating different consoles for Amazon EMR-EC2, Amazon EMR Serverless, and AWS Glue, manually reviewing Spark History Server logs, checking error stack traces, analyzing resource utilization patterns, and then correlating this information to find the root cause and a fix. The Apache Spark troubleshooting agent automates this entire workflow through natural language, providing a unified troubleshooting experience across the three platforms. Simply describe your failed applications, for example:

# Amazon EMR-EC2
Debug my failing Amazon EMR-EC2 step. Cluster id: 'j-xxxxx' Step id: 's-xxxxx'
# Amazon EMR Serverless
Troubleshoot my Amazon EMR Serverless job. Application id: 'xxxxx' Job run id: 'xxxxx'
# AWS Glue
Analyze my failed AWS Glue job. Job name: 'my-etl-job' Job run id: 'jr_xxxxx'

The agent automatically extracts Spark event logs and metrics, analyzes the error patterns, and provides a clear root cause explanation together with recommendations, all through the same conversational interface. The following video demonstrates the complete troubleshooting workflow across Amazon EMR-EC2, Amazon EMR Serverless, and AWS Glue using Kiro CLI:

Solution 2 – Agent-driven notifications: Integrate the Apache Spark troubleshooting agent into a monitoring workflow

In addition to troubleshooting from the command line, the troubleshooting agent can plug into your monitoring infrastructure to provide improved failure notifications.

Production data pipelines require immediate visibility when failures occur. Traditional monitoring systems can alert you when a Spark job fails, but diagnosing the root cause still requires manual investigation and an analysis of what went wrong before remediation can begin.

You can integrate the Apache Spark troubleshooting agent into your existing monitoring workflows to receive root causes and recommendations as soon as you receive a failure notification. Here, we demonstrate two integration patterns that provide automated root cause analysis within your existing workflows.

Apache Airflow integration

The first integration pattern uses Apache Airflow callbacks to automatically trigger troubleshooting when Spark job operators fail.

When any Amazon EMR-EC2, Amazon EMR Serverless, or AWS Glue job operator fails in an Apache Airflow DAG:

  1. A callback invokes the Spark troubleshooting agent within a separate DAG.
  2. The Spark troubleshooting agent analyzes the issue, establishes the root cause, and identifies code fix recommendations.
  3. The Spark troubleshooting agent sends a comprehensive diagnostic report to a configured Slack channel.

The solution is available in the Amazon EMR Utilities GitHub repository (documentation) for quick integration into your existing Apache Airflow deployments with a one-line change to your Airflow DAGs. The following video demonstrates this integration:
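A failure callback of this kind can be sketched as follows. This is a minimal illustration, not the repository's implementation: the context dict mimics Airflow's callback context (as a plain dict so the sketch stays dependency-free), and the `trigger` parameter stands in for whatever mechanism, such as TriggerDagRunOperator or the Airflow REST API, hands the payload to the troubleshooting DAG:

```python
# Minimal sketch of an Airflow on_failure_callback that forwards a failed
# Spark task to a separate troubleshooting DAG. Field names are illustrative.

def build_troubleshooting_request(context):
    """Collect what the troubleshooting DAG needs from the callback context."""
    return {
        "dag_id": context["dag_id"],
        "task_id": context["task_id"],
        "run_id": context["run_id"],
        "error": str(context.get("exception", "unknown")),
    }


def spark_troubleshooting_callback(context, trigger=print):
    """on_failure_callback: hand the payload to the troubleshooting DAG.

    `trigger` is a stand-in for TriggerDagRunOperator / the Airflow REST API.
    """
    payload = build_troubleshooting_request(context)
    trigger(payload)
    return payload


# Example: what the callback would forward for a failed EMR Serverless task.
example = spark_troubleshooting_callback(
    {
        "dag_id": "nightly_etl",
        "task_id": "emr_serverless_spark_job",
        "run_id": "scheduled__2026-05-12",
        "exception": RuntimeError("ExecutorLostFailure"),
    },
    trigger=lambda payload: None,  # no-op trigger for the example
)
```

Wiring this in is the one-line change mentioned above: set the callback as the operator's `on_failure_callback`.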

Amazon EventBridge integration

For event-driven architectures, the second pattern uses Amazon EventBridge to automatically invoke the troubleshooting agent when Spark jobs fail across your AWS environment.

This integration uses an AWS Lambda function that interacts with the Apache Spark troubleshooting agent through the Strands MCP client.

When Amazon EventBridge detects failures from Amazon EMR-EC2 steps, Amazon EMR Serverless job runs, or AWS Glue job runs, it triggers the AWS Lambda function, which:

  1. Uses the Apache Spark troubleshooting agent to analyze the failure
  2. Identifies the root cause and generates code fix recommendations
  3. Constructs a comprehensive analysis summary
  4. Sends the summary to Amazon SNS
  5. Delivers the analysis to your configured destinations (email, Slack, or other SNS subscribers)

This serverless approach provides centralized failure analysis across all your Spark platforms without requiring modifications to individual pipelines. The following video demonstrates this integration:

A reference implementation of this solution is available in the Amazon EMR Utilities GitHub repository (documentation).
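The Lambda function's first job is to turn the incoming EventBridge event into the identifiers the agent needs. A dependency-free sketch of that routing step, not the reference implementation itself, is below; the event sources and detail field names follow the EventBridge events these services document, but verify them against your own event samples before relying on them:

```python
# Sketch: map an EventBridge failure event to a natural-language prompt for
# the troubleshooting agent. Sources and detail fields are the documented
# ones to the best of our knowledge; confirm against real event samples.

PROMPTS = {
    "aws.emr": (
        "Debug my failing Amazon EMR-EC2 step. "
        "Cluster id: '{clusterId}' Step id: '{stepId}'"
    ),
    "aws.emr-serverless": (
        "Troubleshoot my Amazon EMR Serverless job. "
        "Application id: '{applicationId}' Job run id: '{jobRunId}'"
    ),
    "aws.glue": (
        "Analyze my failed AWS Glue job. "
        "Job name: '{jobName}' Job run id: '{jobRunId}'"
    ),
}


def build_agent_prompt(event):
    """Return the troubleshooting prompt for a failure event, or None."""
    template = PROMPTS.get(event.get("source"))
    if template is None:
        return None  # not a Spark platform we route to the agent
    return template.format(**event["detail"])


# Example: a simplified AWS Glue job failure event.
glue_event = {
    "source": "aws.glue",
    "detail-type": "Glue Job State Change",
    "detail": {"jobName": "my-etl-job", "jobRunId": "jr_xxxxx", "state": "FAILED"},
}
prompt = build_agent_prompt(glue_event)
```

The resulting prompt is then sent to the agent via the Strands MCP client, and the agent's analysis is published to Amazon SNS.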

Solution 3 – Intelligent dashboards: Use the Apache Spark troubleshooting agent with Kiro IDE to visualize account-level application failures: what failed, why it failed, and how to fix it

Understanding the health of your Spark workloads across multiple platforms requires consolidating data from Amazon EMR (both EC2 and Serverless) and AWS Glue. Teams typically build custom monitoring solutions by writing scripts to query multiple APIs, aggregate metrics, and generate reports, which can be time consuming and requires active maintenance.

With Kiro IDE and the Apache Spark troubleshooting agent, you can build comprehensive monitoring dashboards conversationally. Instead of writing custom code to aggregate workload metrics, you can describe what you want to track, and the agent generates a complete dashboard showing overall performance metrics, error class distributions for failures, success rates across platforms, and critical failures requiring immediate attention. Unlike traditional dashboards that only show conventional KPIs and metrics about which application failed, this dashboard uses the Spark troubleshooting agent to give users insight into why the applications failed and how they can be fixed. The following video demonstrates building a multi-platform monitoring dashboard using Kiro IDE:

The prompt used in the demo:

Build a comprehensive monitoring dashboard for all of my Amazon EMR-EC2 steps, Amazon EMR Serverless jobs, and AWS Glue jobs for the last 30 days. Region: us-east-2.
Execution Plan:
1. List all of my Spark applications across these services from the last 30 days. You can store any intermediate results in files in this folder as .json, but VALIDATE outputs before moving on to the next step. It is critical to check the results before considering this complete. You can write Python script helpers to achieve this. Handle throttling and other exceptions gracefully. Make sure you cover all platforms: Amazon EMR-EC2, Amazon EMR Serverless, and AWS Glue.
2. Use the spark-troubleshooting-mcp to gather failure insights for each of my applications. Save this as .json as well.
3. Then, use this information to help build the dashboard as HTML. Name the file dashboard.html.
Dashboard Requirements:
- Information from all of my Amazon EMR-EC2, Amazon EMR Serverless, and AWS Glue applications should be present
- overall success rates across platforms
- error class distributions for failures as a pie chart
- failures from the last 30 days requiring attention, with root causes and recommendations. Include the error class and show the root causes and recommendations as they are returned by the spark-troubleshooting-mcp
- configuration comparisons for each platform. Configuration includes versions, worker types / DPUs, and so on.
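The prompt leaves the aggregation to the agent, but the computation behind the first two dashboard requirements is straightforward. Here is a sketch of per-platform success rates and the error-class distribution computed over the intermediate .json records from steps 1 and 2; the record field names ("platform", "state", "errorClass") are illustrative assumptions, so match them to what the MCP tools actually return:

```python
from collections import Counter


def success_rates(runs):
    """Per-platform success rate as a fraction in [0, 1]."""
    totals, successes = Counter(), Counter()
    for run in runs:
        totals[run["platform"]] += 1
        if run["state"] == "SUCCEEDED":
            successes[run["platform"]] += 1
    return {platform: successes[platform] / totals[platform] for platform in totals}


def error_class_distribution(runs):
    """Counts per error class among failed runs (pie-chart input)."""
    return Counter(
        run.get("errorClass", "UNKNOWN") for run in runs if run["state"] == "FAILED"
    )


# Example records, shaped like the intermediate .json files from steps 1-2.
runs = [
    {"platform": "AWS Glue", "state": "SUCCEEDED"},
    {"platform": "AWS Glue", "state": "FAILED", "errorClass": "OUT_OF_MEMORY"},
    {"platform": "EMR Serverless", "state": "FAILED", "errorClass": "OUT_OF_MEMORY"},
    {"platform": "EMR Serverless", "state": "SUCCEEDED"},
]
rates = success_rates(runs)            # {"AWS Glue": 0.5, "EMR Serverless": 0.5}
dist = error_class_distribution(runs)  # {"OUT_OF_MEMORY": 2}
```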

Clean up

To avoid incurring future AWS charges, delete the resources you created during this walkthrough:

  • Delete the AWS CloudFormation stack.
  • If you created an Amazon EventBridge rule for the integration, delete those resources.

Conclusion

In this post, we demonstrated how the Apache Spark troubleshooting agent turns manual investigation into natural language conversations, reducing troubleshooting time from hours to minutes and making Spark expertise accessible to everyone. By integrating natural language diagnostics into your existing development tools, whether Kiro CLI, Kiro IDE, or other MCP-compatible AI assistants, your teams can focus on building innovative applications instead of debugging failures.


Special thanks

Special thanks to everyone from engineering and science who contributed to the launch of the Spark troubleshooting agent and the remote MCP service: Tony Rusignuolo, Anshi Shrivastava, Martin Ma, Hirva Patel, Pranjal Srivastava, Weijing Cai, Rupak Ravi, Bo Li, Vaibhav Naik, XiaoRun Yu, Tina Shao, Pramod Chunduri, Ray Liu, Yueying Cui, Savio Dsouza, Kinshuk Pahare, Tim Kraska, Santosh Chandrachood, Paul Meighan and Rick Sears.

Special thanks to all of our partners who contributed to the launch of the Spark troubleshooting agent and the remote MCP service: Karthik Prabhakar, Suthan Phillips, Basheer Sheriff, Kamen Sharlandjiev, Archana Inapudi, Vara Bonthu, McCall Peltier, Lydia Kautsky, Larry Weber, Jason Berkovitz, Jordan Vaughn, Amar Wakharkar, Subramanya Vajiraya, Boyko Radulov and Ishan Gaur.

About the authors

Jake Zych

Jake is a Software Development Engineer at AWS Analytics. He has a deep interest in distributed systems and generative AI. In his spare time, Jake likes to create video content and play board games.

Maheedhar Reddy Chappidi

Maheedhar is a Senior Software Development Engineer at AWS Analytics. He is passionate about building fault-tolerant, reliable distributed systems at scale and generative AI applications for data integration. Outside of work, Maheedhar enjoys listening to podcasts and playing with his two-year-old child.

Vishal Kajjam

Vishal is a Senior Software Development Engineer at AWS Analytics. He is passionate about distributed computing and using ML/AI to design and build end-to-end solutions that address customers' data integration needs. In his spare time, he enjoys spending time with family and friends.

Arunav Gupta

Arunav is a Software Development Engineer at AWS Analytics. He is passionate about generative AI and orchestration and their uses in improving developer quality of life. In his free time, Arunav enjoys competing in a karting league and exploring new coffee shops in New York.

Wei Tang

Wei is a Software Development Engineer at AWS Analytics. She is a strong developer with deep interests in solving recurring customer problems with distributed systems and AI/ML.

Andrew Kim

Andrew is a Software Development Engineer at AWS Analytics, with a deep passion for distributed systems architecture and AI-driven solutions, specializing in intelligent data integration workflows and cutting-edge feature development on Apache Spark. Andrew focuses on reinventing and simplifying solutions to complex technical problems, and he enjoys creating web apps and producing music in his free time.

Jeremy Samuel

Jeremy is a Software Development Engineer at AWS Analytics. He has a strong interest in building distributed systems and generative AI. In his spare time, he enjoys playing video games and listening to music.

Kartik Panjabi

Kartik is a Software Development Manager at AWS Analytics. His team builds generative AI solutions and distributed systems for data integration.

Shubham Mehta

Shubham is a Senior Product Manager at AWS Analytics. He leads generative AI feature development across services such as AWS Glue, Amazon EMR, and Amazon MWAA, using AI/ML to simplify and enhance the experience of data practitioners building data applications on AWS.

Vidyashankar Sivakumar

Vidyashankar is an applied scientist in the Data Processing and Experiences organization, where he works on DevOps agents that simplify and optimize the customer journey for AWS big data processing services such as Amazon EMR and AWS Glue. Outside of work, Vidyashankar enjoys listening to podcasts on current affairs, AI/ML, and AIOps, as well as following cricket.

Muhammad Ali Gulzar

Muhammad is an Amazon Scholar on the Data Processing Agents Science team and an assistant professor in the Computer Science Department at Virginia Tech. Gulzar's research interests lie at the intersection of software engineering and big data systems.

Mukul Prasad

Mukul is a Senior Applied Science Manager in the Data Processing and Experiences organization. He leads the Data Processing Agents Science team, developing DevOps agents to simplify and optimize the customer journey in using AWS big data processing services including Amazon EMR, AWS Glue, and Amazon SageMaker Unified Studio. Outside of work, Mukul enjoys food, travel, photography, and cricket.

Mohit Saxena

Mohit is a Senior Software Development Manager at AWS Analytics. He leads the development of distributed systems with AI/ML-driven capabilities and agents to simplify and optimize the experience of data practitioners who build big data applications with Apache Spark, Amazon S3, and data lakes/warehouses in the cloud.
