
Implement fine-grained access control for Iceberg tables using Amazon EMR on EKS integrated with AWS Lake Formation


The rise of distributed data processing frameworks such as Apache Spark has revolutionized the way organizations handle and analyze large-scale data. However, as the volume and complexity of data continue to grow, the need for fine-grained access control (FGAC) has become increasingly important. This is particularly true in scenarios where sensitive or proprietary data must be shared across multiple teams or organizations, such as in the case of open data initiatives. Implementing robust access control mechanisms is crucial to maintain secure and controlled access to data stored in Open Table Format (OTF) within a modern data lake.

One approach to addressing this challenge is to use Amazon EMR on Amazon Elastic Kubernetes Service (Amazon EKS) and incorporate FGAC mechanisms. With Amazon EMR on EKS, you can run open source big data frameworks such as Spark on Amazon EKS. This integration provides the scalability and flexibility of Kubernetes while also using the data processing capabilities of Amazon EMR.

On February 6, 2025, AWS launched fine-grained access control based on AWS Lake Formation for EMR on EKS, starting with Amazon EMR version 7.7. You can now significantly improve your data governance and security frameworks using this feature.

In this post, we demonstrate how to implement FGAC on Apache Iceberg tables using EMR on EKS with Lake Formation.

Data mesh use case

With FGAC in a data mesh architecture, domain owners can manage access to their data products at a granular level. This decentralized approach allows for greater agility and control, making sure data is accessible only to authorized users and services within or across domains. Policies can be tailored to specific data products, considering factors like data sensitivity, user roles, and intended use. This localized control enhances security and compliance while supporting the self-service nature of the data mesh.

FGAC is especially helpful in business domains that deal with sensitive data, such as healthcare, finance, legal, human resources, and others. In this post, we focus on examples from the healthcare domain, showcasing how we can achieve the following:

  • Share patient data securely – Data mesh enables different departments within a hospital to manage their own patient data as independent domains. FGAC makes sure only authorized personnel can access specific patient records or data elements based on their roles and need-to-know basis.
  • Facilitate research and collaboration – Researchers can access de-identified patient data from various hospital domains through the data mesh architecture, enabling collaboration between multidisciplinary teams across different healthcare institutions, fostering knowledge sharing, and accelerating research and discovery. FGAC helps compliance with privacy regulations (such as HIPAA) by restricting access to sensitive data elements or allowing access only to aggregated, anonymized datasets.
  • Improve operational efficiency – Data mesh can streamline data sharing between hospitals and insurance companies, simplifying billing and claims processing. FGAC makes sure only authorized personnel within each organization can access the required data, protecting sensitive financial information.

Solution overview

In this post, we explore how to implement FGAC on Iceberg tables within an EMR on EKS application, using the capabilities of Lake Formation. For details on how to implement FGAC on Amazon EMR on EC2, refer to Fine-grained access control in Amazon EMR Serverless with AWS Lake Formation.

The following components play essential roles in this solution design:

  • Apache Iceberg OTF:
    • High-performance table format for large-scale analytics
    • Supports schema evolution, ACID transactions, and time travel
    • Compatible with Spark, Trino, Presto, and Flink
    • Amazon S3 Tables provide fully managed Iceberg tables for analytics workloads
  • AWS Lake Formation:
    • FGAC for data lakes
    • Column-, row-, and cell-level security controls
  • Data mesh producers and consumers:
    • Producers: Create and serve domain-specific data products
    • Consumers: Access and integrate data products
    • Enables self-service data consumption

To demonstrate how you can use Lake Formation to implement cross-account FGAC within an EMR on EKS environment, we create tables in the AWS Glue Data Catalog in a central AWS account acting as the producer, and provision different user personas to reflect various roles and access levels in a separate AWS account acting as multiple consumers. Consumers can be spread across multiple accounts in real-world scenarios.

The following diagram illustrates the high-level solution architecture.

AWS Healthcare Data Architecture: FGAC using Lake Formation Integration with EMR on EKS

Figure 1: High-level solution architecture

To demonstrate cross-account data sharing and data filtering with Lake Formation FGAC, the solution deploys two different Iceberg tables with different access for different consumers. The permission mapping for consumers uses cross-account table shares and data cell filters.

The solution has two different teams with different levels of Lake Formation permissions to access the Patients and Claims Iceberg tables. The following table summarizes the solution's user personas.

Persona / Table Name | Patients | Claims
Patients Care Team (team1 job execution role) | Exclude the column ssn; include rows only from Texas and New York states | Full table access
Claims Care Team (team2 job execution role) | No access | Full table access

Prerequisites

This solution requires an AWS account with an AWS Identity and Access Management (IAM) power user role that can create and interact with AWS services, including Amazon EMR, Amazon EKS, AWS Glue, Lake Formation, and Amazon Simple Storage Service (Amazon S3). More specific requirements for each account are detailed in the relevant sections.

Clone the project

To get started, download the project either to your computer or the AWS CloudShell console:

git clone https://github.com/aws-samples/sample-emr-on-eks-fgac-iceberg
cd sample-emr-on-eks-fgac-iceberg

Set up infrastructure in producer account

To set up the infrastructure in the producer account, you must have the following additional resources:

The setup script deploys the following infrastructure:

  • An S3 bucket to store sample data in Iceberg table format, registered as a data location in Lake Formation
  • An AWS Glue database named healthcare_db
  • Two AWS Glue tables: the Patients and Claims Iceberg tables
  • A Lake Formation data access IAM role
  • Cross-account permissions enabled for the consumer account:
    • Allow the consumer to describe the database healthcare_db in the producer account
    • Allow access to the Patients table using a data cell filter, based on row-level selected states, and excluding the column ssn
    • Allow full table access to the Claims table
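The cross-account grants above can be sketched as Lake Formation `grant_permissions` request payloads. This is a hedged illustration, not the setup script itself; the account IDs are placeholders, and the helper names are hypothetical:

```python
# Sketch of the cross-account Lake Formation grants the setup script issues.
# Each helper returns a request payload for
# boto3.client("lakeformation").grant_permissions(**payload).
# The account IDs below are illustrative placeholders.

PRODUCER_ACCOUNT = "111111111111"  # placeholder producer account ID
CONSUMER_ACCOUNT = "222222222222"  # placeholder consumer account ID


def build_filtered_table_grant() -> dict:
    """Grant SELECT on the Patients table through the data cell filter."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": CONSUMER_ACCOUNT},
        "Resource": {
            "DataCellsFilter": {
                "TableCatalogId": PRODUCER_ACCOUNT,
                "DatabaseName": "healthcare_db",
                "TableName": "patients",
                "Name": "patients_column_row_filter",
            }
        },
        "Permissions": ["SELECT"],
        "PermissionsWithGrantOption": ["SELECT"],
    }


def build_full_table_grant() -> dict:
    """Grant full SELECT/DESCRIBE access on the Claims table."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": CONSUMER_ACCOUNT},
        "Resource": {
            "Table": {
                "CatalogId": PRODUCER_ACCOUNT,
                "DatabaseName": "healthcare_db",
                "Name": "claims",
            }
        },
        "Permissions": ["SELECT", "DESCRIBE"],
        "PermissionsWithGrantOption": ["SELECT", "DESCRIBE"],
    }


if __name__ == "__main__":
    # With real credentials you would pass each payload to grant_permissions.
    for payload in (build_filtered_table_grant(), build_full_table_grant()):
        print(payload["Resource"], payload["Permissions"])
```

The setup script performs the equivalent grants for you; this sketch only shows the shape of the requests.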

Run the following producer_iceberg_datalake_setup.sh script to create a development environment in the producer account. Update its parameters according to your requirements:

export AWS_REGION=us-west-2
export PRODUCER_AWS_ACCOUNT=<YOUR_PRODUCER_AWS_ACCOUNT_ID> 
export CONSUMER_AWS_ACCOUNT=<YOUR_CONSUMER_AWS_ACCOUNT_ID> 
./producer_iceberg_datalake_setup.sh 
# run the clean-up script before re-running the setup if needed
./producer_clean_up.sh

Enable cross-account Lake Formation access in producer account

A consumer account ID and an EMR on EKS Engine session tag must be set in the producer's environment. This allows the consumer to access the producer's AWS Glue tables governed by Lake Formation. Complete the following steps to enable cross-account access:

  1. Open the Lake Formation console in the producer account.
  2. Choose Application integration settings under Administration in the navigation pane.
  3. Select Allow external engines to filter data in Amazon S3 locations registered with Lake Formation.
  4. For Session tag values, enter EMR on EKS Engine.
  5. For AWS account IDs, enter your consumer account ID.
  6. Choose Save.
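The console steps above map onto the external data filtering fields of the Lake Formation data lake settings. The following is a minimal sketch under that assumption; the consumer account ID is a placeholder, and the helper name is hypothetical:

```python
# Sketch of enabling external engines (such as EMR on EKS) to filter data,
# mirroring the console steps above. The consumer account ID is a placeholder.

CONSUMER_ACCOUNT = "222222222222"  # placeholder


def build_external_filtering_settings(existing_settings: dict) -> dict:
    """Merge the external-data-filtering fields into existing data lake
    settings (as returned by get_data_lake_settings)."""
    settings = dict(existing_settings)
    settings["AllowExternalDataFiltering"] = True
    settings["ExternalDataFilteringAllowList"] = [
        {"DataLakePrincipalIdentifier": CONSUMER_ACCOUNT}
    ]
    settings["AuthorizedSessionTagValueList"] = ["EMR on EKS Engine"]
    return settings


if __name__ == "__main__":
    # With real credentials this would roughly be:
    #   lf = boto3.client("lakeformation")
    #   current = lf.get_data_lake_settings()["DataLakeSettings"]
    #   lf.put_data_lake_settings(
    #       DataLakeSettings=build_external_filtering_settings(current))
    print(build_external_filtering_settings({}))
```

The console is the supported path; this sketch is only for readers who prefer to script the same change.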
Comprehensive AWS Lake Formation application integration settings interface for managing third-party data access.

Figure 2: Producer account – Lake Formation third-party engine configuration screen with session tags, account IDs, and data access permissions.

Validate FGAC setup in producer environment

To validate the FGAC setup in the producer account, check the Iceberg tables, the data filter, and the FGAC permission settings.

Iceberg tables

Two AWS Glue tables in Iceberg format were created by producer_iceberg_datalake_setup.sh. On the Lake Formation console, choose Tables under Data Catalog in the navigation pane to see the tables listed.

AWS Lake Formation Tables interface showing a success message for updated external data filtering settings, with a table list displaying healthcare database tables in Apache Iceberg format.

Figure 3: Lake Formation interface displaying the claims and patients tables from healthcare_db in Apache Iceberg format.

The following screenshot shows an example of the patients table data.

Patients table data

Figure 4: Patients table data

The following screenshot shows an example of the claims table data.

claims table data

Figure 5: Claims table data

Data cell filter against the patients table

After successfully running the producer_iceberg_datalake_setup.sh script, a new data cell filter named patients_column_row_filter was created in Lake Formation. This filter performs two functions:

  • Excludes the ssn column from the patients table data
  • Includes rows where the state is Texas or New York
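The filter described above can be sketched as a `create_data_cells_filter` request payload. This is an illustration, not the setup script's exact code; the producer account ID is a placeholder and the filter expression is an assumption about how the script phrases it:

```python
# Sketch of the data cell filter the setup script creates. The account ID is
# a placeholder and the row filter expression is illustrative.

PRODUCER_ACCOUNT = "111111111111"  # placeholder


def build_patients_filter() -> dict:
    """Request payload for lakeformation.create_data_cells_filter."""
    return {
        "TableData": {
            "TableCatalogId": PRODUCER_ACCOUNT,
            "DatabaseName": "healthcare_db",
            "TableName": "patients",
            "Name": "patients_column_row_filter",
            # Row-level filter: keep only Texas and New York patients.
            "RowFilter": {"FilterExpression": "state IN ('Texas', 'New York')"},
            # Column-level filter: expose every column except ssn.
            "ColumnWildcard": {"ExcludedColumnNames": ["ssn"]},
        }
    }


if __name__ == "__main__":
    # boto3.client("lakeformation").create_data_cells_filter(**build_patients_filter())
    print(build_patients_filter()["TableData"]["Name"])
```

Combining a RowFilter with a ColumnWildcard in a single filter is what gives cell-level security: consumers see only the permitted rows, minus the excluded columns.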

To view the data cell filter, choose Data filters under Data Catalog in the navigation pane of the Lake Formation console, and open the filter. Choose View permissions to view the permission details.

Data cell filter

Figure 6: Column- and row-level filter configuration for the patients table

FGAC permissions allowing cross-account access

To view all the FGAC permissions, choose Data permissions under Permissions in the navigation pane of the Lake Formation console, and filter by the database name healthcare_db.

Make sure to revoke the data permissions associated with the IAMAllowedPrincipals principal on the healthcare_db tables, because it will cause cross-account data sharing to fail, particularly with AWS Resource Access Manager (AWS RAM).

Data permissions overview

Figure 7: Lake Formation data permissions interface displaying filtered healthcare database resources with granular access controls

The following table summarizes the overall FGAC setup.

Resource Type | Resource | Permissions | Grant Permissions
Database | | Describe | Describe
Data cell filter | patients_column_row_filter | Select | Select
Table | | Select, Describe | Select, Describe

Set up infrastructure in consumer account

To set up the infrastructure in the consumer account, you must have the following additional resources:

  • The eksctl and kubectl packages must be installed
  • An IAM role in the consumer account must be a Lake Formation administrator to run the consumer_emr_on_eks_setup.sh script
  • The Lake Formation admin must accept the AWS RAM resource share invitations using the AWS RAM console, if the consumer account is outside of the producer's organizational unit
RAM resource share screen

Figure 8: Consumer account – Cross-account RAM share for Lake Formation resources

The setup script deploys the following infrastructure:

  • An EKS cluster called fgac-blog with two namespaces:
    • User namespace: lf-fgac-user
    • System namespace: lf-fgac-secure
  • An EMR on EKS virtual cluster emr-on-eks-fgac-blog:
    • Set up with a security configuration emr-on-eks-fgac-sec-conifg
    • Two EMR on EKS job execution IAM roles:
      • Role for the Patients Care Team (team1): emr_on_eks_fgac_job_team1_execution_role
      • Role for the Claims Care Team (team2): emr_on_eks_fgac_job_team2_execution_role
    • A query engine IAM role used by the FGAC secure space: emr_on_eks_fgac_query_execution_role
  • An S3 bucket to store PySpark job scripts and logs
  • An AWS Glue local database named consumer_healthcare_db
  • Two resource links to the cross-account shared AWS Glue tables: rl_patients and rl_claims
  • Lake Formation permissions on the Amazon EMR IAM roles
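The resource links listed above (rl_patients and rl_claims) are Glue tables whose TargetTable points at the shared producer tables. A hedged sketch of how such a link can be created (the account ID is a placeholder and the helper is hypothetical, not the setup script's code):

```python
# Sketch of creating a consumer-side resource link: a Glue table whose
# TargetTable points at the cross-account shared producer table.

PRODUCER_ACCOUNT = "111111111111"  # placeholder


def build_resource_link(link_name: str, target_table: str) -> dict:
    """Request payload for glue.create_table that creates a resource link."""
    return {
        "DatabaseName": "consumer_healthcare_db",
        "TableInput": {
            "Name": link_name,
            # TargetTable turns this Glue table into a resource link.
            "TargetTable": {
                "CatalogId": PRODUCER_ACCOUNT,
                "DatabaseName": "healthcare_db",
                "Name": target_table,
            },
        },
    }


if __name__ == "__main__":
    # boto3.client("glue").create_table(**build_resource_link("rl_patients", "patients"))
    for link, table in (("rl_patients", "patients"), ("rl_claims", "claims")):
        print(build_resource_link(link, table)["TableInput"]["Name"])
```

Jobs in the consumer account then query the links (for example, consumer_healthcare_db.rl_patients) instead of referencing the producer catalog directly.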

Run the following consumer_emr_on_eks_setup.sh script to set up a development environment in the consumer account. Update the parameters according to your use case:

export AWS_REGION=us-west-2 
export PRODUCER_AWS_ACCOUNT=<YOUR_PRODUCER_AWS_ACCOUNT_ID> 
export EKSCLUSTER_NAME=fgac-blog 
./consumer_emr_on_eks_setup.sh 
# run the clean-up script before re-running the setup if needed
./consumer_clean_up.sh

Enable cross-account Lake Formation access in consumer account

The consumer account must add the consumer account ID with an EMR on EKS Engine session tag in Lake Formation. This session tag will be used by the EMR on EKS job execution IAM roles to access Lake Formation tables. Complete the following steps:

  1. Open the Lake Formation console in the consumer account.
  2. Choose Application integration settings under Administration in the navigation pane.
  3. Select Allow external engines to filter data in Amazon S3 locations registered with Lake Formation.
  4. For Session tag values, enter EMR on EKS Engine.
  5. For AWS account IDs, enter your consumer account ID.
  6. Choose Save.

Figure 9: Consumer account – Lake Formation third-party engine configuration screen with session tags, account IDs, and data access permissions

Validate FGAC setup in consumer environment

To validate the FGAC setup in the consumer account, check the EKS cluster, namespaces, and Spark job scripts to test data permissions.

EKS cluster

On the Amazon EKS console, choose Clusters in the navigation pane and confirm the EKS cluster fgac-blog is listed.

EKS Cluster view page

Figure 10: Consumer account – EKS cluster console page

Namespaces in Amazon EKS

Kubernetes uses namespaces as a logical partitioning system for organizing objects such as Pods and Deployments. Namespaces also function as a privilege boundary in the Kubernetes role-based access control (RBAC) system. Multi-tenant workloads in Amazon EKS can be secured using namespaces.

This solution creates two namespaces:

  • lf-fgac-user
  • lf-fgac-secure

The StartJobRun API uses the backend workflows to submit a Spark job's user components (JobRunner, driver, executors) in the user namespace, and the corresponding system components in the system namespace to accomplish the desired FGAC behaviors.

You can verify the namespaces with the following command:

kubectl get namespace

The following screenshot shows an example of the expected output.

Namespace summary page

Figure 11: EKS cluster namespaces

Spark job script to test the Patients Care Team's data permissions

Starting with Amazon EMR version 6.6.0, you can use Spark on EMR on EKS with the Iceberg table format. For more information on how Iceberg works in an immutable data lake, see Build a high-performance, ACID compliant, evolving data lake using Apache Iceberg on Amazon EMR.

The following script is a snippet of the PySpark job that retrieves filtered data from the Claims and Patients tables:

    print("Patients Care Team PySpark job running on EMR on EKS to query the Patients and Claims tables!")
    df1 = spark.sql('SELECT * FROM dev.${CONSUMER_DATABASE}.${rl_patients}')
    print("Patients table data:")
    print("Note: The Patients table is filtered on the SSN column and shows records only for Texas and New York states")
    df1.show(20)
    df2 = spark.sql('''SELECT p.state,
                            c.claim_id,
                            c.claim_date,
                            p.patient_name,
                            c.diagnosis_code,
                            c.procedure_code,
                            c.amount,
                            c.status,
                            c.provider_id
                    FROM dev.${CONSUMER_DATABASE}.${rl_claims} c
                    JOIN dev.${CONSUMER_DATABASE}.${rl_patients} p
                      ON c.patient_id = p.patient_id
                    ORDER BY p.state, c.claim_date''')
    print("Show only the Claims data for Patients from Texas and New York:")
    df2.show(20)
    print("Job Complete")
....

Spark job script to test the Claims Care Team's data permissions

The following script is a snippet of the PySpark job that retrieves data from the Claims table:

    print("Claims Team PySpark job running on EMR on EKS to query the Claims table!")
    print("Note: The Claims Team has full access to the Claims table!")
    df = spark.sql('SELECT * FROM dev.${CONSUMER_DATABASE}.${rl_claims}')
    df.show(20)
....

Validate job execution roles for EMR on EKS

The Patients Care Team uses the emr_on_eks_fgac_job_team1_execution_role IAM role to execute a PySpark job on EMR on EKS. The job execution role has permission to query both the Patients and Claims tables.

The Claims Care Team uses the emr_on_eks_fgac_job_team2_execution_role IAM role to execute jobs on EMR on EKS. The job execution role only has permission to access Claims data.

Both IAM job execution roles have the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EmrGetCertificate",
            "Effect": "Allow",
            "Action": "emr-containers:CreateCertificate",
            "Resource": "*"
        },
        {
            "Sid": "LakeFormationManagedAccess",
            "Effect": "Allow",
            "Action": [
                "lakeformation:GetDataAccess",
                "glue:GetTable",
                "glue:GetCatalog",
                "glue:Create*",
                "glue:Update*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "EmrSparkJobAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::${S3_BUCKET}*"
            ]
        }
    ]
}

The following code is the job execution IAM role trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TrustQueryEngineRoleToAssume",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::$CONSUMER_ACCOUNT:role/$query_engine_role"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ],
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/LakeFormationAuthorizedCaller": "EMR on EKS Engine"
                }
            }
        },
        {
            "Sid": "TrustQueryEngineRoleToAssumeRoleOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::$CONSUMER_ACCOUNT:role/$query_engine_role"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::$CONSUMER_ACCOUNT:oidc-provider/oidc.eks.$AWS_REGION.amazonaws.com/id/xxxxx"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "oidc.eks.$AWS_REGION.amazonaws.com/id/xxxxx:sub": "system:serviceaccount:lf-fgac-user:emr-containers-sa-*-*-$CONSUMER_ACCOUNT-<hash36ofiamrole>"
                }
            }
        }
    ]
}

The following code is the query engine IAM role policy (emr_on_eks_fgac_query_execution_role-policy):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeJobExecutionRole",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ],
            "Resource": [
                "arn:aws:iam::$CONSUMER_ACCOUNT:role/emr_on_eks_fgac_job_team1_execution_role",
                "arn:aws:iam::$CONSUMER_ACCOUNT:role/emr_on_eks_fgac_job_team2_execution_role"
            ],
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/LakeFormationAuthorizedCaller": "EMR on EKS Engine"
                }
            }
        },
        {
            "Sid": "AssumeJobExecutionRoleOnly",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::$CONSUMER_ACCOUNT:role/emr_on_eks_fgac_job_team1_execution_role",
                "arn:aws:iam::$CONSUMER_ACCOUNT:role/emr_on_eks_fgac_job_team2_execution_role"
            ]
        }
    ]
}

The following code is the query engine IAM role trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::$CONSUMER_ACCOUNT:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {}
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::$CONSUMER_ACCOUNT:oidc-provider/xxxxx"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "xxxxxx:sub": "system:serviceaccount:lf-fgac-secure:emr-containers-sa-*-*-$CONSUMER_ACCOUNT-<hash36ofiamrole>"
                }
            }
        }
    ]
}

Run PySpark jobs on EMR on EKS with FGAC

For more details about how to work with Iceberg tables in EMR on EKS jobs, refer to Using Apache Iceberg with Amazon EMR on EKS. Complete the following steps to run the PySpark jobs on EMR on EKS with FGAC:

  1. Run the following commands to run the patients and claims jobs:
bash /tmp/submit-patients-job.sh
bash /tmp/submit-claims-job.sh
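Under the hood, a submit script like submit-patients-job.sh issues an emr-containers StartJobRun request. The following is a hedged sketch of that request's shape, not the repository's actual script; the virtual cluster ID, script path, and release label are placeholder assumptions:

```python
# Sketch of a StartJobRun request for the Patients Care Team job.
# Virtual cluster ID, S3 script path, and release label are placeholders.

CONSUMER_ACCOUNT = "222222222222"                          # placeholder
VIRTUAL_CLUSTER_ID = "abcd1234example"                     # placeholder
SCRIPT_S3_PATH = "s3://scripts-bucket/patients_job.py"     # placeholder


def build_patients_job_run() -> dict:
    """Request payload for emr-containers start_job_run."""
    return {
        "name": "patients-care-team-job",
        "virtualClusterId": VIRTUAL_CLUSTER_ID,
        # FGAC is enforced through this job execution role's grants.
        "executionRoleArn": (
            f"arn:aws:iam::{CONSUMER_ACCOUNT}:role/"
            "emr_on_eks_fgac_job_team1_execution_role"
        ),
        "releaseLabel": "emr-7.7.0-latest",  # assumed label for EMR 7.7
        "jobDriver": {
            "sparkSubmitJobDriver": {
                "entryPoint": SCRIPT_S3_PATH,
                "sparkSubmitParameters": "--conf spark.executor.instances=2",
            }
        },
    }


if __name__ == "__main__":
    # boto3.client("emr-containers").start_job_run(**build_patients_job_run())
    print(build_patients_job_run()["executionRoleArn"])
```

Submitting the same payload with the team2 execution role is what produces the access-denied behavior shown later for the Claims Care Team.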

  2. Watch the application logs from the Spark driver pod:

kubectl logs <driver-pod-name> -c spark-kubernetes-driver -n lf-fgac-user -f

Alternatively, you can navigate to the Amazon EMR console, open your virtual cluster, and choose the open icon next to the job to open the Spark UI and monitor the job progress.

Spark UI navigation

Figure 12: EMR on EKS job runs

View PySpark job output on EMR on EKS with FGAC

In Amazon S3, navigate to the Spark output logs folder:

s3://blog-emr-eks-fgac-test-<acct-id>-us-west-2-dev/spark-logs/<emr-on-eks-cluster-id>/jobs/<patients-job-id>/containers/spark-xxxxxx/spark-xxxxx-driver/stdout.gz

S3 path to view logs

Figure 13: EMR on EKS job's stdout.gz location in the S3 bucket

The Patients Care Team PySpark job has query access to the Patients and Claims tables. The Patients table has the SSN column filtered out and shows only Texas and New York records, as specified in our FGAC setup.

The following screenshot shows the Claims table for only Texas and New York.

Claims data in consumer view

Figure 14: EMR on EKS Spark job output

The following screenshot shows the Patients table without the SSN column.

Patients data in consumer view

Figure 15: EMR on EKS Spark job output

Similarly, navigate to the Spark output log folder for the Claims Care Team job:

s3://blog-emr-eks-fgac-test-<acct-id>-us-west-2-dev/spark-logs/<emr-on-eks-cluster-id>/jobs/<claims-job-id>/containers/spark-xxxxxx/spark-xxxxx-driver/stdout.gz

As shown in the following screenshot, the Claims Care Team only has access to the Claims table, so when the job tried to access the Patients table, it received an access denied error.

Access denied for Claims team

Figure 16: EMR on EKS Spark job output

Considerations and limitations

Although the approach discussed in this post provides valuable insights and practical implementation strategies, it's important to acknowledge the key considerations and limitations before you start using this feature. To learn more about using EMR on EKS with Lake Formation, refer to How Amazon EMR on EKS works with AWS Lake Formation.

Clean up

To avoid incurring future charges, delete the generated resources if you no longer need the solution. Run the following cleanup scripts (change the AWS Region if necessary).

Run the following script in the consumer account:

export AWS_REGION=us-west-2
export PRODUCER_AWS_ACCOUNT=<YOUR_PRODUCER_AWS_ACCOUNT_ID>
export EKSCLUSTER_NAME=fgac-blog
./consumer_clean_up.sh

Run the following script in the producer account:

export AWS_REGION=us-west-2
export PRODUCER_AWS_ACCOUNT=<YOUR_PRODUCER_AWS_ACCOUNT_ID>
export CONSUMER_AWS_ACCOUNT=<YOUR_CONSUMER_AWS_ACCOUNT_ID>
./producer_clean_up.sh

Conclusion

In this post, we demonstrated how to integrate Lake Formation with EMR on EKS to implement fine-grained access control on Iceberg tables. This integration offers organizations a modern approach to implementing detailed data permissions within a multi-account open data lake environment. By centralizing data management in a primary account and carefully regulating user access in secondary accounts, this strategy can simplify governance and enhance security.

For more information about Amazon EMR 7.7 in relation to EMR on EKS, see Amazon EMR on EKS 7.7.0 releases. To learn more about using Lake Formation with EMR on EKS, see Enable Lake Formation with Amazon EMR on EKS.

We encourage you to explore this solution for your specific use cases and share your feedback and questions in the comments section.


About the authors

Janakiraman Shanmugam

Janakiraman is a Senior Data Architect at Amazon Web Services. He focuses on data and analytics and enjoys helping customers solve big data and machine learning problems. Outside of the office, he likes to be with his family and friends and spend time outdoors.

Tejal Patel

Tejal is a Sr. Delivery Consultant with the AWS Professional Services team, specializing in data analytics and ML solutions. She helps customers design scalable and innovative solutions with the AWS Cloud. Outside of her professional life, Tejal enjoys spending time with her family and friends.

Prabhakaran Thatchinamoorthy

Prabhakaran is a Software Engineer at Amazon Web Services, working on the EMR on EKS service. He focuses on building and operating multi-tenant data processing platforms on Kubernetes at scale. His areas of interest include open-source batch and streaming frameworks, data tooling, and DataOps.
