
Analyzing your data catalog: Query SageMaker Catalog metadata with SQL


As your data and machine learning (ML) assets grow, tracking which assets lack documentation or monitoring asset registration trends becomes difficult without custom reporting infrastructure. You need visibility into your catalog's health, without the overhead of managing ETL jobs. The metadata export feature of Amazon SageMaker provides this capability. Converting catalog asset metadata into Apache Iceberg tables stored in Amazon S3 Tables removes the need to build and maintain custom ETL pipelines. Your team can then query asset metadata directly using standard SQL tools. You can now answer governance questions about asset registration trends, classification status, and metadata completeness using standard SQL queries through tools like Amazon Athena, Amazon SageMaker Unified Studio notebooks, and BI systems.

This automated approach reduces ETL development time and gives your team visibility into catalog health, compliance gaps, and asset lifecycle patterns. The exported tables include technical metadata, business metadata, project ownership details, and timestamps, partitioned by snapshot date to enable time travel queries and historical analysis. Teams can use this capability to proactively monitor catalog health, identify gaps in documentation, track asset lifecycle patterns, and make sure that governance policies are consistently applied.

How metadata export works

After you enable the metadata export feature, it runs automatically on a daily schedule:

  1. SageMaker Catalog creates the infrastructure – An Amazon Simple Storage Service (Amazon S3) table bucket named aws-sagemaker-catalog is created with an asset_metadata namespace and an empty asset table.
  2. Daily snapshots are captured – A scheduled job runs once per day around midnight (local time per AWS Region) to export updated asset metadata.
  3. Metadata is structured and partitioned – The export captures technical metadata (resource_id, resource_type), business metadata (asset_name, business_description), project ownership details, and timestamps, partitioned by snapshot_date for query performance.
  4. Data becomes queryable – Within 24 hours, the asset table appears in Amazon SageMaker Unified Studio under the aws-sagemaker-catalog bucket and becomes accessible through Amazon Athena, Studio notebooks, or external BI tools.
  5. Teams query using standard SQL – Data teams can now answer questions like “How many assets were registered last month?” or “Which assets lack business descriptions?” without building custom ETL pipelines.

The export evaluates catalog assets and their metadata properties in the Region, converting them into Apache Iceberg table format. The data flows into downstream analytics operations immediately, with no separate ETL or batch processes to maintain. The exported metadata becomes part of a queryable data lake that supports time-travel queries and historical analysis.

In this post, we demonstrate how to use the metadata export capability in Amazon SageMaker Catalog and perform analytics on these tables. We explore the following specific use cases:

  • Audit historical changes to investigate what an asset looked like at a specific point in time.
  • Monitor asset growth to view how the data catalog has grown over the last 30 days.
  • Track metadata improvements to see which assets gained descriptions or ownership over time.

Solution overview

AWS Cloud architecture diagram showing data pipeline from Amazon SageMaker Catalog to Amazon S3 Tables with daily export, connecting to query engines including Amazon Athena, Amazon Redshift, and Apache Spark

Figure 1 – SageMaker Catalog export to S3 Tables

The architecture consists of three key components:

  1. Amazon SageMaker Catalog exports asset metadata daily to Amazon S3.
  2. S3 Tables stores metadata as Apache Iceberg tables in the aws-sagemaker-catalog bucket with ACID compliance and time travel.
  3. Query engines (Amazon Athena, Amazon Redshift, and Apache Spark) access metadata using standard SQL from the asset_metadata.asset table.

What metadata is exported?

SageMaker Catalog exports metadata in the asset_metadata.asset table:

Metadata type | Fields | Description
Technical metadata | resource_id, resource_type_enum, account_id, region | Resource identifiers (ARN), types (GlueTable, RedshiftTable, S3Collection), and location
Namespace hierarchy | catalog, namespace, resource_name | Organizational structure for assets
Business metadata | asset_name, business_description | Human-readable names and descriptions
Ownership | extended_metadata['owningEntityId'] | Asset ownership information
Timestamps | asset_created_time, asset_updated_time, snapshot_time | Creation, update, and snapshot times
Custom metadata | extended_metadata['form-name.field-name'] | User-defined metadata forms as key-value pairs

The snapshot_time column supports point-in-time analysis and querying of historical catalog states.
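To make the row shape concrete, here is a small Python sketch that mimics a couple of exported rows and the kind of documentation-gap check this schema enables. The sample records, ARNs, and values are hypothetical stand-ins, not real catalog data:

```python
# Illustrative sketch: the shape of exported asset rows and a simple
# completeness check. Sample records are hypothetical; real rows come
# from the asset_metadata.asset table.
from datetime import date

sample_rows = [
    {"resource_id": "arn:aws:glue:us-east-1:111122223333:table/db/orders",
     "resource_type_enum": "GlueTable",
     "asset_name": "orders",
     "business_description": "Customer orders fact table",
     "extended_metadata": {"owningEntityId": "proj-123"},
     "snapshot_time": date(2026, 3, 8)},
    {"resource_id": "arn:aws:s3:::raw-events",
     "resource_type_enum": "S3Collection",
     "asset_name": "raw-events",
     "business_description": None,  # documentation gap
     "extended_metadata": {},
     "snapshot_time": date(2026, 3, 8)},
]

def undocumented_assets(rows):
    """Return names of assets that lack a business description."""
    return [r["asset_name"] for r in rows if not r["business_description"]]

print(undocumented_assets(sample_rows))  # ['raw-events']
```

The same check, run over the real table, is the "Which assets lack business descriptions?" SQL query shown later in this post.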

Prerequisites

To follow along with this post, you must have the following:

For SageMaker Unified Studio domain setup instructions, refer to the SageMaker Unified Studio Getting started guide.

After you complete the prerequisites, complete the following steps.

  1. Add this policy to your IAM user or role to enable metadata export. If you use SageMaker Unified Studio to query the catalog, add this policy to the AmazonSageMakerAdminIAMExecutionRole managed role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "datazone:GetDataExportConfiguration",
        "datazone:PutDataExportConfiguration"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3tables:CreateTableBucket",
        "s3tables:PutTableBucketPolicy"
      ],
      "Resource": "arn:aws:s3tables:*:*:bucket/aws-sagemaker-catalog"
    }
  ]
}
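As an optional local sanity check (illustrative only, not an AWS requirement), you can parse the policy with Python's json module before attaching it, confirming the document is valid JSON and contains the expected actions:

```python
# Minimal local sanity check of the IAM policy above before attaching it.
# This only validates JSON structure; it does not call AWS.
import json

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["datazone:GetDataExportConfiguration",
                "datazone:PutDataExportConfiguration"],
     "Resource": "*"},
    {"Effect": "Allow",
     "Action": ["s3tables:CreateTableBucket",
                "s3tables:PutTableBucketPolicy"],
     "Resource": "arn:aws:s3tables:*:*:bucket/aws-sagemaker-catalog"}
  ]
}
""")

actions = [a for stmt in policy["Statement"] for a in stmt["Action"]]
assert policy["Version"] == "2012-10-17"
assert "datazone:PutDataExportConfiguration" in actions
assert "s3tables:CreateTableBucket" in actions
print("policy looks well-formed")
```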
  2. Grant describe and select permissions for SageMaker Catalog with AWS Lake Formation. This step can be performed in the AWS Lake Formation console.
    1. Select Permissions -> Data permissions and choose Grant.

      AWS Lake Formation Grant Permissions interface showing principal type selection with IAM users and roles option selected and AmazonSageMakerAdminIAMExecutionRole assigned

      Figure 2 – AWS Lake Formation grant permission

    2. Under Principal type, select Principals, IAM users and roles, and choose the AWS managed AmazonSageMakerAdminIAMExecutionRole execution role.
    3. Choose Named Data Catalog resources.
    4. Under Catalogs, search for and select <account-id>:s3tablescatalog/aws-sagemaker-catalog.
    5. Under Databases, select the asset_metadata database.
      AWS Lake Formation Grant Permissions page showing Named Data Catalog resources method with s3tablescatalog/aws-sagemaker-catalog selected, asset_metadata database, and asset table configured

      Figure 3 – AWS Lake Formation catalog, database, and table

      AWS Lake Formation Grant Permissions interface showing table permissions with Select and Describe checked, grantable permissions section, and All data access radio button selected

      Figure 4 – AWS Lake Formation grant permission

    6. For Table, select asset.
    7. Under Table permissions, check Select and Describe.
    8. Choose Grant to save the permissions.

Enable data export using the AWS CLI

Configure metadata export using the PutDataExportConfiguration API. The Amazon DataZone service automatically creates an S3 table bucket named aws-sagemaker-catalog with an asset_metadata namespace, and schedules a daily export job. Asset metadata is exported once daily around midnight local time per AWS Region.

The SageMaker domain identifier is available on the domain details page in the AWS Management Console. Accessing the asset table through the S3 Tables console or the Data tab in SageMaker Unified Studio can take up to 24 hours.

AWS CLI command to enable SageMaker Catalog export:

aws datazone put-data-export-configuration --domain-identifier <domain-id> --region <region> --enable-export

Use this AWS CLI command to validate that the configuration is enabled:

aws datazone get-data-export-configuration --domain-identifier <domain-id> --region <region>
{
    "isExportEnabled": true,
    "status": "COMPLETED",
    "s3TableBucketArn": "arn:aws:s3tables:<region>:<account-id>:bucket/aws-sagemaker-catalog",
    "createdAt": "2025-11-26T18:24:02.150000+00:00",
    "updatedAt": "2026-02-23T19:33:40.987000+00:00"
}
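If you script this validation, a short Python sketch can parse a response like the one shown above (embedded here as a literal string for illustration) and confirm the export is usable:

```python
# Sketch: checking the configuration response returned by
# `aws datazone get-data-export-configuration`. The JSON literal below
# mirrors the sample response in this post; a script would instead read
# the CLI's stdout.
import json

response = json.loads("""
{
    "isExportEnabled": true,
    "status": "COMPLETED",
    "s3TableBucketArn": "arn:aws:s3tables:<region>:<account-id>:bucket/aws-sagemaker-catalog",
    "createdAt": "2025-11-26T18:24:02.150000+00:00",
    "updatedAt": "2026-02-23T19:33:40.987000+00:00"
}
""")

# The export is usable once it is enabled and the last run completed.
export_ready = response["isExportEnabled"] and response["status"] == "COMPLETED"
print("export ready:", export_ready)  # export ready: True
```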

Access the exported asset table

  1. Navigate to Amazon SageMaker Domains in the AWS Management Console.
  2. Select your domain and choose Open.
    Amazon SageMaker Domains management page showing an Identity Center based domain with Available status, created February 26, 2026, with Open unified studio button highlighted

    Figure 5 – Open Amazon SageMaker Unified Studio

  3. In SageMaker Unified Studio, choose a project from the Select a project dropdown list.
  4. To query SageMaker Catalog data, select Build in the menu bar and then choose Query Editor. To create a new project, follow the instructions in the Amazon SageMaker Unified Studio User Guide.
    SageMaker Unified Studio project overview dashboard showing IDE and Applications, Data Analysis and Integration with Query Editor highlighted, Orchestration, and Machine Learning and Generative AI categories

    Figure 6 – Open the SageMaker Unified Studio Query Editor

The asset_metadata.asset table is available in the Data explorer. Use the Data explorer to view the schema and query the data for analytics.

  1. Expand Catalogs in the Data explorer. Then, select and expand s3tablescatalog, aws-sagemaker-catalog, asset_metadata, and asset.
  2. Test querying the catalog with SELECT * FROM asset_metadata.asset LIMIT 10;.
SageMaker Unified Studio Query Editor with Data Explorer showing Lakehouse hierarchy including s3tablescatalog, aws-sagemaker-catalog, asset_metadata database, and asset table schema with SQL SELECT query

Figure 7 – Query the SageMaker catalog

Queries for observability and analytics

With setup complete, execute queries to gain insights into catalog usage and changes. To monitor asset growth and examine how the data catalog has grown over the last 5 days:

SELECT
    DATE(snapshot_time) as date,
    COUNT(*) as total_assets
FROM asset_metadata.asset
WHERE
    DATE(snapshot_time) >= CURRENT_DATE - INTERVAL '5' DAY
GROUP BY DATE(snapshot_time)
ORDER BY date DESC;

SageMaker Unified Studio Query Editor showing SQL aggregation query on asset_metadata.asset table with results displaying date and total_assets columns, returning 42 assets for March 7-8, 2026

Figure 8 – Query asset growth
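The aggregation this query performs can be illustrated in plain Python over hypothetical snapshot rows (the dates and counts below are made up for illustration and do not reflect a real catalog):

```python
# Sketch of the growth query's logic: group snapshot rows by date and
# count them, like GROUP BY DATE(snapshot_time) ... COUNT(*).
from collections import Counter
from datetime import date

# One entry per exported asset row; hypothetical sample data.
snapshot_times = [
    date(2026, 3, 7), date(2026, 3, 7),
    date(2026, 3, 8), date(2026, 3, 8), date(2026, 3, 8),
]

totals = Counter(snapshot_times)
for day, total_assets in sorted(totals.items(), reverse=True):
    print(day, total_assets)
# 2026-03-08 3
# 2026-03-07 2
```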

Use the catalog to track metadata changes and determine which assets gained descriptions or ownership over time. Use this query to identify assets that gained business descriptions over the past 5 days by comparing today's snapshot with the earlier snapshot.

SELECT
    t.asset_id,
    t.resource_name,
    p.business_description as description_before,
    t.business_description as description_now
FROM asset_metadata.asset t
JOIN asset_metadata.asset p ON t.asset_id = p.asset_id
WHERE DATE(t.snapshot_time) = CURRENT_DATE
    AND DATE(p.snapshot_time) = CURRENT_DATE - INTERVAL '5' DAY
    AND p.business_description IS NULL
    AND t.business_description IS NOT NULL;
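The comparison this self-join performs can be sketched in Python over two hypothetical snapshots, finding assets whose description was missing in the earlier snapshot but populated in the current one:

```python
# Sketch of the self-join logic: a description was NULL five days ago
# and is populated today. The asset IDs and text are hypothetical
# stand-ins for rows of asset_metadata.asset.
previous = {  # snapshot from 5 days ago: asset_id -> business_description
    "a-1": None,
    "a-2": "Customer orders fact table",
}
current = {  # today's snapshot
    "a-1": "Raw clickstream events",  # description was added
    "a-2": "Customer orders fact table",
}

gained_description = [
    asset_id
    for asset_id, desc in current.items()
    if desc is not None and previous.get(asset_id) is None
]
print(gained_description)  # ['a-1']
```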

Examine asset values at a specific point in time using this query to retrieve metadata from any snapshot date.

SELECT
     asset_id,
     resource_name,
     business_description,
     extended_metadata['owningEntityId'] as owner,
     snapshot_time
FROM asset_metadata.asset
WHERE asset_id = 'your-asset-id'
     AND DATE(snapshot_time) = DATE('2025-11-26');

Clean up resources

To avoid ongoing charges, clean up the resources created in this walkthrough:

  1. Disable metadata export:

Disable the daily metadata export to stop new snapshots:

aws datazone put-data-export-configuration \
  --domain-identifier <domain-id> \
  --no-enable-export \
  --region <region>

  2. Delete S3 Tables resources:

Optionally, delete the S3 Tables namespace containing the exported metadata to remove historical snapshots and stop storage charges. For instructions on how to delete S3 tables, see Deleting an Amazon S3 table in the Amazon Simple Storage Service User Guide.

Conclusion

In this post, you enabled the metadata export feature of SageMaker Catalog and used SQL queries to gain visibility into your asset inventory. The feature converts asset metadata into Apache Iceberg tables partitioned by snapshot date, so you can perform time-travel queries, monitor catalog growth, track metadata completeness, and audit historical asset states. This provides a repeatable, low-overhead way to maintain catalog health and meet governance requirements over time.

To learn more about Amazon SageMaker Catalog, see the Amazon SageMaker Catalog documentation. To explore Apache Iceberg table formats and time-travel queries, see the Amazon S3 Tables documentation.


About the Authors

Photo of Author Ramesh Singh

Ramesh is a Senior Product Manager Technical (External Services) at AWS in Seattle, Washington, currently with the Amazon SageMaker team. He's passionate about building high-performance ML/AI and analytics products that help enterprise customers achieve their critical goals using cutting-edge technology.

Photo of Author Pradeep Misra

Pradeep is a Principal Analytics and Applied AI Solutions Architect at AWS. He's passionate about solving customer challenges using data, analytics, and applied AI. Outside of work, he likes exploring new places and playing badminton with his family. He also likes doing science experiments, building LEGOs, and watching anime with his daughters.

Photo of Author - Rohith Kayathi

Rohith is a Senior Software Engineer at Amazon Web Services (AWS) working with the Amazon SageMaker team. He leads enterprise data catalog, generative AI–powered metadata curation, and lineage features. He's passionate about building large-scale distributed systems, solving complex problems, and setting the bar for engineering excellence for his team.

Photo of Author - Steve Phillips

Steve is a Principal Technical Account Manager and Analytics specialist at AWS in the North America region. Steve currently focuses on data warehouse architectural design, data lakes, data ingestion pipelines, and cloud distributed architectures.
