
Integrate Amazon Bedrock with Amazon Redshift ML for generative AI applications


Amazon Redshift has enhanced its Redshift ML feature to support integration of large language models (LLMs). As part of these enhancements, Redshift now enables native integration with Amazon Bedrock. This integration lets you use LLMs from simple SQL commands alongside your data in Amazon Redshift, helping you build generative AI applications quickly. This powerful combination enables customers to harness the transformative capabilities of LLMs and seamlessly incorporate them into their analytical workflows.

With this new integration, you can now perform generative AI tasks such as language translation, text summarization, text generation, customer classification, and sentiment analysis on your Redshift data using popular foundation models (FMs) such as Anthropic's Claude, Amazon Titan, Meta's Llama 2, and Mistral AI. You can use the CREATE EXTERNAL MODEL command to point to a text-based model in Amazon Bedrock, requiring no model training or provisioning. You can invoke these models using familiar SQL commands, making it more straightforward than ever to integrate generative AI capabilities into your data analytics workflows.

Solution overview

To illustrate this new Redshift machine learning (ML) feature, we will build a solution to generate personalized diet plans for patients based on their conditions and medications. The following figure shows the steps to build the solution and the steps to run it.

The steps to build and run the solution are the following:

  1. Load sample patients' data
  2. Prepare the prompt
  3. Enable LLM access
  4. Create a model that references the LLM model on Amazon Bedrock
  5. Send the prompt and generate a personalized patient diet plan

Prerequisites

  1. An AWS account.
  2. An Amazon Redshift Serverless workgroup or provisioned data warehouse. For setup instructions, see Creating a workgroup with a namespace or Create a sample Amazon Redshift data warehouse, respectively. The Amazon Bedrock integration feature is supported in both Amazon Redshift provisioned and serverless.
  3. Create or update an AWS Identity and Access Management (IAM) role for Amazon Redshift ML integration with Amazon Bedrock.
  4. Associate the IAM role to a Redshift instance.
  5. Users should have the required permissions to create models, as shown in the GRANT example after this list.
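
As a sketch of the last point, a superuser can grant the model-creation privilege with standard Redshift GRANT syntax (demo_user is a placeholder name):

-- Allow a user to create Redshift ML models
GRANT CREATE MODEL TO demo_user;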

Implementation

The following are the solution implementation steps. The sample data used in the implementation is for illustration only. The same implementation approach can be adapted to your specific data sets and use cases.

You can download a SQL notebook to run the implementation steps in Redshift Query Editor V2. If you're using another SQL editor, you can copy and paste the SQL queries either from the content of this post or from the notebook.

Load sample patients' data:

  1. Open Amazon Redshift Query Editor V2 or another SQL editor of your choice and connect to the Redshift data warehouse.
  2. Run the following SQL to create the patientsinfo table and load sample data.
-- Create table

CREATE TABLE patientsinfo (
pid integer ENCODE az64,
pname varchar(100),
condition character varying(100) ENCODE lzo,
medication character varying(100) ENCODE lzo
);

  3. Download the sample file, upload it into your S3 bucket, and load the data into the patientsinfo table using the following COPY command.
-- Load sample data
COPY patientsinfo
FROM 's3://<<your_s3_bucket>>/sample_patientsinfo.csv'
IAM_ROLE DEFAULT
csv
DELIMITER ','
IGNOREHEADER 1;
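
If you don't have the sample file at hand, you can follow along by inserting a few rows directly. The values below are invented for illustration only; note that a patient can appear on multiple rows, one per condition.

-- Hypothetical sample rows for illustration only
INSERT INTO patientsinfo VALUES
(101, 'John Doe', 'Diabetes', 'Metformin'),
(101, 'John Doe', 'Hypertension', 'Lisinopril'),
(102, 'Jane Roe', 'Asthma', 'Albuterol');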

Prepare the prompt:

  1. Run the following SQL to aggregate patient conditions and medications.
SELECT
pname,
listagg(distinct condition,',') within group (order by pid) over (partition by pid) as conditions,
listagg(distinct medication,',') within group (order by pid) over (partition by pid) as medications
FROM patientsinfo

The following is the sample output showing aggregated conditions and medications. The output includes multiple rows, which will be grouped in the next step.

  2. Build the prompt to combine the patient, conditions, and medications data.
SELECT
pname || ' has ' || conditions || ' taking ' || medications as patient_prompt
FROM (
    SELECT pname, 
    listagg(distinct condition,',') within group (order by pid) over (partition by pid) as conditions,
    listagg(distinct medication,',') within group (order by pid) over (partition by pid) as medications
    FROM patientsinfo) 
GROUP BY 1

The following is the sample output showing the results of the fully built prompt, concatenating the patients, conditions, and medications into a single column value.

  3. Create a materialized view with the preceding SQL query as the definition. This step isn't mandatory; you're creating the table for readability. Note that you might see a message indicating that materialized views with column aliases won't be incrementally refreshed. You can safely ignore this message for the purpose of this illustration.
CREATE MATERIALIZED VIEW mv_prompts AUTO REFRESH YES
AS
(
SELECT pid,
pname || ' has ' || conditions || ' taking ' || medications as patient_prompt
FROM (
SELECT pname, pid,
listagg(distinct condition,',') within group (order by pid) over (partition by pid) as conditions,
listagg(distinct medication,',') within group (order by pid) over (partition by pid) as medications
FROM patientsinfo)
GROUP BY 1,2
)

  4. Run the following SQL to review the sample output.
SELECT * FROM mv_prompts limit 5;

The following is a sample output with the materialized view.
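
Because the view definition uses column aliases, it won't be refreshed incrementally. If you load more patient rows later, you can recompute the view manually; a minimal example:

REFRESH MATERIALIZED VIEW mv_prompts;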

Enable LLM model access:

Perform the following steps to enable model access in Amazon Bedrock.

  1. Navigate to the Amazon Bedrock console.
  2. In the navigation pane, choose Model access.

  3. Choose Enable specific models.
    You must have the required IAM permissions to enable access to available Amazon Bedrock FMs.

  4. For this illustration, use Anthropic's Claude model. Enter Claude in the search box and select Claude from the list. Choose Next to proceed.

  5. Review the selection and choose Submit.

Create a model referencing the LLM model on Amazon Bedrock:

  1. Navigate back to Amazon Redshift Query Editor V2 or, if you didn't use Query Editor V2, to the SQL editor you used to connect to the Redshift data warehouse.
  2. Run the following SQL to create an external model referencing the anthropic.claude-v2 model on Amazon Bedrock. See Amazon Bedrock model IDs for how to find the model ID.
CREATE EXTERNAL MODEL patient_recommendations
FUNCTION patient_recommendations_func
IAM_ROLE '<<provide the ARN of the IAM role created in the prerequisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
    MODEL_ID 'anthropic.claude-v2',
    PROMPT 'Generate personalized diet plan for following patient:');

Send the prompt and generate a personalized patient diet plan:

  1. Run the following SQL to pass the prompt to the function created in the previous step.
SELECT patient_recommendations_func(patient_prompt) 
FROM mv_prompts limit 2;

  2. You'll get the output with the generated diet plan. You can copy the cells and paste them into a text editor, or export the output to view the results in a spreadsheet if you're using Redshift Query Editor V2.

You will need to expand the row size to see the complete text.

Additional customization options

The previous example demonstrates a straightforward integration of Amazon Redshift with Amazon Bedrock. However, you can further customize this integration to suit your specific needs and requirements.

  • Inference functions as leader-only functions: Amazon Bedrock model inference functions can run as leader node-only functions when the query doesn't reference tables. This can be helpful if you want to quickly ask an LLM a question.

You can run the following SQL with no FROM clause. It will run as a leader node-only function because it doesn't need data to fetch and pass to the model.

SELECT patient_recommendations_func('Generate diet plan for pre-diabetes');

This will return a generic 7-day diet plan for pre-diabetes. The following figure is a sample of the output generated by the preceding function call.

  • Inference with UNIFIED request type models: In this mode, you can pass additional optional parameters along with the input text to customize the response. Amazon Redshift passes these parameters to the corresponding parameters for the Converse API.

In the following example, we're setting the temperature parameter to a custom value. The temperature parameter affects the randomness and creativity of the model's outputs. The default value is 1 (the range is 0–1.0).

SELECT patient_recommendations_func(patient_prompt,object('temperature', 0.2)) 
FROM mv_prompts
WHERE pid=101;

The following is a sample output with a temperature of 0.2. The output includes recommendations to drink fluids and avoid certain foods.

Regenerate the predictions, this time setting the temperature to 0.8 for the same patient.

SELECT patient_recommendations_func(patient_prompt,object('temperature', 0.8)) 
FROM mv_prompts
WHERE pid=101;

The following is a sample output with a temperature of 0.8. The output still includes recommendations on fluid intake and foods to avoid, but is more specific in those recommendations.

Note that the output won't be the same every time you run a particular query. However, we want to illustrate that the model's behavior is influenced by changing parameters.
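
Temperature isn't the only option: with UNIFIED request type models, you can pass other Converse API inference parameters through the same object() argument. The following is a minimal sketch combining temperature with a response-length cap; the max_tokens key name is an assumption here, so verify the exact parameter names in the Amazon Redshift ML documentation for your model.

-- 'max_tokens' key name is an assumption; check the Redshift ML docs
SELECT patient_recommendations_func(patient_prompt, object('temperature', 0.2, 'max_tokens', 300))
FROM mv_prompts
WHERE pid=101;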

  • Inference with RAW request type models: CREATE EXTERNAL MODEL supports Amazon Bedrock-hosted models, including those that aren't supported by the Amazon Bedrock Converse API. In those cases, the request_type needs to be raw and the request needs to be constructed during inference. The request is a combination of a prompt and optional parameters.

Make sure that you enable access to the Titan Text G1 – Express model in Amazon Bedrock before running the following example. You should follow the same steps as described previously in Enable LLM model access to enable access to this model.

-- Create model with REQUEST_TYPE as RAW

CREATE EXTERNAL MODEL titan_raw
FUNCTION func_titan_raw
IAM_ROLE '<<provide the ARN of the IAM role created in the prerequisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
MODEL_ID 'amazon.titan-text-express-v1',
REQUEST_TYPE RAW,
RESPONSE_TYPE SUPER);

-- Need to construct the request during inference.
SELECT func_titan_raw(object('inputText', 'Generate personalized diet plan for following: ' || patient_prompt, 'textGenerationConfig', object('temperature', 0.5, 'maxTokenCount', 500)))
FROM mv_prompts limit 1;

The following figure shows the sample output.
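
Because RESPONSE_TYPE is SUPER, the function returns the raw model response, which you can navigate with PartiQL-style paths to extract just the generated text. The following is a sketch assuming the Titan text response shape (a results array containing an outputText field); SUPER attribute names are case sensitive, hence the session setting.

-- Respect the camelCase attribute names in the model response
SET enable_case_sensitive_super_attribute TO on;

-- Extract only the generated text from the raw Titan response
SELECT response.results[0].outputText AS diet_plan
FROM (
    SELECT func_titan_raw(object('inputText', 'Generate personalized diet plan for following: ' || patient_prompt,
        'textGenerationConfig', object('temperature', 0.5, 'maxTokenCount', 500))) AS response
    FROM mv_prompts limit 1
) AS t;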

  • Fetch run metrics with RESPONSE_TYPE as SUPER: If you need more information about an input request, such as total tokens, you can request the RESPONSE_TYPE to be super when you create the model.
-- Create model specifying RESPONSE_TYPE as SUPER.

CREATE EXTERNAL MODEL patient_recommendations_v2
FUNCTION patient_recommendations_func_v2
IAM_ROLE '<<provide the ARN of the IAM role created in the prerequisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
MODEL_ID 'anthropic.claude-v2',
PROMPT 'Generate personalized diet plan for following patient:',
RESPONSE_TYPE SUPER);

-- Run the inference function
SELECT patient_recommendations_func_v2(patient_prompt)
FROM mv_prompts limit 1;

The following figure shows the output, which includes the input tokens, output tokens, and latency metrics.

Considerations and best practices

There are a few things to keep in mind when using the methods described in this post:

  • Inference queries might generate throttling exceptions because of the limited runtime quotas for Amazon Bedrock. Amazon Redshift retries requests multiple times, but queries can still be throttled because throughput for non-provisioned models can be variable.
  • The throughput of inference queries is limited by the runtime quotas of the different models offered by Amazon Bedrock in different AWS Regions. If you find that the throughput isn't enough for your application, you can request a quota increase for your account. For more information, see Quotas for Amazon Bedrock.
  • If you need stable and consistent throughput, consider getting provisioned throughput for the model that you need from Amazon Bedrock. For more information, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock.
  • Using Amazon Redshift ML with Amazon Bedrock incurs additional costs. The cost is model- and Region-specific and depends on the number of input and output tokens that the model will process. For more information, see Amazon Bedrock Pricing.

Cleanup

To avoid incurring future charges, delete the Redshift Serverless instance or Redshift provisioned data warehouse created as part of the prerequisite steps.
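
If you would rather keep the data warehouse, you can instead drop only the objects created in this walkthrough; a sketch using the names from this post:

-- Drop the objects created in this post
DROP MODEL patient_recommendations;
DROP MODEL patient_recommendations_v2;
DROP MODEL titan_raw;
DROP MATERIALIZED VIEW mv_prompts;
DROP TABLE patientsinfo;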

Conclusion

In this post, you learned how to use the Amazon Redshift ML feature to invoke LLMs on Amazon Bedrock from Amazon Redshift. You were provided with step-by-step instructions on how to implement this integration, using illustrative datasets. Additionally, you learned about various options to further customize the integration to help meet your specific needs. We encourage you to try Redshift ML integration with Amazon Bedrock and share your feedback with us.


About the Authors

Satesh Sonti is a Sr. Analytics Specialist Solutions Architect based out of Atlanta, specializing in building enterprise data services, data warehousing, and analytics solutions. He has over 19 years of experience in building data assets and leading complex data services for banking and insurance clients across the globe.

Nikos Koulouris is a Software Development Engineer at AWS. He received his PhD from University of California, San Diego and has been working in the areas of databases and analytics.
