
Introducing Llama 3.2 models from Meta in Amazon Bedrock: A new generation of multimodal vision and lightweight models



In July, we announced the availability of Llama 3.1 models in Amazon Bedrock. Generative AI technology is improving at incredible speed and today, we are excited to introduce the new Llama 3.2 models from Meta in Amazon Bedrock.

Llama 3.2 offers multimodal vision and lightweight models representing Meta’s latest advancement in large language models (LLMs) and providing enhanced capabilities and broader applicability across various use cases. With a focus on responsible innovation and system-level safety, these new models demonstrate state-of-the-art performance on a wide range of industry benchmarks and introduce features that help you build a new generation of AI experiences.

These models are designed to inspire developers with image reasoning and are more accessible for edge applications, unlocking more possibilities with AI.

The Llama 3.2 collection of models is offered in various sizes, from lightweight text-only 1B and 3B parameter models suitable for edge devices to small and medium-sized 11B and 90B parameter models capable of sophisticated reasoning tasks, including multimodal support for high-resolution images. Llama 3.2 11B and 90B are the first Llama models to support vision tasks, with a new model architecture that integrates image encoder representations into the language model. The new models are designed to be more efficient for AI workloads, with reduced latency and improved performance, making them suitable for a wide range of applications.

All Llama 3.2 models support a 128K context length, maintaining the expanded token capacity introduced in Llama 3.1. Additionally, the models offer improved multilingual support for eight languages including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

In addition to the existing text-capable Llama 3.1 8B, 70B, and 405B models, Llama 3.2 supports multimodal use cases. You can now use four new Llama 3.2 models — 90B, 11B, 3B, and 1B — from Meta in Amazon Bedrock to build, experiment, and scale your creative ideas:

Llama 3.2 90B Vision (text + image input) – Meta’s most advanced model, ideal for enterprise-level applications. This model excels at general knowledge, long-form text generation, multilingual translation, coding, math, and advanced reasoning. It also introduces image reasoning capabilities, allowing for image understanding and visual reasoning tasks. This model is ideal for the following use cases: image captioning, image-text retrieval, visual grounding, visual question answering and visual reasoning, and document visual question answering.

Llama 3.2 11B Vision (text + image input) – Well-suited for content creation, conversational AI, language understanding, and enterprise applications requiring visual reasoning. The model demonstrates strong performance in text summarization, sentiment analysis, code generation, and following instructions, with the added ability to reason about images. The use cases for this model are similar to those of the 90B version: image captioning, image-text retrieval, visual grounding, visual question answering and visual reasoning, and document visual question answering.

Llama 3.2 3B (text input) – Designed for applications requiring low-latency inferencing and limited computational resources. It excels at text summarization, classification, and language translation tasks. This model is ideal for the following use cases: mobile AI-powered writing assistants and customer service applications.

Llama 3.2 1B (text input) – The most lightweight model in the Llama 3.2 collection of models, perfect for retrieval and summarization for edge devices and mobile applications. This model is ideal for the following use cases: personal information management and multilingual knowledge retrieval.

In addition, Llama 3.2 is built on top of the Llama Stack, a standardized interface for building canonical toolchain components and agentic applications, making building and deploying easier than ever. Llama Stack API adapters and distributions are designed to most effectively leverage the Llama model capabilities, and they give customers the ability to benchmark Llama models across different vendors.

Meta has tested Llama 3.2 on over 150 benchmark datasets spanning multiple languages and conducted extensive human evaluations, demonstrating competitive performance with other leading foundation models. Let’s see how these models work in practice.

Using Llama 3.2 models in Amazon Bedrock
To get started with Llama 3.2 models, I navigate to the Amazon Bedrock console and choose Model access on the navigation pane. There, I request access for the new Llama 3.2 models: Llama 3.2 1B, 3B, 11B Vision, and 90B Vision.

To test the new vision capability, I open another browser tab and download from the Our World in Data website the Share of electricity generated by renewables chart in PNG format. The chart is very high resolution, so I resize it to be 1024 pixels wide.

Back in the Amazon Bedrock console, I choose Chat under Playgrounds in the navigation pane, select Meta as the category, and choose the Llama 3.2 90B Vision model.

I use Choose files to select the resized chart image and use this prompt:

Based on this chart, which countries in Europe have the highest share?

I choose Run, and the model analyzes the image and returns its results:

Using Meta Llama 3.2 models in the Amazon Bedrock console

I can also access the models programmatically using the AWS Command Line Interface (AWS CLI) and AWS SDKs. Compared to using the Llama 3.1 models, I only need to update the model IDs as described in the documentation. I can also use the new cross-region inference endpoints for the US and the EU Regions. These endpoints work for any Region within the US and the EU respectively. For example, the cross-region inference endpoints for the Llama 3.2 90B Vision model are:

  • us.meta.llama3-2-90b-instruct-v1:0
  • eu.meta.llama3-2-90b-instruct-v1:0

Here’s a sample AWS CLI command using the Amazon Bedrock Converse API. I use the --query parameter of the CLI to filter the result and only show the text content of the output message:

aws bedrock-runtime converse --messages '[{ "role": "user", "content": [ { "text": "Tell me the three largest cities in Italy." } ] }]' --model-id us.meta.llama3-2-90b-instruct-v1:0 --query 'output.message.content[*].text' --output text

In the output, I get the response message from the "assistant".

The three largest cities in Italy are:

1. Rome (Roma) - population: approximately 2.8 million
2. Milan (Milano) - population: approximately 1.4 million
3. Naples (Napoli) - population: approximately 970,000

It’s not much different if you use one of the AWS SDKs. For example, here’s how you can use Python with the AWS SDK for Python (Boto3) to analyze the same image as in the console example:

import boto3

MODEL_ID = "us.meta.llama3-2-90b-instruct-v1:0"
# MODEL_ID = "eu.meta.llama3-2-90b-instruct-v1:0"

IMAGE_NAME = "share-electricity-renewable-small.png"

bedrock_runtime = boto3.client("bedrock-runtime")

# Read the chart image as raw bytes.
with open(IMAGE_NAME, "rb") as f:
    image = f.read()

user_message = "Based on this chart, which countries in Europe have the highest share?"

# A single user message containing both the image and the text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image}}},
            {"text": user_message},
        ],
    }
]

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=messages,
)
response_text = response["output"]["message"]["content"][0]["text"]
print(response_text)

Llama 3.2 models are also available in Amazon SageMaker JumpStart, a machine learning (ML) hub that makes it easy to deploy pre-trained models using the console or programmatically through the SageMaker Python SDK. From SageMaker JumpStart, you can also access and deploy new safeguard models that can help classify the safety level of model inputs (prompts) and outputs (responses), including Llama Guard 3 11B Vision, which are designed to support responsible innovation and system-level safety.
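
As a minimal sketch of what the programmatic path can look like with the SageMaker Python SDK: the JumpStart model ID below is an assumption based on JumpStart’s usual naming pattern, and the prediction payload format can vary by model version, so check the JumpStart model catalog for the exact details.

from sagemaker.jumpstart.model import JumpStartModel

# The model ID is an assumption; look up the exact ID in the JumpStart catalog.
model = JumpStartModel(model_id="meta-textgeneration-llama-3-2-3b")

# Deploying creates a real-time SageMaker endpoint. Llama models require
# accepting the EULA.
predictor = model.deploy(accept_eula=True)

response = predictor.predict({"inputs": "Tell me the three largest cities in Italy."})
print(response)

predictor.delete_endpoint()  # clean up to stop incurring charges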

In addition, you can easily fine-tune Llama 3.2 1B and 3B models with SageMaker JumpStart today. Fine-tuned models can then be imported as custom models into Amazon Bedrock. Fine-tuning for the full collection of Llama 3.2 models in Amazon Bedrock and Amazon SageMaker JumpStart is coming soon.
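
Here’s a rough sketch of that fine-tuning step with the SageMaker Python SDK. The model ID and S3 URI are placeholders, and the expected dataset format depends on the model, so treat this as an outline under those assumptions rather than a complete recipe:

from sagemaker.jumpstart.estimator import JumpStartEstimator

# Model ID and S3 URI are placeholders; adjust for your account and dataset.
estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-3-2-1b",
    environment={"accept_eula": "true"},  # Llama models require EULA acceptance
)

# Launch a fine-tuning job on a dataset stored in Amazon S3.
estimator.fit({"training": "s3://amzn-s3-demo-bucket/llama-3-2-fine-tuning/"})

# The fine-tuned model can be deployed directly, or its artifacts imported
# into Amazon Bedrock as a custom model.
predictor = estimator.deploy()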

The publicly available weights of Llama 3.2 models make it easier to deliver tailored solutions for custom needs. For example, you can fine-tune a Llama 3.2 model for a specific use case and bring it into Amazon Bedrock as a custom model, potentially outperforming other models in domain-specific tasks. Whether you’re fine-tuning for enhanced performance in areas like content creation, language understanding, or visual reasoning, Llama 3.2’s availability in Amazon Bedrock and SageMaker empowers you to create unique, high-performing AI capabilities that can set your solutions apart.
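
The import step itself uses Amazon Bedrock Custom Model Import. Here’s a short sketch with Boto3; the job name, model name, role ARN, and S3 URI are all placeholders for your own resources:

import boto3

bedrock = boto3.client("bedrock")

# All names, ARNs, and URIs below are placeholders.
job = bedrock.create_model_import_job(
    jobName="llama-3-2-1b-fine-tuned-import",
    importedModelName="llama-3-2-1b-fine-tuned",
    roleArn="arn:aws:iam::111122223333:role/BedrockModelImportRole",
    modelDataSource={
        "s3DataSource": {
            "s3Uri": "s3://amzn-s3-demo-bucket/llama-3-2-fine-tuned-weights/"
        }
    },
)
print(job["jobArn"])  # poll get_model_import_job until the import completes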

More on the Llama 3.2 model architecture
Llama 3.2 builds upon the success of its predecessors with an advanced architecture designed for optimal performance and versatility:

Auto-regressive language model – At its core, Llama 3.2 uses an optimized transformer architecture, allowing it to generate text by predicting the next token based on the previous context.

Fine-tuning techniques – The instruction-tuned versions of Llama 3.2 employ two key techniques:

  • Supervised fine-tuning (SFT) – This process adapts the model to follow specific instructions and generate more relevant responses.
  • Reinforcement learning with human feedback (RLHF) – This advanced technique aligns the model’s outputs with human preferences, enhancing helpfulness and safety.

Multimodal capabilities – For the 11B and 90B Vision models, Llama 3.2 introduces a novel approach to image understanding:

  • Separately trained image reasoning adaptor weights are integrated with the core LLM weights.
  • These adaptors are connected to the main model through cross-attention mechanisms. Cross-attention allows one component of the model to focus on relevant parts of another component’s output, enabling information flow between different sections of the model (see the sketch after this list).
  • When an image is input, the model treats the image reasoning process as a “tool use” operation, allowing for sophisticated visual analysis alongside text processing. In this context, tool use is the generic term used when a model uses external resources or functions to augment its capabilities and complete tasks more effectively.

Optimized inference – All models support grouped-query attention (GQA), which enhances inference speed and efficiency, particularly beneficial for the larger 90B model.

This architecture allows Llama 3.2 to handle a wide range of tasks, from text generation and understanding to complex reasoning and image analysis, all while maintaining high performance and adaptability across different model sizes.
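
To make the cross-attention idea concrete, here’s a toy sketch in Python with PyTorch. This is an illustration of the general mechanism under simplified assumptions of my own (dimensions and layer layout are made up), not Meta’s actual implementation: text hidden states produce the queries, image encoder features produce the keys and values, and the adapter adds the result back into the text stream.

import math

import torch
import torch.nn as nn

class CrossAttentionAdapter(nn.Module):
    """Toy adapter: text hidden states attend to image encoder features."""

    def __init__(self, d_model: int, d_image: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)  # queries come from the language model
        self.k = nn.Linear(d_image, d_model)  # keys come from the image encoder
        self.v = nn.Linear(d_image, d_model)  # values come from the image encoder

    def forward(self, text_states, image_features):
        q = self.q(text_states)
        k = self.k(image_features)
        v = self.v(image_features)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        # Residual connection: image information is added to the text stream,
        # leaving the core language model weights untouched.
        return text_states + torch.softmax(scores, dim=-1) @ v

# 16 text tokens attending to 256 image patch features (dimensions made up).
adapter = CrossAttentionAdapter(d_model=512, d_image=256)
out = adapter(torch.randn(1, 16, 512), torch.randn(1, 256, 256))
print(out.shape)  # torch.Size([1, 16, 512])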

Things to know
Llama 3.2 models from Meta are now generally available in Amazon Bedrock in the following AWS Regions:

  • Llama 3.2 1B and 3B models are available in the US West (Oregon) and Europe (Frankfurt) Regions, and are available in the US East (Ohio, N. Virginia) and Europe (Ireland, Paris) Regions via cross-region inference.
  • Llama 3.2 11B Vision and 90B Vision models are available in the US West (Oregon) Region, and are available in the US East (Ohio, N. Virginia) Regions via cross-region inference.

Check the full AWS Region list for future updates. To estimate your costs, visit the Amazon Bedrock pricing page.

To learn more about how you can use Llama 3.2 11B and 90B models to support vision tasks, read the Vision use cases with Llama 3.2 11B and 90B models from Meta post on the AWS Machine Learning blog channel.

AWS and Meta are also collaborating to bring smaller Llama models to on-device applications, featuring the new 1B and 3B models. For more information, see the Opportunities for telecoms with small language models: Insights from AWS and Meta post on the AWS for Industries blog channel.

To learn more about Llama 3.2 features and capabilities, visit the Llama models section of the Amazon Bedrock documentation. Give Llama 3.2 a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock.

You can find deep-dive technical content and discover how our Builder communities are using Amazon Bedrock at community.aws. Let us know what you build with Llama 3.2 in Amazon Bedrock!

Danilo


