Today, we’re announcing the general availability of 18 additional fully managed open weight models in Amazon Bedrock from Google, MiniMax AI, Mistral AI, Moonshot AI, NVIDIA, OpenAI, and Qwen, including the new Mistral Large 3 and Ministral 3 3B, 8B, and 14B models.
With this launch, Amazon Bedrock now offers nearly 100 serverless models, providing a broad and deep range of models from leading AI companies, so customers can choose the precise capabilities that best serve their unique needs. By closely monitoring both customer needs and technological advancements, we regularly expand our curated selection of models to include promising new models alongside established industry favorites.
This ongoing expansion of high-performing and differentiated model offerings helps customers stay at the forefront of AI innovation. You can access these models in Amazon Bedrock through the unified API, and evaluate, switch, and adopt new models without rewriting applications or changing infrastructure.
New Mistral AI models
These four Mistral AI models are now available first on Amazon Bedrock, each optimized for different performance and cost requirements:
- Mistral Large 3 – This open weight model is optimized for long context, multimodality, and instruction reliability. It excels at long document understanding, agentic and tool use workflows, enterprise knowledge work, coding assistance, advanced workloads such as math and coding tasks, multilingual analysis and processing, and multimodal reasoning with vision.
- Ministral 3 3B – The smallest in the Ministral 3 family is edge-optimized for single GPU deployment with strong language and vision capabilities. It shows strong performance in image captioning, text classification, real-time translation, data extraction, fast content generation, and lightweight real-time applications on edge or low-resource devices.
- Ministral 3 8B – The best-in-class Ministral 3 model for text and vision is edge-optimized for single GPU deployment with high performance and minimal footprint. This model is ideal for chat interfaces in constrained environments, image and document description and understanding, specialized agentic use cases, and balanced performance for local or embedded systems.
- Ministral 3 14B – The most capable Ministral 3 model delivers state-of-the-art text and vision performance optimized for single GPU deployment. You can use it for advanced local agentic use cases and private AI deployments where advanced capabilities meet practical hardware constraints.
More open weight model options
You can use these open weight models for a wide range of use cases across industries:
| Model provider | Model name | Description | Use cases |
| --- | --- | --- | --- |
| Google | Gemma 3 4B | Efficient text and image model that runs locally on laptops. Multilingual support for on-device AI applications. | On-device AI for mobile and edge applications, privacy-sensitive local inference, multilingual chat assistants, image captioning and description, and lightweight content generation. |
| | Gemma 3 12B | Balanced text and image model for workstations. Multi-language understanding with local deployment for privacy-sensitive applications. | Workstation-based AI applications, local deployment for enterprises, multilingual document processing, image analysis and Q&A, and privacy-compliant AI assistants. |
| | Gemma 3 27B | Powerful text and image model for enterprise applications. Multi-language support with local deployment for privacy and control. | Enterprise local deployment, high-performance multimodal applications, advanced image understanding, multilingual customer service, and data-sensitive AI workflows. |
| Moonshot AI | Kimi K2 Thinking | Deep reasoning model that thinks while using tools. Handles research, coding, and complex workflows requiring hundreds of sequential actions. | Complex coding projects requiring planning, multistep workflows, data analysis and computation, and long-form content creation with research. |
| MiniMax AI | MiniMax M2 | Built for coding agents and automation. Excels at multi-file edits, terminal operations, and executing long tool-calling chains efficiently. | Coding agents and integrated development environment (IDE) integration, multi-file code editing, terminal automation and DevOps, long-chain tool orchestration, and agentic software development. |
| Mistral AI | Magistral Small 1.2 | Excels at math, coding, multilingual tasks, and multimodal reasoning with vision capabilities for efficient local deployment. | Math and coding tasks, multilingual analysis and processing, and multimodal reasoning with vision. |
| | Voxtral Mini 1.0 | Advanced audio understanding model with transcription, multilingual support, Q&A, and summarization. | Voice-controlled applications, fast speech-to-text conversion, and offline voice assistants. |
| | Voxtral Small 1.0 | Features state-of-the-art audio input with best-in-class text performance; excels at speech transcription, translation, and understanding. | Enterprise speech transcription, multilingual customer service, and audio content summarization. |
| NVIDIA | NVIDIA Nemotron Nano 2 9B | High-efficiency LLM with a hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks. | Reasoning, tool calling, math, coding, and instruction following. |
| | NVIDIA Nemotron Nano 2 VL 12B | Advanced multimodal reasoning model for video understanding and document intelligence, powering Retrieval-Augmented Generation (RAG) and multimodal agentic applications. | Multi-image and video understanding, visual Q&A, and summarization. |
| OpenAI | gpt-oss-safeguard-20b | Content safety model that applies your custom policies. Classifies harmful content with explanations for trust and safety workflows. | Content moderation and safety classification, custom policy enforcement, user-generated content filtering, trust and safety workflows, and automated content triage. |
| | gpt-oss-safeguard-120b | Larger content safety model for complex moderation. Applies custom policies with detailed reasoning for enterprise trust and safety teams. | Enterprise content moderation at scale, complex policy interpretation, multilayered safety classification, regulatory compliance checking, and high-stakes content review. |
| Qwen | Qwen3-Next-80B-A3B | Fast inference with hybrid attention for ultra-long documents. Optimized for RAG pipelines, tool use, and agentic workflows with quick responses. | RAG pipelines with long documents, agentic workflows with tool calling, code generation and software development, multi-turn conversations with extended context, and multilingual content generation. |
| | Qwen3-VL-235B-A22B | Understands images and video. Extracts text from documents, converts screenshots to working code, and automates clicking through interfaces. | Extracting text from images and PDFs, converting UI designs or screenshots to working code, automating clicks and navigation in applications, video analysis and understanding, and reading charts and diagrams. |
When implementing publicly available models, give careful consideration to data privacy requirements in your production environments, check for bias in output, and monitor your results for data protection, responsible AI, and model evaluation.
You can access the enterprise-grade security features of Amazon Bedrock and implement safeguards customized to your application requirements and responsible AI policies with Amazon Bedrock Guardrails. You can also evaluate and compare models to identify the optimal models for your use cases by using Amazon Bedrock model evaluation tools.
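For example, a guardrail you configure in the console can be applied to user input before it ever reaches a model. The following is a minimal sketch using the ApplyGuardrail action of the AWS SDK for Python (Boto3); the guardrail ID and version shown are placeholders you would replace with your own:

```python
def build_guardrail_check(text: str,
                          guardrail_id: str = "gr-example123",  # placeholder ID
                          version: str = "1") -> dict:
    """Build the keyword arguments for the bedrock-runtime ApplyGuardrail call."""
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
        "source": "INPUT",  # screen user input; use "OUTPUT" for model responses
        "content": [{"text": {"text": text}}],
    }

def is_blocked(text: str) -> bool:
    """Return True when the configured guardrail intervenes on the text."""
    import boto3  # AWS SDK for Python; requires configured AWS credentials

    client = boto3.client("bedrock-runtime")
    result = client.apply_guardrail(**build_guardrail_check(text))
    return result["action"] == "GUARDRAIL_INTERVENED"
```

Keeping the request builder separate from the network call makes the moderation policy easy to unit test without AWS credentials.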
To get started, you can quickly test these models with a few prompts in the playground of the Amazon Bedrock console or use any of the AWS SDKs to call the Amazon Bedrock InvokeModel and Converse APIs. You can also use these models with any agentic framework that supports Amazon Bedrock and deploy the agents using Amazon Bedrock AgentCore and Strands Agents. To learn more, visit Code examples for Amazon Bedrock using AWS SDKs in the Amazon Bedrock User Guide.
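As a minimal sketch with the AWS SDK for Python (Boto3), here is how a Converse API call might look. The model ID below is an assumption for illustration; look up the exact identifier for your Region in the Amazon Bedrock console. Because all models share the same Converse request shape, switching models is a one-line change to `modelId`:

```python
# Assumed model ID for illustration; check the Bedrock console for the real one.
MODEL_ID = "mistral.mistral-large-3-v1:0"

def build_converse_request(prompt: str, model_id: str = MODEL_ID) -> dict:
    """Build the keyword arguments for the bedrock-runtime Converse call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    import boto3  # AWS SDK for Python; requires configured AWS credentials

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Trying a different model, such as one of the Ministral 3 sizes, only means passing a different `model_id`; the messages and response parsing stay the same.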
Now available
Check the full Region list for availability and future updates of new models, or search for your model name in AWS Services by Region. To learn more, check out the Amazon Bedrock product page and the Amazon Bedrock pricing page.
Give these models a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
— Channy
Updated on December 4: Amazon Bedrock now supports the Responses API on new OpenAI API-compatible service endpoints for the GPT OSS 20B and 120B models. To learn more, visit Generate responses using OpenAI APIs.
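With an OpenAI API-compatible endpoint, existing tooling built on the OpenAI Python client can point at Amazon Bedrock instead. The base URL form, API key handling, and model ID below are assumptions for illustration; see Generate responses using OpenAI APIs in the Amazon Bedrock User Guide for the exact values:

```python
def build_responses_request(prompt: str,
                            model: str = "openai.gpt-oss-120b-1:0") -> dict:
    """Build the body of a Responses API call (model ID is assumed)."""
    return {"model": model, "input": prompt}

def create_response(prompt: str) -> str:
    """Call the Responses API through Bedrock's OpenAI-compatible endpoint."""
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        # Assumed endpoint form; confirm the URL for your Region in the docs.
        base_url="https://bedrock-runtime.us-east-1.amazonaws.com/openai/v1",
        api_key="<your Amazon Bedrock API key>",  # placeholder
    )
    response = client.responses.create(**build_responses_request(prompt))
    return response.output_text
```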


