
Announcing new fine-tuning models and techniques in Azure AI Foundry


Today, we’re excited to announce three major enhancements to model fine-tuning in Azure AI Foundry: Reinforcement Fine-Tuning (RFT) with o4-mini (coming soon) and Supervised Fine-Tuning (SFT) for the GPT-4.1-nano and Llama 4 Scout models (available now). These updates reflect our continued commitment to empowering organizations with tools to build highly customized, domain-adapted AI systems for real-world impact.

With these new models, we’re unblocking three major avenues of LLM customization: GPT-4.1-nano is a powerful small model, ideal for distillation; o4-mini is the first reasoning model you can fine-tune; and Llama 4 Scout is a best-in-class open source model.

Reinforcement Fine-Tuning with o4-mini

Reinforcement Fine-Tuning introduces a new level of control for aligning model behavior with complex business logic. By rewarding accurate reasoning and penalizing undesirable outputs, RFT improves model decision-making in dynamic or high-stakes environments.

Coming soon for the o4-mini model, RFT unlocks new possibilities for use cases requiring adaptive reasoning, contextual awareness, and domain-specific logic, all while maintaining fast inference performance.

Real-world impact: DraftWise

DraftWise, a legal tech startup, used reinforcement fine-tuning (RFT) in Azure AI Foundry Models to enhance the performance of reasoning models tailored for contract generation and review. Faced with the challenge of delivering highly contextual, legally sound answers to lawyers, DraftWise fine-tuned Azure OpenAI models using proprietary legal data to improve response accuracy and adapt to nuanced user prompts. This led to a 30% improvement in search result quality, enabling lawyers to draft contracts faster and focus on high-value advisory work.

Reinforcement fine-tuning on reasoning models is a potential game changer for us. It’s helping our models understand the nuance of legal language and respond more intelligently to complex drafting instructions, which promises to make our product significantly more valuable to lawyers in real time.

—James Ding, founder and CEO of DraftWise

When should you use Reinforcement Fine-Tuning?

Reinforcement Fine-Tuning is best suited to use cases where adaptability, iterative learning, and domain-specific behavior are essential. You should consider RFT if your scenario involves:

  1. Custom Rule Implementation: RFT thrives in environments where decision logic is highly specific to your organization and cannot easily be captured through static prompts or traditional training data. It allows models to learn flexible, evolving rules that reflect real-world complexity.
  2. Domain-Specific Operational Standards: Ideal for scenarios where internal procedures diverge from industry norms, and where success depends on adhering to those bespoke standards. RFT can effectively encode procedural variations, such as extended timelines or modified compliance thresholds, into the model’s behavior.
  3. High Decision-Making Complexity: RFT excels in domains with layered logic and variable-rich decision trees. When outcomes depend on navigating numerous subcases or dynamically weighing multiple inputs, RFT helps models generalize across complexity and deliver more consistent, accurate decisions.

Example: Wealth advisory at Contoso Wellness

To showcase the potential of RFT, consider Contoso Wellness, a fictitious wealth advisory firm. Using RFT, the o4-mini model learned to adapt to unique business rules, such as identifying optimal client interactions based on nuanced patterns like the ratio of a client’s net worth to available funds. This enabled Contoso to streamline its onboarding processes and make more informed decisions faster.
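To make the idea concrete, a reward signal for a rule like Contoso’s could be expressed as a small grading function that scores the model’s recommended action against the net-worth-to-available-funds ratio. The rule, threshold, and action labels below are illustrative assumptions, not part of any Azure API:

```python
# Hypothetical grader for an RFT-style reward: scores a model's
# recommended next interaction against a business rule based on the
# ratio of a client's net worth to their available (liquid) funds.
# The rule, the 10x threshold, and the action labels are invented.

def liquidity_ratio(net_worth: float, available_funds: float) -> float:
    """Net worth relative to liquid funds; higher means less liquidity."""
    if available_funds <= 0:
        return float("inf")
    return net_worth / available_funds

def grade_recommendation(net_worth: float, available_funds: float,
                         model_output: str) -> float:
    """Return a reward in [0, 1]: 1.0 for the rule-correct action,
    0.0 otherwise. RFT reinforces choices that earn high scores."""
    ratio = liquidity_ratio(net_worth, available_funds)
    # Illustrative rule: highly illiquid clients get a planning call
    # first; liquid clients can go straight to a product consultation.
    expected = "schedule_planning_call" if ratio > 10 else "product_consultation"
    return 1.0 if model_output.strip() == expected else 0.0
```

In a real RFT job the grader is configured on the fine-tuning service rather than run client-side; the point here is only that the reward encodes the bespoke business rule directly, instead of trying to capture it in a static prompt.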

Supervised Fine-Tuning now available for GPT-4.1-nano

We’re also bringing Supervised Fine-Tuning (SFT) to the GPT-4.1-nano model: a small but powerful foundation model optimized for high-throughput, cost-sensitive workloads. With SFT, you can instill your model with company-specific tone, terminology, workflows, and structured outputs, all tailored to your domain. This model will be available for fine-tuning in the coming days.
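SFT training data for OpenAI-family models uses the chat-format JSONL shape: one JSON object per line, each holding a `messages` conversation. A minimal sketch of preparing such a file (the example content and file name are made up):

```python
# Build a chat-format JSONL training file for supervised fine-tuning.
# The one-"messages"-list-per-line format is the standard shape for
# OpenAI-style SFT; the example conversation itself is invented.
import json

examples = [
    {
        "messages": [
            {"role": "system",
             "content": "You are Contoso's support assistant. Answer concisely in Contoso's house style."},
            {"role": "user",
             "content": "How do I reset my device?"},
            {"role": "assistant",
             "content": "Hold the power button for 10 seconds, then release. Your Contoso device restarts with settings intact."},
        ]
    },
    # ...add many more examples covering your tone, terminology, and workflows
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

The assistant turns are what the model learns to imitate, so they should read exactly the way you want production responses to read.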

Why fine-tune GPT-4.1-nano?

  • Precision at Scale: Tailor the model’s responses while maintaining speed and efficiency.
  • Enterprise-Grade Output: Ensure alignment with business processes and tone of voice.
  • Lightweight and Deployable: Perfect for scenarios where latency and cost matter, such as customer service bots, on-device processing, or high-volume document parsing.

Compared to larger models, GPT-4.1-nano delivers faster inference and lower compute costs, making it well suited to large-scale workloads like:

  • Customer support automation, where models must handle thousands of tickets per hour with consistent tone and accuracy.
  • Internal knowledge assistants that follow company style and protocol when summarizing documentation or responding to FAQs.

As a small, fast, but highly capable model, GPT-4.1-nano also makes a great candidate for distillation. You can use models like GPT-4.1 or o4 to generate training data, or capture production traffic with stored completions, and teach GPT-4.1-nano to be just as smart!
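The distillation flow can be sketched in two pieces: query a larger teacher deployment for answers to your prompts, then convert each (prompt, teacher answer) pair into an SFT record for the student. The client usage follows the standard `openai` Python SDK; the deployment name and system prompt are assumptions:

```python
# Sketch of distillation data prep: a teacher model's answers become
# supervised fine-tuning records for a smaller student model.
# The deployment name "gpt-4.1" and system prompt are assumptions;
# to_sft_record() is plain, dependency-free Python.

def to_sft_record(system: str, prompt: str, teacher_answer: str) -> dict:
    """Turn one (prompt, teacher answer) pair into a chat-format SFT record."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher_answer},
        ]
    }

def collect_teacher_data(client, prompts, system, teacher_deployment="gpt-4.1"):
    """Ask the teacher deployment to answer each prompt (requires a
    configured Azure OpenAI client and quota; not executed here)."""
    records = []
    for prompt in prompts:
        resp = client.chat.completions.create(
            model=teacher_deployment,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": prompt}],
        )
        records.append(to_sft_record(system, prompt,
                                     resp.choices[0].message.content))
    return records
```

Written out as JSONL, these records feed directly into an SFT job for GPT-4.1-nano; capturing stored completions from production traffic yields the same record shape without extra teacher calls.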

Fine-tune gpt-4.1-nano demo in Azure AI Foundry.

Llama 4 Fine-Tuning now available

We’re also excited to announce support for fine-tuning Meta’s Llama 4 Scout: a cutting-edge model with 17 billion active parameters that offers an industry-leading context window of 10M tokens while fitting on a single H100 GPU for inference. It’s a best-in-class model, more powerful than all previous-generation Llama models.

Llama 4 fine-tuning is available in our managed compute offering, allowing you to fine-tune and run inference using your own GPU quota. Available both in Azure AI Foundry and as Azure Machine Learning components, it gives you access to additional hyperparameters for deeper customization compared to our serverless experience.

Get started with Azure AI Foundry today

Azure AI Foundry is your foundation for enterprise-grade AI tuning. These fine-tuning enhancements unlock new frontiers in model customization, helping you build intelligent systems that think and respond in ways that reflect your business DNA.

  • Use Reinforcement Fine-Tuning with o4-mini to build reasoning engines that learn from experience and evolve over time. Coming soon in Azure AI Foundry, with regional availability in East US2 and Sweden Central.
  • Use Supervised Fine-Tuning with GPT-4.1-nano to scale reliable, cost-efficient, and highly customized model behaviors across your organization. Available now in Azure AI Foundry in North Central US and Sweden Central.
  • Try Llama 4 Scout fine-tuning to customize a best-in-class open source model. Available now in the Azure AI Foundry model catalog and Azure Machine Learning.

With Azure AI Foundry, fine-tuning isn’t just about accuracy; it’s about trust, efficiency, and adaptability at every layer of your stack.

Explore further:

We’re just getting started. Stay tuned for more model support, advanced tuning techniques, and tools to help you build AI that’s smarter, safer, and uniquely yours.


