Today, we’re excited to announce three major enhancements to model fine-tuning in Azure AI Foundry: Reinforcement Fine-Tuning (RFT) with o4-mini (coming soon), and Supervised Fine-Tuning (SFT) for the GPT-4.1-nano and Llama 4 Scout models (available now). These updates reflect our continued commitment to empowering organizations with tools to build highly customized, domain-adapted AI systems for real-world impact.
With these new models, we’re unblocking three major avenues of LLM customization: GPT-4.1-nano is a powerful small model, ideal for distillation; o4-mini is the first reasoning model you can fine-tune; and Llama 4 Scout is a best-in-class open-source model.
Reinforcement Fine-Tuning with o4-mini
Reinforcement Fine-Tuning introduces a new level of control for aligning model behavior with complex business logic. By rewarding accurate reasoning and penalizing undesirable outputs, RFT improves model decision-making in dynamic or high-stakes environments.
Coming soon for the o4-mini model, RFT unlocks new possibilities for use cases requiring adaptive reasoning, contextual awareness, and domain-specific logic, all while maintaining fast inference performance.
Real-world impact: DraftWise
DraftWise, a legal tech startup, used reinforcement fine-tuning (RFT) in Azure AI Foundry Models to boost the performance of reasoning models tailored for contract generation and review. Faced with the challenge of delivering highly contextual, legally sound suggestions to lawyers, DraftWise fine-tuned Azure OpenAI models using proprietary legal data to improve response accuracy and adapt to nuanced user prompts. This led to a 30% improvement in search result quality, enabling lawyers to draft contracts faster and focus on high-value advisory work.
Reinforcement fine-tuning on reasoning models is a potential game changer for us. It’s helping our models understand the nuance of legal language and respond more intelligently to complex drafting instructions, which promises to make our product significantly more useful to lawyers in real time.
—James Ding, founder and CEO of DraftWise.
When should you use Reinforcement Fine-Tuning?
Reinforcement Fine-Tuning is best suited for use cases where adaptability, iterative learning, and domain-specific behavior are essential. You should consider RFT if your scenario involves:
- Custom Rule Implementation: RFT thrives in environments where decision logic is highly specific to your organization and cannot be easily captured through static prompts or traditional training data. It enables models to learn flexible, evolving rules that reflect real-world complexity.
- Domain-Specific Operational Standards: Ideal for scenarios where internal procedures diverge from industry norms, and where success depends on adhering to those bespoke standards. RFT can effectively encode procedural variations, such as extended timelines or modified compliance thresholds, into the model’s behavior.
- High Decision-Making Complexity: RFT excels in domains with layered logic and variable-rich decision trees. When outcomes depend on navigating numerous subcases or dynamically weighing multiple inputs, RFT helps models generalize across complexity and deliver more consistent, accurate decisions.
Example: Wealth advisory at Contoso Wellness
To showcase the potential of RFT, consider Contoso Wellness, a fictitious wealth advisory firm. Using RFT, the o4-mini model learned to adapt to unique business rules, such as identifying optimal client interactions based on nuanced patterns like the ratio of a client’s net worth to available funds. This enabled Contoso to streamline their onboarding processes and make more informed decisions faster.
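To make that concrete, here is a minimal sketch of what launching an RFT job for o4-mini could look like with the openai Python SDK against Azure OpenAI. Since RFT on o4-mini is still coming soon, the endpoint, API version, file IDs, data fields, and grader configuration below are illustrative assumptions modeled on the OpenAI reinforcement fine-tuning API, not a confirmed Azure recipe.

```python
# Hedged sketch: launching a Reinforcement Fine-Tuning (RFT) job for o4-mini.
# Endpoint, API version, file IDs, and the grader are illustrative assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",
    api_version="2025-04-01-preview",  # assumption: an RFT-capable preview version
)

# A grader scores each sampled answer during training. This string-check
# grader rewards exact matches against a reference answer; it assumes each
# JSONL training item carries a "correct_answer" field (our own convention).
grader = {
    "type": "string_check",
    "name": "exact_match",
    "input": "{{sample.output_text}}",
    "reference": "{{item.correct_answer}}",
    "operation": "eq",
}

job = client.fine_tuning.jobs.create(
    model="o4-mini",
    training_file="file-abc123",    # placeholder: uploaded JSONL of prompts + answers
    validation_file="file-def456",  # placeholder
    method={
        "type": "reinforcement",
        "reinforcement": {
            "grader": grader,
            "hyperparameters": {"reasoning_effort": "medium"},  # assumption
        },
    },
)
print(job.id, job.status)
```

In practice the grader is where your business logic lives: richer setups can swap the exact-match check for a model-based or scripted grader that scores legal soundness, compliance thresholds, or whatever "correct" means in your domain.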
Supervised Fine-Tuning now available for GPT-4.1-nano
We’re also bringing Supervised Fine-Tuning (SFT) to GPT-4.1-nano, a small but powerful foundation model optimized for high-throughput, cost-sensitive workloads. With SFT, you can instill your model with company-specific tone, terminology, workflows, and structured outputs, all tailored to your domain. This model will be available for fine-tuning in the coming days.
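As a rough illustration of the workflow, here is a hedged sketch of an SFT job using the openai Python SDK with Azure OpenAI. The endpoint, API version, file name, and hyperparameters are placeholder assumptions; consult the Azure OpenAI fine-tuning docs for the exact values for your resource.

```python
# Hedged sketch: Supervised Fine-Tuning (SFT) for GPT-4.1-nano.
# Endpoint, API version, file name, and hyperparameters are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-10-21",  # assumption: any fine-tuning-capable version
)

# Training data is chat-formatted JSONL, one example per line, e.g.:
# {"messages": [{"role": "system", "content": "You are Contoso's support bot."},
#               {"role": "user", "content": "How do I reset my password?"},
#               {"role": "assistant", "content": "Go to Settings > Security..."}]}
training = client.files.create(
    file=open("train.jsonl", "rb"), purpose="fine-tune"
)

job = client.fine_tuning.jobs.create(
    model="gpt-4.1-nano",             # base model to customize
    training_file=training.id,
    hyperparameters={"n_epochs": 3},  # assumption: tune for your data size
)
print(job.id, job.status)
```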
Why fine-tune GPT-4.1-nano?
- Precision at Scale: Tailor the model’s responses while maintaining speed and efficiency.
- Enterprise-Grade Output: Ensure alignment with business processes and tone of voice.
- Lightweight and Deployable: Perfect for scenarios where latency and cost matter, such as customer service bots, on-device processing, or high-volume document parsing.
Compared to larger models, GPT-4.1-nano delivers faster inference and lower compute costs, making it well suited for large-scale workloads like:
- Customer support automation, where models must handle thousands of tickets per hour with consistent tone and accuracy.
- Internal knowledge assistants that follow company style and protocol when summarizing documentation or responding to FAQs.
As a small, fast, but highly capable model, GPT-4.1-nano makes a great candidate for distillation as well. You can use models like GPT-4.1 or o4 to generate training data, or capture production traffic with stored completions, and teach GPT-4.1-nano to be just as smart!
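For instance, a simple distillation loop might use a larger teacher deployment to label prompts and write the results out as chat-formatted JSONL. The sketch below assumes a hypothetical "gpt-4.1" teacher deployment and reuses the same client setup as above; it illustrates the pattern rather than a prescribed pipeline.

```python
# Hedged sketch of distillation: a larger teacher model labels prompts, and
# the results become SFT training data for GPT-4.1-nano.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-10-21",
)

prompts = ["Summarize our refund policy.", "Draft a renewal reminder email."]

with open("distilled_train.jsonl", "w") as f:
    for prompt in prompts:
        teacher = client.chat.completions.create(
            model="gpt-4.1",  # assumption: your teacher deployment name
            messages=[{"role": "user", "content": prompt}],
        )
        example = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": teacher.choices[0].message.content},
            ]
        }
        f.write(json.dumps(example) + "\n")

# The resulting file can be uploaded with purpose="fine-tune" and used as the
# training_file for a gpt-4.1-nano SFT job, as in the sketch above.
```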

Llama 4 Fine-Tuning now available
We’re also excited to announce support for fine-tuning Meta’s Llama 4 Scout, a cutting-edge 17-billion-active-parameter model that offers an industry-leading context window of 10M tokens while fitting on a single H100 GPU for inference. It’s a best-in-class model, more powerful than all previous-generation Llama models.
Llama 4 fine-tuning is available in our managed compute offering, allowing you to fine-tune and run inference using your own GPU quota. Available both in Azure AI Foundry and as Azure Machine Learning components, it gives you access to more hyperparameters for deeper customization compared to our serverless experience.
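As a hedged sketch of the starting point for managed-compute fine-tuning, the snippet below uses the azure-ai-ml SDK to connect to a workspace and look up the model in the catalog registry. The registry and model names are assumptions based on how earlier Llama releases were published; confirm both in the Azure AI Foundry model catalog.

```python
# Hedged sketch: locating Llama 4 Scout for managed-compute fine-tuning.
# Registry and model names are assumptions; verify them in the model catalog.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Connect to your workspace (fill in your own identifiers).
ml_client = MLClient(
    credential,
    subscription_id="SUBSCRIPTION_ID",
    resource_group_name="RESOURCE_GROUP",
    workspace_name="WORKSPACE_NAME",
)

# Catalog models from Meta have been published in the "azureml-meta" registry;
# we assume Llama 4 Scout follows the same convention.
registry = MLClient(credential, registry_name="azureml-meta")
model = registry.models.get(name="Llama-4-Scout-17B-16E-Instruct", label="latest")
print(model.name, model.version)

# From here, the catalog's fine-tuning pipeline component can be wired up with
# your training data and GPU compute; the component name and hyperparameters
# are documented alongside the model in the catalog.
```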
Get started with Azure AI Foundry today
Azure AI Foundry is your foundation for enterprise-grade AI tuning. These fine-tuning enhancements unlock new frontiers in model customization, helping you build intelligent systems that think and respond in ways that reflect your business DNA.
- Use Reinforcement Fine-Tuning with o4-mini to build reasoning engines that learn from experience and evolve over time. Coming soon in Azure AI Foundry, with regional availability in East US2 and Sweden Central.
- Use Supervised Fine-Tuning with GPT-4.1-nano to scale reliable, cost-efficient, and highly customized model behaviors across your organization. Available now in Azure AI Foundry in North Central US and Sweden Central.
- Try Llama 4 Scout fine-tuning to customize a best-in-class open-source model. Available now in the Azure AI Foundry model catalog and Azure Machine Learning.
With Azure AI Foundry, fine-tuning isn’t just about accuracy; it’s about trust, efficiency, and adaptability at every layer of your stack.
Explore further:
We’re just getting started. Stay tuned for more model support, advanced tuning techniques, and tools to help you build AI that’s smarter, safer, and uniquely yours.