Agents are only as capable as the tools you give them—and only as trustworthy as the governance behind those tools.
This blog post is the second in a six-part series called Agent Factory, which shares best practices, design patterns, and tools to help guide you through adopting and building agentic AI.
In the previous post, we explored five common design patterns of agentic AI—from tool use and reflection to planning, multi-agent collaboration, and adaptive reasoning. These patterns show how agents can be structured to achieve reliable, scalable automation in real-world environments.
Across the industry, we’re seeing a clear shift. Early experiments focused on single-model prompts and static workflows. Now the conversation is about extensibility—how to give agents a broad, evolving set of capabilities without locking into one vendor or rewriting integrations for each new need. Platforms are competing on how quickly developers can:
- Integrate with hundreds of APIs, services, data sources, and workflows.
- Reuse those integrations across different teams and runtime environments.
- Maintain enterprise-grade control over who can call what, when, and with what data.
The lesson from the past year of agentic AI evolution is simple: agents are only as capable as the tools you give them—and only as trustworthy as the governance behind those tools.
Extensibility through open standards
In the early phases of agent development, integrating tools was often a bespoke, platform-specific effort. Each framework had its own conventions for defining tools, passing data, and handling authentication. This created several consistent blockers:
- Duplication of effort—the same internal API had to be wrapped differently for each runtime.
- Brittle integrations—small changes to schemas or endpoints could break multiple agents at once.
- Limited reusability—tools built for one team or environment were hard to share across projects or clouds.
- Fragmented governance—different runtimes enforced different security and policy models.
As organizations began deploying agents across hybrid and multi-cloud environments, these inefficiencies became major obstacles. Teams needed a way to standardize how tools are described, discovered, and invoked, regardless of the hosting environment.
That’s where open protocols entered the conversation. Just as HTTP transformed the web by creating a common language for clients and servers, open protocols for agents aim to make tools portable, interoperable, and easier to govern.
One of the most promising examples is the Model Context Protocol (MCP)—a standard for defining tool capabilities and I/O schemas so that any MCP-compliant agent can dynamically discover and invoke them. With MCP:
- Tools are self-describing, making discovery and integration faster.
- Agents can find and use tools at runtime without manual wiring.
- Tools can be hosted anywhere—on-premises, in a partner cloud, or in another business unit—without losing governance.
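To make "self-describing" concrete, here is a minimal sketch of an MCP-style tool entry—a name, a description, and a JSON Schema for its inputs—of the kind a server could return from a tool-listing request. The tool name, fields, and the simplified validation below are illustrative assumptions, not a real service or a full MCP implementation.

```python
def describe_tool():
    """Return a self-describing tool entry: name, purpose, and input schema.

    `lookup_invoice` and its fields are hypothetical examples.
    """
    return {
        "name": "lookup_invoice",
        "description": "Fetch an invoice record by its ID.",
        "inputSchema": {  # JSON Schema describing the expected arguments
            "type": "object",
            "properties": {
                "invoice_id": {
                    "type": "string",
                    "description": "Invoice identifier",
                },
            },
            "required": ["invoice_id"],
        },
    }


def validate_call(descriptor, arguments):
    """Check a proposed call against the tool's declared schema.

    A real runtime would use a full JSON Schema validator; this sketch
    only checks that required fields are present.
    """
    required = descriptor["inputSchema"].get("required", [])
    missing = [field for field in required if field not in arguments]
    return (len(missing) == 0, missing)


ok, missing = validate_call(describe_tool(), {"invoice_id": "INV-1001"})
```

Because the schema travels with the tool, an agent can discover the descriptor at runtime and know how to call the tool without any hand-written integration code.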
Azure AI Foundry supports MCP, enabling you to bring existing MCP servers directly into your agents. This gives you the benefits of open interoperability plus enterprise-grade security, observability, and management. Learn more about MCP at MCP Dev Days.

Once you have a standard for portability through open protocols like MCP, the next question becomes: what kinds of tools should your agents have, and how do you organize them so they can deliver value quickly while staying adaptable?
In Azure AI Foundry, we think of this as building an enterprise toolchain—a layered set of capabilities that balances speed (getting something valuable working today), differentiation (capturing what makes your business unique), and reach (connecting across all the systems where work actually happens).
1. Built-in tools for fast value: Azure AI Foundry includes ready-to-use tools for common enterprise needs: searching across SharePoint and data lakes, executing Python for data analysis, performing multi-step web research with Bing, and triggering browser automation tasks. These aren’t just conveniences—they let teams stand up functional, high-value agents in days instead of weeks, without the friction of early integration work.

2. Custom tools for your competitive edge: Every organization has proprietary systems and processes that can’t be replicated by off-the-shelf tools. Azure AI Foundry makes it simple to wrap these as agentic AI tools—whether they’re APIs from your ERP, a manufacturing quality control system, or a partner’s service. By invoking them via OpenAPI or MCP, these tools become portable and discoverable across teams, projects, and even clouds, while still benefiting from Foundry’s identity, policy, and observability layers.

3. Connectors for maximum reach: Through Azure Logic Apps, Foundry can connect agents to over 1,400 SaaS and on-premises systems—CRM, ERP, ITSM, data warehouses, and more. This dramatically reduces integration lift, allowing you to plug into existing business processes without building every connector from scratch.

One example of this toolchain in action comes from NTT DATA, which built agents in Azure AI Foundry that integrate Microsoft Fabric Data Agent alongside other enterprise tools. These agents allow employees across HR, operations, and other functions to interact naturally with data—surfacing real-time insights and enabling actions—reducing time-to-market by 50% and giving non-technical users intuitive, self-service access to business intelligence.
Extensibility must be paired with governance to move from prototype to enterprise-ready automation. Azure AI Foundry addresses this with a secure-by-default approach to tool management:
- Authentication and identity in built-in connectors: Enterprise-grade connectors—like SharePoint and Microsoft Fabric—already use on-behalf-of (OBO) authentication. When an agent invokes these tools, Foundry ensures that the call respects the end user’s permissions via managed Entra IDs, preserving existing authorization rules. With Microsoft Entra Agent ID, every agentic project created in Azure AI Foundry automatically appears in an agent-specific application view within the Microsoft Entra admin center. This gives security teams a unified directory view of all agents and agent applications they need to manage across Microsoft. This integration marks the first step toward standardizing governance for AI agents company-wide. While Entra ID is native, Azure AI Foundry also supports integrations with external identity systems. Through federation, customers who use providers such as Okta or Google Identity can still authenticate agents and users to call tools securely.
- Custom tools with OpenAPI and MCP: OpenAPI-specified tools enable seamless connectivity using managed identities, API keys, or unauthenticated access. These tools can be registered directly in Foundry and align with standard API design best practices. Foundry is also expanding MCP security to include stored credentials, project-level managed identities, and third-party OAuth flows, along with secure private networking—advancing toward a fully enterprise-grade, end-to-end MCP integration model.
- API governance with Azure API Management (APIM): APIM provides a powerful control plane for managing tool calls: it enables centralized publishing, policy enforcement (authentication, rate limits, payload validation), and monitoring. Additionally, you can deploy self-hosted gateways inside VNets or on-premises environments to enforce enterprise policies close to backend systems. Complementing this, Azure API Center acts as a centralized, design-time API inventory and discovery hub—allowing teams to register, catalog, and manage private MCP servers alongside other APIs. These capabilities provide the same governance you expect for your APIs—extended to agentic AI tools without additional engineering.
- Observability and auditability: Every tool invocation in Foundry—whether internal or external—is traced with step-level logging. This includes identity, tool name, inputs, outputs, and outcomes, enabling continuous reliability monitoring and simplified auditing.
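The shape of such a step-level trace record can be sketched in a few lines: a wrapper that captures identity, tool name, inputs, outputs, and outcome for each call. This is an illustrative in-memory sketch, not Foundry's actual tracing mechanism; the tool and identity names are hypothetical, and a real deployment would emit these records to a telemetry pipeline.

```python
import time

# In-memory trace store standing in for a real telemetry sink.
TRACE_LOG = []


def traced_tool(tool_name):
    """Wrap a tool function so every invocation appends a trace record."""
    def wrap(fn):
        def inner(caller_identity, **inputs):
            record = {
                "timestamp": time.time(),
                "identity": caller_identity,  # which user/agent made the call
                "tool": tool_name,
                "inputs": inputs,
            }
            try:
                record["output"] = fn(**inputs)
                record["outcome"] = "success"
            except Exception as exc:
                record["outcome"] = f"error: {exc}"
                raise
            finally:
                TRACE_LOG.append(record)  # log success and failure alike
            return record["output"]
        return inner
    return wrap


@traced_tool("convert_currency")  # hypothetical example tool
def convert_currency(amount, rate):
    return round(amount * rate, 2)


result = convert_currency("agent:hr-assistant", amount=100.0, rate=1.1)
```

Capturing the failure path in the same record stream is what makes the log useful for auditing: the trace shows not just what agents did, but what they attempted.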
Enterprise-grade management ensures tools are secure and observable—but success also depends on how you design and operate them from day one. Drawing on Azure AI Foundry guidance and customer experience, a few principles stand out:
- Start with the contract. Treat every tool like an API product. Define clear inputs, outputs, and error behaviors, and keep schemas consistent across teams. Avoid overloading a single tool with multiple unrelated actions; smaller, single-purpose tools are easier to test, monitor, and reuse.
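A contract-first tool can be sketched as follows: one small, single-purpose function with explicit inputs, outputs, and a structured error type, so callers never have to handle raw exceptions. The order-status tool and its error codes are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class ToolError:
    """Structured error result, part of the tool's declared contract."""
    code: str
    message: str


def get_order_status(order_id: str):
    """Contract: input is a non-empty order ID string; output is either
    a status dict or a ToolError—never an unstructured exception.
    """
    if not isinstance(order_id, str) or not order_id.strip():
        return ToolError("invalid_input", "order_id must be a non-empty string")
    # Placeholder lookup; a real tool would call the order system here.
    known_orders = {"ORD-1": "shipped"}
    if order_id not in known_orders:
        return ToolError("not_found", f"no order {order_id}")
    return {"order_id": order_id, "status": known_orders[order_id]}
```

Because the error behavior is part of the contract, an agent (or a test suite) can branch on `code` deterministically instead of parsing free-form failure messages.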
- Choose the right packaging. For proprietary APIs, decide early whether OpenAPI or MCP best fits your needs. OpenAPI tools are straightforward for well-documented REST APIs, while MCP tools excel when portability and cross-environment reuse are priorities.
- Centralize governance. Publish custom tools behind Azure API Management or a self-hosted gateway so authentication, throttling, and payload inspection are enforced consistently. This keeps policy logic out of tool code and makes changes easier to roll out.
- Bind every action to an identity. Always know which user or agent is invoking the tool. For built-in connectors, leverage identity passthrough or OBO. For custom tools, use Entra ID or the appropriate API key/credential model, and apply least-privilege access.
- Instrument early. Add tracing, logging, and evaluation hooks before moving to production. Early observability lets you track performance trends, detect regressions, and tune tools without downtime.
Following these practices ensures that the tools you integrate today remain secure, portable, and maintainable as your agent ecosystem grows.
What’s next
In part three of the Agent Factory series, we’ll focus on observability for AI agents—how to trace every step, evaluate tool performance, and monitor agent behavior in real time. We’ll cover the built-in capabilities in Azure AI Foundry, integration patterns with Azure Monitor, and best practices for turning telemetry into continuous improvement.
Did you miss the first post in the series? Check it out: The new era of agentic AI—common use cases and design patterns.
