Over its more than 100 years, IBM has seen many different technology trends rise and fall. What tends to win are technologies that give customers choice.
At VB Transform 2025, Armand Ruiz, VP of AI platform at IBM, detailed how Big Blue thinks about generative AI and how its enterprise customers actually deploy the technology. A key point Ruiz emphasized is that, at this stage, it is no longer about choosing a single large language model (LLM) or a single provider. Enterprise customers are increasingly and systematically rejecting single-vendor AI strategies in favor of multi-model approaches that match specific LLMs to targeted use cases.
IBM has its own family of open-source AI models in the Granite series, but it is not positioning that technology as the only choice, or even the best choice, for all workloads. This enterprise behavior is driving IBM to position itself not as a foundation model competitor, but as what Ruiz described as a control tower for AI workloads.
"When I sit in front of a customer, they use everything they have access to," Ruiz said. "For coding, they love Anthropic, and for some other use cases, such as reasoning, they like o3, and then for LLM customization, with their own data and fine-tuning, they like either our Granite series, or Mistral with their small models, or even Llama."
The Multi-LLM gateway strategy
IBM's response to this market reality is a newly released model gateway that provides a single API for switching between different LLMs while maintaining observability and governance across all deployments.
The technical architecture lets customers run open-source models on their own inference stacks for sensitive use cases, while accessing public APIs such as AWS Bedrock or Google Cloud's Gemini for less critical applications.
"This gateway provides our customers a single layer with a single API to switch from one LLM to another LLM, and provide observability and governance across the board," Ruiz said.
The approach stands in direct contrast to the common vendor strategy of locking customers into proprietary ecosystems. IBM is not alone in pursuing a multi-provider approach to model selection. In recent months, several model-routing tools have emerged that aim to direct workloads to the appropriate model.
Agent orchestration protocols emerge as critical infrastructure
Beyond multi-model management, IBM is tackling the emerging challenge of agent-to-agent communication through open protocols.
The company developed ACP (Agent Communication Protocol) and contributed it to the Linux Foundation. ACP is a competing effort to Google's Agent2Agent (A2A) protocol, which Google has also contributed to the Linux Foundation.
Ruiz noted that both protocols aim to facilitate communication between agents and reduce custom development work. He expects the different approaches will eventually converge, and that the current differences between A2A and ACP are mostly technical.
Agent orchestration protocols provide standardized ways for AI systems to interact across different platforms and vendors.
The technical significance becomes clear at enterprise scale: some IBM customers already have over 100 agents in pilot programs. Without standardized communication protocols, every agent-to-agent interaction requires custom development, creating an unsustainable integration burden.
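The article does not describe ACP's or A2A's actual wire formats, but the integration-burden argument can be illustrated with a generic message envelope: with a shared format, each of N agents implements one parser instead of one per peer (N×(N-1) custom integrations otherwise). All field names here are hypothetical, not the ACP or A2A schema.

```python
import json
import uuid

def make_message(sender: str, recipient: str, intent: str, body: dict) -> str:
    """Build a generic agent-to-agent envelope (illustrative fields only,
    not the actual ACP or A2A schema)."""
    return json.dumps({
        "id": str(uuid.uuid4()),       # unique message id for tracing
        "sender": sender,
        "recipient": recipient,
        "intent": intent,              # what the sender wants done
        "body": body,                  # intent-specific payload
    })

def handle_message(raw: str) -> str:
    """Dispatch on intent: any agent that speaks the envelope can process
    messages regardless of which vendor or platform produced them."""
    msg = json.loads(raw)
    if msg["intent"] == "lookup":
        return f"{msg['recipient']} handling lookup for {msg['sender']}"
    return f"{msg['recipient']} rejecting intent {msg['intent']}"

raw = make_message("hr-agent", "payroll-agent", "lookup", {"employee": "E123"})
print(handle_message(raw))  # → payroll-agent handling lookup for hr-agent
```

A new agent joining this fleet only has to implement `make_message` and `handle_message` once, which is the scaling property both protocols are chasing.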
AI is about changing workflows and the way work gets done
Regarding how AI is deployed today, Ruiz argues that it has to be about more than just chatbots.
"If you just do chatbots, or just try to do cost savings with AI, you're not doing AI," Ruiz said. "I think AI is really about changing the workflow and the way work gets done."
The distinction between AI implementation and AI transformation hinges on how deeply the technology is integrated into existing business processes. IBM's internal HR example shows this shift: instead of employees asking chatbots for HR information, specialized agents now handle routine queries about compensation, hiring and promotions, automatically connect to the appropriate systems, and escalate to humans only when necessary.
"In the past, I spent a lot of time talking to my HR partners for a lot of things. Now I handle most of it with an HR agent," Ruiz explained. "Depending on the question, whether it's about compensation, or about handling a separation, or hiring or promoting someone, all of these things connect with different HR internal systems, and those are like separate agents."
This represents a fundamental architectural shift, from human-computer interaction patterns to automated workflow orchestration. Instead of employees learning to interact with AI tools, the AI learns to execute complete business processes end-to-end.
The technical implication: companies need to move beyond API integrations toward deep process instrumentation that enables AI agents to execute multi-step workflows autonomously.
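The HR-agent pattern described above, routine queries resolved end-to-end against internal systems with escalation only as a fallback, can be sketched as follows. The topic names, data store, and step functions are all hypothetical stand-ins for real HR systems, not IBM's implementation.

```python
from typing import Callable, Dict, Optional

# Hypothetical backend system an HR agent would query.
COMPENSATION_DB = {"E123": 95000}

def compensation_step(employee: str) -> Optional[str]:
    """One automated step: look up salary in the compensation system."""
    salary = COMPENSATION_DB.get(employee)
    return f"current salary: {salary}" if salary is not None else None

def handle_query(topic: str, employee: str) -> str:
    """Route a routine HR query to the matching internal system,
    escalating to a human only when no automated step resolves it."""
    steps: Dict[str, Callable[[str], Optional[str]]] = {
        "compensation": compensation_step,
        # additional topics (hiring, promotion, ...) would register here
    }
    step = steps.get(topic)
    result = step(employee) if step else None
    if result is None:
        # Escalation path: the agent ran out of automated steps.
        return f"escalated to human HR partner ({topic}, {employee})"
    return result

print(handle_query("compensation", "E123"))  # → current salary: 95000
print(handle_query("promotion", "E123"))     # no handler, so it escalates
```

The instrumentation point is the `steps` table: each new backend system the agent can act on removes one more human step from the workflow.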
Strategic implications for enterprise AI investment
IBM's real-world deployment data points to several critical shifts for enterprise AI strategy:
Abandon chatbot-first thinking: organizations should identify complete workflows for transformation rather than adding conversational interfaces to existing systems. The goal is to eliminate human steps, not to improve human-computer interaction.
Architect for multi-model flexibility: instead of committing to a single AI provider, companies need integration platforms that allow switching between models based on use-case requirements while maintaining governance standards.
Invest in communication standards: organizations should prioritize AI tools that support emerging protocols such as MCP, ACP and A2A over proprietary integration approaches that create vendor lock-in.
"There is so much to build, and I keep saying that everyone needs to learn AI, and especially business leaders need to be AI-first leaders and understand the concepts," Ruiz said.

