Popular AI orchestration framework LlamaIndex has introduced Agentic Document Workflows (ADW), a new architecture that the company says goes beyond retrieval-augmented generation (RAG) and increases agent productivity.
As orchestration frameworks continue to improve, this approach could give organizations a way to enhance the decision-making abilities of their agents.
According to LlamaIndex, ADW will help agents "manage complex workflows that go beyond simple extraction or matching."
Some agent frameworks are built on RAG systems that provide agents with the knowledge they need to complete tasks. However, this approach does not allow agents to make decisions based on that information.
LlamaIndex gave some real-world examples of where ADW would work well. During contract reviews, for example, human analysts must extract key information, cross-reference regulatory requirements, identify potential risks, and develop recommendations. Applied to this workflow, AI agents would ideally follow the same pattern, making decisions based on the contract they are reviewing and on knowledge drawn from other documents.
"ADW addresses these challenges by treating documents as part of broader business processes," LlamaIndex said in a blog post. "An ADW system can maintain state across steps, apply business rules, coordinate different components, and take actions based on document content, not just analyze it."
LlamaIndex has previously said that while RAG is an important technique, it is still primitive, especially for companies seeking more robust decision-making capabilities from AI.
Understanding context for decision-making
LlamaIndex has developed reference architectures that combine its LlamaCloud parsing capabilities with agents. These "build systems that can understand context, maintain state, and control multi-step processes."
For this purpose, each workflow has a document that acts as an orchestrator. It can direct agents to tap LlamaParse to extract information from the documents, maintain state about the document context and the process, and then retrieve reference material from another knowledge base. From there, agents can begin generating recommendations for the contract-review use case, or other actionable decisions for other use cases.
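LlamaIndex's reference architectures are built around LlamaCloud, but the flow described above can be approximated with the open-source llama_index workflow primitives. The following is a minimal sketch under that assumption, not LlamaIndex's published implementation: the contract path, the local "./guidelines" folder standing in for the second knowledge base, the prompts, and the model choice are all hypothetical.

```python
# Sketch of an ADW-style contract-review workflow: parse the document,
# keep state in the workflow context, retrieve reference material from a
# second knowledge base, then generate a recommendation.
import asyncio

from llama_parse import LlamaParse  # LlamaCloud parsing
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.workflow import Context, StartEvent, StopEvent, Workflow, step
from llama_index.llms.openai import OpenAI


class ContractReviewWorkflow(Workflow):
    """Orchestrates parsing, state tracking, retrieval, and recommendation."""

    @step
    async def review(self, ctx: Context, ev: StartEvent) -> StopEvent:
        llm = OpenAI(model="gpt-4o-mini")  # model choice is illustrative

        # 1. Parse the contract with LlamaParse and keep the text in workflow state.
        parser = LlamaParse(result_type="markdown")
        contract_docs = parser.load_data(ev.contract_path)
        await ctx.set("contract_text", contract_docs[0].text)

        # 2. Retrieve relevant guidance from a separate knowledge base
        #    (here, a local folder of guideline documents; purely illustrative).
        guideline_docs = SimpleDirectoryReader("./guidelines").load_data()
        guideline_index = VectorStoreIndex.from_documents(guideline_docs)
        guidance = guideline_index.as_query_engine(llm=llm).query(
            "Which requirements apply to this type of contract?"
        )

        # 3. Generate a recommendation grounded in both the contract and the guidance.
        contract_text = await ctx.get("contract_text")
        answer = await llm.acomplete(
            f"Contract:\n{contract_text}\n\nRelevant guidance:\n{guidance}\n\n"
            "Identify key risks and recommend next steps."
        )
        return StopEvent(result=str(answer))


async def main() -> None:
    workflow = ContractReviewWorkflow(timeout=120)
    result = await workflow.run(contract_path="contract.pdf")
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

Running such a sketch would require LlamaCloud and OpenAI API keys; the point is the shape of the orchestration rather than the specific tools: parse, persist state, consult a second knowledge base, then decide.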
"By maintaining state throughout the process, agents can handle complex multi-step workflows that go beyond simple extraction or matching," the company said. "This approach allows them to build comprehensive context across the documents they process while coordinating the various system components."
Different agent frameworks
Agent orchestration is an emerging area, and many companies are still exploring how agents – or multiple agents – work for them. Orchestrating AI agents and applications could become a bigger issue this year as agents move from single systems to multi-agent ecosystems.
AI agents are an extension of what RAG offers, namely the ability to find information based on corporate knowledge.
However, as more and more companies start using AI agents, they also want them to take over many of the tasks that human employees do. For these more complicated use cases, "vanilla" RAG is not enough. One of the more advanced approaches companies have considered is agentic RAG, which expands what agents can do with their knowledge base: models can decide whether they need to find more information, which tool to use to retrieve it, and whether the context they just retrieved is relevant before producing a result.
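To make the contrast with "vanilla" RAG concrete, here is a minimal sketch of the agentic RAG pattern using llama_index's ReAct agent; the folder names, tool descriptions, and the example question are hypothetical, and the same idea could be built with other agent frameworks.

```python
# Agentic RAG sketch: the model decides whether to call a retrieval tool,
# which tool to call, and whether the retrieved context answers the question,
# instead of always retrieving once before answering.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")  # model choice is illustrative

# Two illustrative knowledge bases exposed as tools; folder names are hypothetical.
contracts_index = VectorStoreIndex.from_documents(
    SimpleDirectoryReader("./contracts").load_data()
)
policies_index = VectorStoreIndex.from_documents(
    SimpleDirectoryReader("./policies").load_data()
)

tools = [
    QueryEngineTool.from_defaults(
        query_engine=contracts_index.as_query_engine(llm=llm),
        name="contracts",
        description="Search signed contracts for clauses and terms.",
    ),
    QueryEngineTool.from_defaults(
        query_engine=policies_index.as_query_engine(llm=llm),
        name="policies",
        description="Search internal policy and compliance documents.",
    ),
]

# The ReAct loop lets the LLM choose a tool (or none), inspect what it retrieved,
# and retrieve again if the context is not yet sufficient to answer.
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
print(agent.chat("Does the Acme contract comply with our data-retention policy?"))
```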