
Grounding LLMs in reality: How one company used artificial intelligence to achieve a 70% increase in productivity

Drip Capital, a Silicon Valley fintech startup, is using generative AI to achieve a remarkable 70% productivity increase in cross-border trade finance deals. The company, which has raised more than $500 million in debt and equity funding, uses large language models (LLMs) to automate document processing, improve risk assessment, and dramatically increase operational efficiency. This AI-driven approach has enabled Drip Capital to process hundreds of complex trade documents every day, significantly outperforming traditional manual methods.

Founded in 2016, Drip Capital has quickly become a major player in the trade finance sector, operating in the US, India, and Mexico. The company's innovative use of AI combines sophisticated prompt engineering with strategic human supervision to overcome common challenges such as hallucinations. This hybrid system is reshaping trade finance operations for the digital age, setting new standards for efficiency in a historically paper-heavy industry.

Karl Boog, the company's Chief Business Officer, highlights the magnitude of the efficiency gains: “With what we have achieved so far, we have been able to increase our capacity by 30 times.” This dramatic improvement demonstrates the transformative potential of generative AI in the fintech space and provides a compelling case study of how startups can use AI and LLMs to gain a competitive advantage in the trillion-dollar global trade finance market.

At the heart of Drip Capital's AI strategy is the use of advanced document processing techniques. Tej Mulgaonkar, who leads the company's product development, explains the approach: “We process about a couple of thousand documents a day. We struggled with that for a while; obviously we set up manual processes right from the beginning.”

Getting the most out of today's LLMs

The company's journey with AI began with experiments combining optical character recognition (OCR) and LLMs to digitize and interpret information from various trade documents. “We began experimenting with a combination of OCR and LLMs working together to digitize and then interpret information,” Mulgaonkar said.
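The digitize-then-interpret pipeline described here can be sketched roughly as below. This is a minimal illustration, not Drip Capital's code: `ocr_extract` and `call_llm` are hypothetical stand-ins (returning canned values) for a real OCR engine and a real LLM API call.

```python
import json

def ocr_extract(document_path: str) -> str:
    """Stand-in for an OCR engine returning the raw text of a scanned document."""
    return "Invoice No: INV-1042\nAmount: USD 25,000\nBuyer: Acme Imports"

EXTRACTION_PROMPT = (
    "Extract the invoice number, amount, and buyer from the text below. "
    "Respond with JSON keys: invoice_no, amount, buyer.\n\n{text}"
)

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call; returns a canned JSON response here."""
    return json.dumps(
        {"invoice_no": "INV-1042", "amount": "USD 25,000", "buyer": "Acme Imports"}
    )

def digitize(document_path: str) -> dict:
    raw_text = ocr_extract(document_path)                         # stage 1: OCR
    response = call_llm(EXTRACTION_PROMPT.format(text=raw_text))  # stage 2: LLM
    return json.loads(response)

print(digitize("bill_of_lading.pdf")["invoice_no"])  # INV-1042
```

In a real deployment the second stage is where hallucinations can creep in, which is what the verification work described next addresses.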

However, the path to successful AI integration was not without its challenges. Like many companies exploring generative AI, Drip Capital initially struggled with hallucinations, instances where the AI generated plausible but incorrect information. Mulgaonkar acknowledges these early hurdles: “We actually struggled a little bit for a while. There were a lot of hallucinations, a lot of unreliable results.”

To overcome these challenges, Drip Capital took a systematic approach to prompt engineering. The company leveraged its extensive database of processed documents to refine and optimize the prompts used to instruct the AI. “We had hundreds of thousands of documents that we had processed over the course of seven years of operation, for which we basically had the exact output data in our database,” Mulgaonkar explains. “We created a very simple script that allowed us to select samples of input data, run through the prompts we wrote, get some output from a set of agents, and then compare those outputs to what we have in the database as the exact source of truth.”
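The evaluation loop Mulgaonkar describes might look something like the sketch below: sample historical documents whose correct outputs already live in the database, run each through a candidate prompt, and score the prompt by how often the model's output matches the stored ground truth. All names here (`ground_truth_db`, `run_prompt`) are illustrative assumptions, not Drip Capital's actual code.

```python
import random

# Historical documents with their known-correct extracted fields.
ground_truth_db = {
    "doc-1": {"amount": "25000"},
    "doc-2": {"amount": "13500"},
    "doc-3": {"amount": "9900"},
}

def run_prompt(prompt_template: str, doc_id: str) -> dict:
    """Stand-in for sending the document through the LLM with this prompt."""
    # For the sketch, pretend the model is right on all but one document.
    return dict(ground_truth_db[doc_id]) if doc_id != "doc-3" else {"amount": "9000"}

def score_prompt(prompt_template: str, sample_size: int = 3) -> float:
    """Fraction of sampled documents where the prompt's output matches the truth."""
    doc_ids = random.sample(list(ground_truth_db), sample_size)
    correct = sum(
        run_prompt(prompt_template, d) == ground_truth_db[d] for d in doc_ids
    )
    return correct / sample_size

print(score_prompt("Extract the invoice amount as JSON."))  # 0.666... (2 of 3 match)
```

Competing prompt variants can then be compared on the same sample, keeping whichever scores highest.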

This iterative process of prompt refinement has significantly improved the accuracy of the AI system. Mulgaonkar notes, “Engineering the prompts has really helped us significantly increase the accuracy of the LLMs.”

Drip Capital's approach to AI implementation is remarkably pragmatic. Rather than attempting to develop its own LLMs, sophisticated retrieval-augmented generation (RAG), or complex fine-tuning, the company has focused on optimizing the use of existing models through careful prompt engineering.

The triumphant return of prompt engineering

In early 2023, the Washington Post called prompt engineering “the hottest new job in tech,” describing how companies were scrambling to hire specialists who could coax optimal results out of AI systems through carefully crafted text prompts. The article painted a picture of prompt engineers as modern-day wizards, able to unlock hidden capabilities in LLMs through their mastery of “prose programming.”

This enthusiasm was shared by other major publications and organizations, including the World Economic Forum in its Jobs of Tomorrow report. The sudden interest led to a flood of online courses, certifications, and job postings tailored specifically to prompt engineering roles.

However, the hype quickly met with skepticism. Critics argued that prompt engineering was a passing fad that would become obsolete as AI models improved and became more intuitive to use. A March 2024 article in IEEE Spectrum boldly proclaimed “AI prompt engineering is dead,” suggesting that automated prompt optimization would soon make human prompt engineers obsolete. The article cited research showing that AI-generated prompts often perform better than those created by human experts, leading some to question the long-term viability of the field.

Despite these criticisms, recent developments suggest that prompt engineering is far from dead; it continues to evolve and become more sophisticated. Drip Capital offers a compelling case study of how prompt engineering continues to play a critical role in leveraging AI for business operations.

Drip Capital has developed a sophisticated process that combines technical expertise with domain knowledge. The company's success shows that effective prompt engineering goes beyond simply crafting the right sequence of words. It includes:

  1. Understanding the specific business context and requirements
  2. Developing strategies to maintain the accuracy and reliability of AI systems
  3. Creating complex multi-step prompt strategies for advanced tasks such as document processing
  4. Collaborating with subject matter experts in finance and risk assessment to bring domain expertise into AI interactions
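A multi-step prompt strategy of the kind point 3 refers to could be sketched as below: one prompt classifies the document type, and a second, type-specific prompt extracts the relevant field. This is a hedged illustration under assumed names; `call_llm` is a placeholder returning canned answers, not a real model client.

```python
CLASSIFY = "Classify this trade document as 'invoice' or 'bill_of_lading': {text}"
EXTRACT = {
    "invoice": "From this invoice, extract the total amount: {text}",
    "bill_of_lading": "From this bill of lading, extract the port of loading: {text}",
}

def call_llm(prompt: str) -> str:
    """Placeholder: a real system would call an LLM API here."""
    return "invoice" if prompt.startswith("Classify") else "USD 25,000"

def process(text: str) -> dict:
    # Step 1: route the document to the right extraction prompt.
    doc_type = call_llm(CLASSIFY.format(text=text)).strip()
    # Step 2: run the type-specific extraction prompt.
    value = call_llm(EXTRACT[doc_type].format(text=text))
    return {"type": doc_type, "value": value}

print(process("Invoice No INV-1042 ... Total: USD 25,000"))
# {'type': 'invoice', 'value': 'USD 25,000'}
```

Splitting the task this way keeps each prompt narrow, which tends to be easier to evaluate and refine than one monolithic prompt.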

The company's AI system doesn't work in isolation. Drip Capital understands the critical nature of its financial transactions and has implemented a hybrid approach that combines AI processing with human oversight. “We have maintained a very minimal manual layer that works asynchronously,” explains Mulgaonkar. “The documents are digitized by the LLMs and the module provisionally approves a transaction. And then, in parallel, we have agents review the three most important parts of the documents.”

This human-in-the-loop system provides an additional layer of verification that ensures the accuracy of key data points while enabling significant efficiency gains. As trust in the AI system grows, Drip Capital plans to gradually reduce human involvement. “The idea is that we'll slowly phase this out as well,” Mulgaonkar explains. “As we continue to collect data on accuracy, we hope to gain enough trust and confidence to be able to eliminate it altogether.”
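The asynchronous hybrid flow described above might be modeled as in this minimal sketch: the LLM output provisionally approves a transaction immediately, while its key fields are queued for human review that happens in parallel. All names (`Transaction`, `KEY_FIELDS`, the statuses) are illustrative assumptions, not Drip Capital's implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Transaction:
    tx_id: str
    extracted: dict          # fields the LLM pulled from the documents
    status: str = "pending"

review_queue: List[Transaction] = []
# The handful of fields human agents re-check (hypothetical choice here).
KEY_FIELDS = ("amount", "buyer", "invoice_no")

def provisionally_approve(tx: Transaction) -> Transaction:
    tx.status = "provisionally_approved"  # AI path: transaction proceeds now
    review_queue.append(tx)               # human path: checked asynchronously
    return tx

def human_review(tx: Transaction, verified: dict) -> Transaction:
    # Confirm only the key fields against the human-verified values.
    ok = all(tx.extracted.get(k) == verified.get(k) for k in KEY_FIELDS)
    tx.status = "approved" if ok else "flagged"
    return tx

tx = provisionally_approve(
    Transaction("tx-1", {"amount": "25000", "buyer": "Acme", "invoice_no": "INV-1042"})
)
print(tx.status)  # provisionally_approved
```

The design choice is that human review never blocks the transaction; it only confirms or flags it after the fact, which is how the manual layer can stay "very minimal."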

Getting the most out of LLMs

Beyond document processing, Drip Capital is also exploring the use of AI in risk assessment. The company is experimenting with AI models that can predict liquidity forecasts and credit behavior based on extensive historical performance data. However, the company is proceeding cautiously in this area, mindful of compliance requirements in the financial sector.

Boog explains the approach to risk assessment: “The ideal is to really get to a comprehensive risk assessment… to have a decision engine that gives you a higher probability of determining whether this account is riskier or not and then what the risks are.”

However, Boog and Mulgaonkar stress that human judgment remains essential in risk assessment, especially when there are anomalies or major risks. “Technology definitely helps, but you still need a human factor to monitor everything, especially when there are risks,” notes Boog.

Drip Capital's success in implementing AI is partly due to its data advantage. As an established player in trade finance, the company has accumulated a wealth of historical data that serves as a solid foundation for its AI models. Boog highlights this advantage: “Because we did hundreds of thousands of transactions before adopting AI, we can learn a lot in the process. And then leveraging that data we already have to further optimize things definitely helps us.”

Looking ahead, Drip Capital is cautiously optimistic about further AI integration. The company is exploring conversational AI for customer communications, although Mulgaonkar notes that current technologies are not yet up to the task: “I don't think you can have a conversation with AI yet. It has reached the level of a very intelligent IVR, but it's not really something that can be handled entirely by AI.”

Drip Capital's experience with AI offers valuable lessons for other companies in the financial sector and beyond. Its success demonstrates the potential of generative AI to transform operations when implemented carefully, with a focus on practical applications and a commitment to maintaining high standards of accuracy and compliance.

As AI continues to evolve, Drip Capital's experience shows that companies don't have to build complex AI systems from scratch to realize significant benefits. Instead, a pragmatic approach that leverages existing models, focuses on prompt engineering, and maintains human oversight can deliver significant improvements in efficiency and productivity.
