
Data Lakehouse Onehouse secures $35 million to capitalize on the GenAI revolution

These days, you can hardly go an hour without reading about generative AI. While we're still in the embryonic phase of what some have dubbed the “steam engine” of the Fourth Industrial Revolution, there's little doubt that “GenAI” is shaping up to transform virtually every industry, from finance and healthcare to law and beyond.

Cool, user-facing applications may be generating the most buzz, but the companies driving this revolution are currently reaping the most benefits. Just this month, chipmaker Nvidia briefly became the world's most valuable company, a $3.3 trillion behemoth driven largely by demand for AI computing power.

But alongside GPUs (graphics processing units), companies also need infrastructure to manage the flow of data: to store, process, train on, analyze and ultimately realize the full potential of AI.

One company that wants to capitalize on this is Onehouse, a three-year-old Californian startup founded by Vinoth Chandar, who created the open source Apache Hudi project while working as a data architect at Uber. Hudi brings the benefits of data warehouses to data lakes, creating a so-called “data lakehouse” that supports actions such as indexing and performing real-time queries on large datasets, be it structured, unstructured or semi-structured data.

For example, an e-commerce company that continuously collects customer data, including orders, feedback and related digital interactions, needs a system that ingests all of this data and keeps it up to date, so that it can recommend products based on a user's activity. Hudi enables data ingestion from multiple sources with minimal latency and supports delete, update and insert (“upsert”) operations, which are critical for such real-time data use cases.
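For the curious, here is a minimal sketch of what such an upsert looks like with Hudi's Spark datasource, loosely following the Apache Hudi quickstart; the table name, columns, sample row and storage path are all illustrative, not taken from Onehouse.

```python
from pyspark.sql import SparkSession

# Assumes the Hudi Spark bundle is on the classpath, e.g. launched with
# --packages org.apache.hudi:hudi-spark3.4-bundle_2.12:0.14.0
spark = (
    SparkSession.builder
    .appName("hudi-upsert-sketch")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# A hypothetical batch of freshly ingested orders; in practice this might
# come from Kafka, a CDC feed or batch files landing in object storage.
orders = spark.createDataFrame(
    [("o-1", "u-42", 19.99, "2024-06-01 10:00:00")],
    ["order_id", "user_id", "total", "updated_at"],
)

hudi_options = {
    "hoodie.table.name": "orders",
    "hoodie.datasource.write.recordkey.field": "order_id",     # row identity
    "hoodie.datasource.write.precombine.field": "updated_at",  # latest wins
    "hoodie.datasource.write.operation": "upsert",             # insert-or-update
}

# "append" mode with operation=upsert updates existing rows by key and
# inserts new ones, keeping the table query-ready as data keeps arriving.
(orders.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3://example-bucket/lake/orders"))  # illustrative path
```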

Onehouse builds on that with a fully managed data lakehouse that helps companies deploy Hudi. Or, as Chandar puts it, it “starts the ingestion and standardization of data into open data formats” that can be used with just about all major tools in the data science, AI and machine learning ecosystems.

“Onehouse abstracts away the low-level data infrastructure buildout and helps AI companies focus on their models,” Chandar told TechCrunch.

Today, Onehouse announced that it has raised $35 million in a Series B funding round as it launches two new products designed to improve Hudi's performance and reduce the cost of cloud storage and processing.

Down in the (data) lakehouse

Onehouse advertisement on a London billboard.
Photo credits: Onehouse

Chandar launched Hudi as an internal project at Uber in 2016. The ride-hailing company donated the project to the Apache Foundation in 2019, and Hudi has since been adopted by the likes of Amazon, Disney and Walmart.

Chandar left Uber in 2019 and, after a brief stint at Confluent, founded Onehouse. The startup emerged from stealth in 2022 with $8 million in seed funding, following shortly after with a $25 million Series A round. Both rounds were co-led by Greylock Partners and Addition.

Those same VC firms have returned for the Series B round, though this time it's David Sacks' Craft Ventures that is leading it.

“The data lakehouse is quickly becoming the standard architecture for organizations that want to centralize their data to power new services such as real-time analytics, predictive ML and GenAI,” Michael Robinson, partner at Craft Ventures, said in a statement.

For context, data warehouses and data lakes are similar in their role as a central repository for aggregating data, but they do so in different ways: a data warehouse is ideal for processing and querying historical, structured data, while data lakes have emerged as a more flexible alternative for storing vast amounts of raw data in its original format, with support for multiple data types and high-performance querying.

This makes data lakes ideal for AI and machine learning workloads, since it's cheaper to store pre-transformed raw data, while more complex queries remain possible because the data can be kept in its original form.

The trade-off, however, is a whole new set of data management complexities that risk degrading data quality, given the wide range of data types and formats. This is partly what Hudi sets out to solve by bringing some key features of data warehouses to data lakes, such as ACID transactions to support data integrity and reliability, as well as improved metadata management for more diverse datasets.
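As a rough, hypothetical illustration of what those warehouse-style guarantees look like on a lake, the sketch below uses Hudi's Spark SQL support, where each UPDATE or DELETE statement is committed atomically to the table's timeline; the table, columns and path are invented for the example.

```python
from pyspark.sql import SparkSession

# Assumes the Hudi Spark bundle is on the classpath; the extension and
# catalog settings below enable Hudi's SQL DML support (Spark 3.2+).
spark = (
    SparkSession.builder
    .appName("hudi-acid-sketch")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .config("spark.sql.extensions",
            "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.hudi.catalog.HoodieCatalog")
    .getOrCreate()
)

# A hypothetical feedback table stored as open-format files on object storage.
spark.sql("""
    CREATE TABLE IF NOT EXISTS feedback (
        id STRING, user_id STRING, rating INT, updated_at TIMESTAMP
    ) USING hudi
    TBLPROPERTIES (primaryKey = 'id', preCombineField = 'updated_at')
    LOCATION 's3://example-bucket/lake/feedback'
""")

# Each statement commits atomically on Hudi's timeline, so concurrent readers
# see either the old state or the new state, never a half-applied change.
spark.sql("UPDATE feedback SET rating = 5 WHERE id = 'f-1'")
spark.sql("DELETE FROM feedback WHERE user_id = 'u-42'")
```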

Configuring data pipelines in Onehouse.
Photo credits: Onehouse

Because it's an open source project, any company can use Hudi. A quick look at the logos on Onehouse's website reveals some impressive users: AWS, Google, Tencent, Disney, Walmart, ByteDance, Uber and Huawei, to name a few. But the fact that such big-name companies build Hudi internally is indicative of the effort and resources required to construct it as part of an on-premises data lakehouse setup.

“While Hudi offers extensive capabilities for ingesting, managing and transforming data, companies still have to integrate about half a dozen open source tools to achieve their goal of a production-grade data lakehouse,” Chandar said.

That's why Onehouse offers a fully managed, cloud-native platform that ingests, transforms and optimizes data in a fraction of the time.

“Users can get an open data lakehouse up and running in less than an hour, with broad interoperability with all major cloud-native services, warehouses and data lake engines,” Chandar said.

The company was reticent to name its commercial customers, aside from the couple featured in its case studies, such as the Indian unicorn Apna.

“Because we're a young company, we are not currently publicly disclosing the full list of Onehouse's commercial customers,” Chandar said.

With a fresh $35 million in the bank, Onehouse is now expanding its platform with a free tool called Onehouse LakeView, which provides visibility into lakehouse functionality, offering insights into table statistics, trends, file sizes, timeline history and more. This builds on the existing observability metrics provided by the core Hudi project, adding extra context on workloads.

“Without LakeView, users have to spend a lot of time interpreting metrics and deeply understanding the entire stack to identify performance issues or inefficiencies in pipeline configuration,” Chandar said. “LakeView automates this and provides email alerts on good or bad trends, flagging data management needs to improve query performance.”

In addition, Onehouse is launching a new product called Table Optimizer, a managed cloud service that optimizes existing tables to speed up data ingestion and transformation.

“Open and interoperable”

There are countless other notable players in this space that can't be ignored. The likes of Databricks and Snowflake are increasingly embracing the lakehouse paradigm: earlier this month, Databricks reportedly doled out $1 billion to acquire a company called Tabular, with a view to creating a common lakehouse standard.

Onehouse is certainly operating in a competitive space, but it hopes that its focus on an “open and interoperable” system that makes it easier to avoid vendor lock-in will help it compete. It is essentially promising the ability to make a single copy of data universally accessible from just about anywhere, including Databricks, Snowflake, Cloudera and native AWS services, without having to build a separate data silo for each.
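To make that “single copy” idea concrete, here is a hedged sketch reusing the illustrative names from the earlier examples: a Hudi table is just data and metadata files on object storage, so any engine with a Hudi integration (Spark below; Trino, Presto and others ship their own connectors) can query the same path without duplicating the data.

```python
from pyspark.sql import SparkSession

# Same assumption as before: the Hudi Spark bundle is on the classpath.
spark = (
    SparkSession.builder
    .appName("hudi-read-sketch")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Read the table written earlier straight from its storage path; no export,
# no per-engine silo, just the one copy of the data.
orders = spark.read.format("hudi").load("s3://example-bucket/lake/orders")
orders.createOrReplaceTempView("orders")
spark.sql("SELECT user_id, SUM(total) AS spend FROM orders GROUP BY user_id").show()
```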

As with Nvidia in the GPU space, one can't ignore the opportunities that await any company in the data management space. Data is the cornerstone of AI development, and the lack of sufficient good-quality data is a major reason why many AI projects fail. But even when the data is there in bulk, companies still need the infrastructure to ingest, transform and standardize it to make it usable. That bodes well for Onehouse and its ilk.

“From a data management and processing perspective, I believe that high-quality data provided by a solid data infrastructure will play a critical role in bringing these AI projects into real-world production use cases, avoiding garbage-in/garbage-out data issues,” Chandar said. “We're seeing such demand from data lakehouse users as they struggle to scale the data processing and query requirements for building these newer AI applications on enterprise-scale data.”
