
Intel and others are committed to developing open generative AI tools for enterprises

Can generative AI designed for the enterprise (e.g. AI that autocompletes reports, spreadsheet formulas and so on) ever be interoperable? Along with a group of organizations including Cloudera and Intel, the Linux Foundation, the nonprofit that supports and maintains a growing number of open source efforts, wants to find out.

The Linux Foundation on Tuesday announced the launch of the Open Platform for Enterprise AI (OPEA), a project to promote the development of open, multi-vendor and composable (i.e. modular) generative AI systems. Led by the Linux Foundation's LF AI and Data organization, which focuses on AI- and data-related platform initiatives, OPEA's goal will be to pave the way for the release of "hardened," "scalable" generative AI systems that "leverage the best open source innovation from across the ecosystem," Ibrahim Haddad, executive director of LF AI and Data, said in a press release.

"OPEA will unlock new possibilities in AI by creating a detailed, composable framework that sits at the top of the technology stack," said Haddad. "This initiative is a testament to our mission to drive open source innovation and collaboration within the AI and data communities under a neutral and open governance model."

In addition to Cloudera and Intel, OPEA, one of the Linux Foundation's sandbox projects (a kind of incubator program), counts among its members corporate heavyweights such as IBM-owned Red Hat, Hugging Face, Domino Data Lab, MariaDB and VMware.

So what exactly might they build together? Haddad points to a few possibilities, such as "optimized" support for AI toolchains and compilers that allow AI workloads to run across different hardware components, as well as "heterogeneous" retrieval-augmented generation (RAG) pipelines.

RAG is becoming increasingly popular in enterprise generative AI applications, and it's not hard to see why. The responses and actions of most generative AI models are limited to the data they were trained on. But with RAG, a model's knowledge base can be expanded to include information outside of the original training data. RAG models reference this external information, which can take the form of proprietary company data, a public database, or a combination of both, before generating a response or executing a task.

A diagram explaining RAG models. Image credit: Intel
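To make the pattern concrete, here is a minimal sketch of the RAG loop described above, written in Python. The keyword-overlap retriever and the generate() stand-in are illustrative assumptions, not OPEA components or any vendor's API; a production pipeline would use an embedding-based retriever and a real model call.

```python
# Illustrative RAG loop: retrieve external context, then condition the
# model's prompt on it. All names here are hypothetical stand-ins.

DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by cloud services.",
    "The travel policy caps domestic airfare reimbursement at $500.",
    "Gaudi 2 is Intel's accelerator for deep learning workloads.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Stand-in for a call to a generative model; a real pipeline
    would invoke an LLM with the augmented prompt."""
    return f"[model output conditioned on]: {prompt}"

def rag_answer(query: str) -> str:
    # Prepend retrieved context so the model can ground its response
    # in data that was never part of its training set.
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(rag_answer("What is the airfare reimbursement cap?"))
```

The design point OPEA cares about is that each stage (retriever, data store, model) is a swappable component, which is why the lack of de facto standards across those components matters.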

Intel itself shared a few more details in a press release:

Companies are faced with a do-it-yourself approach (to RAG) because there are no de facto standards across components that would allow them to choose and deploy RAG solutions that are open and interoperable and that help them get to market quickly. OPEA intends to address these issues by working with the industry to standardize components, including frameworks, architecture blueprints and reference solutions.

Evaluation will also be an important part of OPEA's efforts.

In its GitHub repository, OPEA proposes a rubric to grade generative AI systems along four axes: performance, features, trustworthiness and "enterprise fit." Performance, as OPEA defines it, means "black box" benchmarks from real-world use cases. Features is an assessment of a system's interoperability, deployment options and ease of use. Trustworthiness examines an AI model's ability to guarantee "robustness" and quality. And enterprise fit focuses on the requirements to get a system up and running without major problems.

Rachel Roumeliotis, director of open source strategy at Intel, says that OPEA will work with the open source community to offer tests based on the rubric, and to provide assessments and grading of generative AI deployments on request.

OPEA's other efforts are up in the air at the moment. But Haddad floated the possibility of open model development, in the spirit of Meta's expanding Llama family and Databricks' DBRX. To that end, Intel has already contributed reference implementations to the OPEA repo for a chatbot, a document summarizer and a generative AI code generator optimized for its Xeon 6 and Gaudi 2 hardware.

Now, OPEA members are clearly invested in (and self-interested in) developing generative AI tools for enterprises. Cloudera recently launched partnerships to create what it calls an "AI ecosystem" in the cloud. Domino offers a suite of apps for building and testing business-oriented generative AI. And VMware, which is focused on the infrastructure side of enterprise AI, last August launched new "private AI" compute products.

The question is whether these vendors will actually work together to build cross-compatible AI tools under OPEA.

There is an obvious benefit to doing so: customers could draw on multiple vendors depending on their needs, resources and budgets. But history has shown that it's all too easy to become beholden to a single vendor. Let's hope that's not the outcome here.
