Niloom.ai today launched beta testing for its generative AI content creation platform for spatial computing.
Niloom.ai is a comprehensive platform that leverages GenAI throughout the spatial computing ecosystem to create, prototype, edit, and immediately publish sophisticated AR/VR content in a fraction of the time and at a fraction of the cost, said Amir Baradaran, CEO of Niloom.ai, in an interview with GamesBeat.
Niloom.ai is a streamlined Software-as-a-Service (SaaS) solution that consolidates the complete creative process from ideation and development to testing, collaboration and publishing.
“We are the first GenAI technology in the ecosystem and certainly one of the first to create content for spatial computing. You can generate custom assets like 3D models, 2D/360 images, music, sound effects, and even use text-to-speech to give your characters a voice,” said Baradaran. “I'm very excited to say that you can now personalize your character's personality with an AI agent in a really sophisticated way. We also offer video-to-animation (integrating Kinetix.tech). And then we optimized the process of inserting animations into each character. That is some of the heavy lifting we've done.”
He added: “Most importantly, you can easily create a complete story on a timeline that gives you a bird's eye view. You have sophisticated editing capabilities and interactivity, which is really vital. For me, gamification is an inherent part of the character of AR/VR.”
And there's a lot more in the pipeline, with new upgrades coming about every two weeks for things like revenue generation, buying and selling projects, web AR, and more.
By integrating over 100 key features into one platform, Niloom.ai reduces production time and costs, optimizes production workflows, and solves the interoperability problem of the spatial computing market. The browser-based, no-code platform eliminates the need to depend on expensive engineers and is easy to use for both professional and casual developers.
“Niloom.ai opens the floodgates for the creative community that has previously been excluded by the technical requirements of spatial computing content creation,” said Baradaran. “As one of the early adopters of spatial computing, I have experienced first-hand the constraints of having to rely on an army of engineers to bring my artwork to life. Niloom.ai is fundamentally changing the process of spatial computing content creation by breaking down the technical and cost barriers in the market and enabling anyone to create and publish AR/VR experiences in minutes.”
The Niloom.ai platform provides GenAI throughout the spatial computing ecosystem to create, prototype, and edit sophisticated AR/VR content. Using simple text or voice prompts, Niloom.ai's GenAI generates complete AR/VR experiences, personalized AI agents, and custom assets. It can now create and publish projects directly to Apple Vision Pro and Meta Quest headsets.
It can be used for advanced creation, editing, and prototyping. Developers can create immersive AR/VR experiences with advanced features, including interactive 3D models and animated characters with verbal communication, engaging storylines, detailed backgrounds, music, visual and sound effects, AI-driven voices, and more.
Editing tools enable live collaboration, precise editing, version control, testing and simulation. Prototyping enables simulation of scenes to facilitate feedback and collaboration.
Developers can capture a bird's eye view of entire projects using visual timelines and decision trees to “add logic” to scenes – enabling complex stories and limitless possibilities for user interaction: touch, hand gestures and verbal commands.
And they can be integrated directly with third-party tools such as Sketchfab, Kinetix.tech, Ready Player Me, Inworld and Google TTS for a complete solution. Niloom.ai is hardware and software agnostic, facilitating both content creation and quick publishing to all mobile spatial computing devices (iOS, Android) and headsets (Apple Vision Pro, Meta Quest).
It has a management system that helps developers streamline workflows using a cloud-based asset and project library, team management tools, and access to data and analytics.
Live demo
Baradaran did a live demo for me with the working technology.
“You can upload your own assets, import new ones from Sketchfab, or simply generate them, be it a 3D asset, characters, animations, 2D or 360-degree images, or music and sound effects. And all of these things can be brought together so you can control them and place them on a timeline,” he said.
He created a project in front of my eyes in minutes, something he said would previously have taken weeks.
“The most important thing is that we give content creators the opportunity to be part of it, even if they are not developers.”
“Over the past decade, I've seen demos of dozens of tools designed to simplify the creation of XR experiences,” said Ori Inbar, Niloom.ai advisor and co-founder of Augmented World Expo, in a statement. “Niloom.ai succeeds by enabling developers of all technical backgrounds to not only quickly prototype AR and VR experiences, but also go deeper and create sophisticated scenes and interactions.”
“Niloom.ai offers a breakthrough technology that will usher in a new era in spatial computing. This is exactly the kind of scalable, transformative software that major technology companies want to partner with or acquire to empower a new generation of content creators,” added Debu Purkayastha, strategic advisor to Niloom.ai and managing partner of 3rd Eye, in a statement. “What Niloom.ai has built is revolutionary; there's simply nothing else like it.”
Niloom.ai is now available in the US at Niloom.ai and on the iOS App Store, visionOS App Store, Meta Quest Store and more. The first 1,000 developers will receive exclusive early access to the platform, including a 14-day bonus on the Pro version, after which they will be offered the opportunity for exclusive beta subscriptions.
Baradaran has been working in the world of augmented reality for about 15 years, starting out as a content creator.
“I was an artist at the time who was excited about the world of spatial computing, augmented reality and virtual reality. I was really lucky to come across this technology,” he said. He has done exhibitions with the technology at the Louvre, the British Museum, Art Basel and elsewhere, and has taught courses on spatial computing at the Columbia University School of Engineering.
“I was one of the few artists who said, 'Hey, this is going to fundamentally change the way we create content, tell stories, and fundamentally change the way we see ourselves.' That was pretty exciting because the art world was also very cautious about this new technology. And I'm very happy to have been one of the early adopters, but also one of the early evangelists of this space.”
Recently, he was excited when Apple launched the Apple Vision Pro.
“We basically began building what has become a generative AI-powered content creation platform to create AR/VR content for spatial computing experiences,” he said.
He founded the company three years ago with some of his students and raised $2.5 million in a pre-seed round in 2021. The team consists of a core group of three people and is currently expanding, along with his marketing team.
“It was so hard to actually execute that vision that I needed to simplify the entire complex process of creating content,” he said. “It was technical, time-consuming and really expensive.”
“We've done our calculations. And we're very happy to say that the same project that takes you about six months with Unity takes us about six hours,” said Baradaran. “And that's a really difficult project. You can initiate it with us because we have a fully generative AI engine integrated.”
While many large technology platforms are siloed, Niloom.ai aims to address the ecosystem's pain points and make the technology interoperable.
Baradaran said it's unusual to see people walking around outside with an Apple Vision Pro on their heads. However, he points out that the form factor will change over time, and that it's not really natural to walk around with a smartphone in your hand, looking down all the time, either.