
Researchers connect two AI models, enabling them to speak

Scientists from the University of Geneva have filled a gap in AI by creating an artificial neural network that learns tasks and then communicates them to another AI, which can reproduce them.

Humans can grasp new tasks from brief instructions and describe the learned task well enough for another person to replicate it. This ability is integral to human communication and is a key feature of our conscious world.

This fascinating study, detailed in Nature Neuroscience, gives AI a form of human-like communication and learning that has long eluded the technology.

The project, led by Alexandre Pouget, a professor at the UNIGE Faculty of Medicine, alongside his team, delves into advanced techniques in natural language processing, a subfield of AI focused on machine understanding of and response to human language.

Pouget explains the present limitations of AI in this context, noting in an article published on the University of Geneva’s website: “Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, and even less of explaining it to another artificial intelligence so that it can reproduce it.”

The Geneva team enhanced an existing language-understanding artificial neural network (ANN), S-Bert.

They connected S-Bert to a smaller, simpler network simulating the human brain’s language perception and production regions: the Wernicke and Broca areas.

Through training, this network could execute tasks based on written English instructions and then convey these tasks linguistically to a “sister” network, allowing the two AIs to communicate task instructions purely through language.

Reidar Riveland, a Ph.D. student involved in the study, explained, “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons.”
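To make the architecture concrete, here is a minimal sketch of the idea: a large pre-trained sentence encoder produces an instruction embedding, and a much smaller network combines that embedding with a sensory stimulus to produce an action. This is an illustration only, not the study's actual model: the `embed_instruction` hashing trick stands in for S-Bert, and the `SensorimotorNet` class, its layer sizes, and its untrained random weights are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 64  # stand-in for the sentence-embedding size

def embed_instruction(text: str) -> np.ndarray:
    """Hypothetical stand-in for S-Bert: the real study uses a 300M-neuron
    pre-trained language model; here we just hash words into a vector."""
    vec = np.zeros(EMBED_DIM)
    for word in text.lower().split():
        vec[hash(word) % EMBED_DIM] += 1.0
    return vec / max(1.0, np.linalg.norm(vec))

class SensorimotorNet:
    """The small 'few thousand neurons' network: maps an instruction
    embedding plus a visual stimulus to an action vector."""
    def __init__(self, stim_dim: int = 8, hidden: int = 32, action_dim: int = 4):
        self.w1 = rng.normal(0.0, 0.1, (EMBED_DIM + stim_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, action_dim))

    def act(self, instruction: str, stimulus: np.ndarray) -> np.ndarray:
        # Concatenate language and sensory input, pass through one hidden layer
        x = np.concatenate([embed_instruction(instruction), stimulus])
        h = np.tanh(x @ self.w1)   # hidden representation
        return h @ self.w2         # action readout (untrained in this sketch)

net = SensorimotorNet()
action = net.act("point to the brighter stimulus", np.ones(8))
print(action.shape)  # (4,)
```

In the study, training such a network on instruction-conditioned tasks is what lets it generalize to unseen instructions; the sketch above only shows the data flow, not the training procedure.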

The tasks ranged from simple instructions, like pointing to a location, to more complex commands requiring the identification of subtle contrasts between visual stimuli.

Here are the study’s key achievements:

  • The AI system could both comprehend and execute instructions, accurately performing new, unseen tasks 83% of the time based on linguistic instructions alone.
  • The system could generate descriptions of learned tasks in a way that allowed a second AI to understand and replicate these tasks with a similar success rate.

This furthers the potential for AI models to learn and communicate tasks linguistically, opening new opportunities in robotics.

It integrates linguistic understanding with sensorimotor functions, meaning an AI could converse and understand when an instruction asks it to perform a task like grabbing something off a shelf or moving in a certain direction.

“The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding one another,” the researchers shared of the study.

Alongside recent massive investments in AI robotics companies like Figure AI, intelligent androids might be closer to reality than we expect.
