AI research is fueled by the pursuit of ever-greater sophistication, which includes training systems to think and behave like humans.
The end goal? Who knows. The goal for now? To create autonomous, generalized AI agents capable of performing a wide selection of tasks.
This concept is known as artificial general intelligence (AGI) or superintelligence, terms often used to mean the same thing.
It’s difficult to pinpoint precisely what AGI entails because there’s virtually zero consensus on what ‘intelligence’ is, or indeed, when or how artificial systems might achieve it.
Some even argue that AI, in its current state, can never truly attain general intelligence.
Professor Tony Prescott and Dr. Stuart Wilson from the University of Sheffield described generative language models, like ChatGPT, as inherently limited because they’re “disembodied” and lack any sensory perception or grounding in the natural world.
Meta’s chief AI scientist, Yann LeCun, has said even a house cat’s intelligence is far more advanced than today’s best AI systems.
“But why aren’t those systems as smart as a cat?” LeCun asked at the World Government Summit in Dubai.
“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually a lot better than the biggest LLMs. That tells you we’re missing something conceptually big to get machines to be as intelligent as animals and humans.”
While these skills might not be essential to achieving AGI, there’s some consensus that moving complex AI systems from the lab into the real world will require adopting behaviors similar to those observed in natural organisms.
So, how can this be achieved? One approach is to dissect elements of cognition and work out how AI systems can mimic them.
A previous DailyAI essay investigated curiosity and its ability to guide organisms toward new experiences and objectives, fueling the collective evolution of the natural world.
But there’s another emotion – another essential component of our existence – from which AGI may benefit. And that’s fear.
How AI can learn from biological fear
Far from being a weakness or a flaw, fear is one of evolution’s most potent tools for keeping organisms safe.
The amygdala is the central structure that governs fear in vertebrates. In humans, it’s a small, almond-shaped structure nestled deep within the brain’s temporal lobes.
Often dubbed the “fear center,” the amygdala serves as an early warning system, continuously scanning incoming sensory information for potential threats.
When a threat is detected – whether it’s the sudden lurch of a braking car ahead or a shifting shadow in the darkness – the amygdala springs into action, triggering a cascade of physiological and behavioral changes optimized for rapid defensive response:
- Heart rate and blood pressure surge, priming the body for “fight or flight”
- Attention narrows and sharpens, homing in on the source of danger
- Reflexes quicken, readying muscles for split-second evasive action
- Cognitive processing shifts to a rapid, intuitive, “better safe than sorry” mode
This response is not a simple reflex but a highly adaptive, context-sensitive suite of changes that flexibly tailors behavior to the nature and severity of the threat at hand.
It’s also exceptionally quick: we become consciously aware of a threat around 300–400 milliseconds after initial detection.
Moreover, the amygdala doesn’t operate in isolation. It’s densely interconnected with other key brain regions involved in perception, memory, reasoning, and action.
Why fear might benefit AI
So, why does fear matter in the context of AI anyway?
In biological systems, fear serves as a vital mechanism for rapid threat detection and response. By mimicking this mechanism in AI, we can potentially create more robust and adaptable artificial systems.
This is especially pertinent to autonomous systems that interact with the real world. Case in point: despite AI capabilities exploding in recent years, driverless cars still tend to fall short on safety and reliability.
Regulators are probing numerous fatal incidents involving self-driving cars, including Tesla models with Autopilot and Full Self-Driving features.
Speaking to the Guardian in 2022, Matthew Avery, director of research at Thatcham Research, explained why driverless cars have been so difficult to refine:
“Number one is that this stuff is harder than manufacturers realized,” Avery said.
Avery estimates that around 80% of autonomous driving functions involve relatively straightforward tasks like lane following and basic obstacle avoidance.
The remaining functions, however, are much more difficult. “The last 10% is really difficult,” Avery emphasized, like “when you’ve got, you know, a cow standing in the middle of the road that doesn’t want to move.”
Sure, cows aren’t fear-inspiring in their own right. But any attentive driver would probably fancy their chances of stopping if they’re hurtling towards one at speed.
An AI system relies on its training and technology to see the cow and make the right decision. That process isn’t always quick or reliable enough, hence a high risk of collisions and accidents, especially when the system encounters something it’s not trained to understand.
Imbuing AI systems with fear might provide an alternative, quicker, and more efficient means of reaching that decision.
In biological systems, fear triggers rapid, instinctive responses that don’t require complex processing. For instance, a human driver might instinctively brake at the mere suggestion of an obstacle, even before fully processing what it is.
This near-instantaneous reaction, driven by fear, could be the difference between a near-miss and a collision.
Moreover, fear-based responses in nature are highly adaptable and generalize well to novel situations.
An AI system with a fear-like mechanism could be better equipped to handle unexpected scenarios.
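This idea can be caricatured as a two-channel controller. The sketch below is a minimal, hypothetical illustration (all names and thresholds are assumptions, not any production system): a cheap, fast “fear” heuristic can trigger braking before a slower deliberative planner has finished classifying an obstacle.

```python
# A hypothetical two-channel controller: a crude, fast "fear" check can
# veto the planner's action before full scene understanding completes.

def fear_channel(distance_m: float, closing_speed_ms: float) -> bool:
    """Crude threat check: is time-to-collision below a safety threshold?"""
    if closing_speed_ms <= 0:
        return False  # not closing in on anything
    time_to_collision = distance_m / closing_speed_ms
    return time_to_collision < 1.5  # seconds; tuned conservatively

def choose_action(distance_m: float, closing_speed_ms: float,
                  planner_action: str) -> str:
    # The fear channel overrides the planner whenever it fires, trading the
    # occasional unnecessary stop for a faster response to genuine threats.
    if fear_channel(distance_m, closing_speed_ms):
        return "brake"
    return planner_action

# An obstacle 20 m ahead while closing at 25 m/s (TTC = 0.8 s) triggers
# braking even if the planner would otherwise continue.
print(choose_action(20.0, 25.0, "continue"))   # brake
print(choose_action(200.0, 25.0, "continue"))  # continue
```

The design choice mirrors the biology described above: the fast channel is deliberately simple and conservative, because its job is speed, not accuracy.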
Deconstructing fear: insights from the fruit fly
We’re far from developing artificial systems that replicate the integrated, specialized neural regions of biological brains, but that doesn’t mean we can’t model those mechanisms in other ways.
So, let’s zoom out from the amygdala and look at how invertebrates – small insects, for instance – detect and process fear.
While they don’t have a structure directly analogous to the amygdala, that doesn’t mean they lack circuitry that achieves a similar objective.
For example, recent studies of the fear responses of Drosophila melanogaster, the common fruit fly, have yielded intriguing insights into the fundamental building blocks of primitive emotion.
In an experiment conducted at Caltech in 2015, researchers led by David Anderson exposed flies to an overhead shadow designed to mimic an approaching predator.
Using high-speed cameras and machine vision algorithms, they meticulously analyzed the flies’ behavior, looking for signs of what Anderson calls “emotion primitives” – the basic components of an emotional state.
Remarkably, the flies exhibited a set of behaviors that closely paralleled the fear responses seen in mammals.
When the shadow appeared, the flies froze in place and cocked their wings at an angle, preparing for a rapid escape.
As the threat persisted, some flies took flight, darting away from the shadow at high speed. Others remained frozen for an extended period, suggesting a state of heightened arousal and vigilance.
Crucially, these responses weren’t mere reflexes triggered automatically by the visual stimulus. Instead, they appeared to reflect a persistent internal state, a kind of “fly fear” that endured even after the threat had passed.
This was evident in the fact that the flies’ heightened defensive behaviors could be elicited by a different stimulus (a puff of air) even minutes after the initial shadow exposure.
Moreover, the intensity and duration of the fear response scaled with the level of threat. Flies exposed to multiple shadow presentations showed progressively stronger and longer-lasting defensive behaviors, indicating a form of “fear learning” that allowed them to calibrate their response to the severity and frequency of the danger.
As Anderson and his team argue, these findings suggest that the building blocks of emotional states – persistence, scalability, and generalization – are present even in the simplest creatures.
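Those three primitives can be captured in a few lines of code. The toy model below is entirely hypothetical (it is not the Caltech team’s model): fear is a single scalar that accumulates with each threat, decays slowly afterwards, and gates defensive behavior regardless of which stimulus is currently present.

```python
# A toy "emotion primitive" model: one scalar fear level exhibiting
# persistence (slow decay), scalability (accumulation across threats),
# and generalization (state-driven response, independent of stimulus type).

class FearState:
    def __init__(self, decay: float = 0.9, threshold: float = 0.5):
        self.level = 0.0
        self.decay = decay          # persistence: slow return to baseline
        self.threshold = threshold  # defensive behavior fires above this

    def step(self, threat: float = 0.0) -> None:
        # Scalability: each threat presentation adds to the current level.
        self.level = self.level * self.decay + threat

    def defensive(self) -> bool:
        # Generalization: the response depends on internal state, not on
        # which stimulus (shadow, air puff) is present right now.
        return self.level > self.threshold

fly = FearState()
fly.step(threat=1.0)    # overhead shadow appears
for _ in range(5):
    fly.step()          # shadow gone; state decays but persists
print(fly.defensive())  # True: 1.0 * 0.9**5 ≈ 0.59, still above 0.5
```

Repeated `step(threat=...)` calls push the level higher and keep it elevated longer, a crude analogue of the “fear learning” observed in the flies.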
If we can decode how simpler organisms like fruit flies process and respond to threats, we can potentially extract the core principles of adaptive, self-preserving behavior.
Primitive forms of fear could then be applied to develop AI systems that are more robust, safer, and better attuned to real-world risks and challenges.
Infusing AI with fear circuitry
It’s a compelling theory, but can AI be imbued with an authentic, functional form of ‘fear’ in practice?
One intriguing study examined exactly that, with the aim of improving the safety of driverless cars and other autonomous systems.
“Fear-Neuro-Inspired Reinforcement Learning for Safe Autonomous Driving,” led by Chen Lv at Nanyang Technological University, Singapore, developed a fear-neuro-inspired reinforcement learning (FNI-RL) framework for improving the performance of driverless cars.
By constructing AI systems that can recognize and respond to the subtle cues and patterns that trigger human defensive driving – what the researchers term “fear neurons” – we may be able to create self-driving cars that navigate the road with the intuitive caution and risk sensitivity they need.
The FNI-RL framework translates key principles of the brain’s fear circuitry into a computational model of threat-sensitive driving, allowing an autonomous vehicle to learn and deploy adaptive defensive strategies in real time.
It involves three key components modeled after core elements of the neural fear response:
- A “fear model” that learns to recognize and assess driving situations that signal heightened collision risk, playing a role analogous to the threat-detection functions of the amygdala.
- An “adversarial imagination” module that mentally simulates dangerous scenarios, allowing the system to safely “practice” defensive maneuvers without real-world consequences – a form of risk-free learning reminiscent of the mental rehearsal capacities of human drivers.
- A “fear-constrained” decision-making engine that weighs potential actions not only by their immediately expected rewards (e.g. progress towards a destination), but also by their assessed level of risk as gauged by the fear model and adversarial imagination components. This mirrors the amygdala’s role in flexibly guiding behavior based on an ongoing calculus of threat and safety.
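To make the third component concrete, here is an illustrative sketch of fear-constrained action selection. The function names, numbers, and veto rule are assumptions for illustration, not the paper’s implementation: actions whose assessed risk exceeds a threshold are vetoed outright, and the survivors are ranked by reward minus a risk penalty.

```python
# Hypothetical fear-constrained decision-making: a risk estimate (standing in
# for the fear model) first vetoes unacceptably dangerous actions, then
# penalizes residual risk when ranking the remaining candidates by reward.

def fear_constrained_choice(actions, reward_fn, risk_fn,
                            risk_limit=0.3, risk_weight=2.0):
    # Keep only actions the "fear model" deems acceptably safe...
    safe = [a for a in actions if risk_fn(a) <= risk_limit]
    candidates = safe if safe else list(actions)  # least-bad fallback
    # ...then trade reward against residual risk among the survivors.
    return max(candidates, key=lambda a: reward_fn(a) - risk_weight * risk_fn(a))

# Toy driving step: overtaking pays best but is risky; braking is safest.
rewards = {"overtake": 1.0, "follow": 0.6, "brake": 0.2}
risks = {"overtake": 0.7, "follow": 0.2, "brake": 0.05}
print(fear_constrained_choice(rewards, rewards.get, risks.get))  # follow
```

Overtaking is vetoed by the risk limit, and following beats braking on reward, so the cautious-but-productive action wins, the same trade-off the bullet above describes.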
To put the system through its paces, the researchers tested it in a series of high-fidelity driving simulations featuring difficult, safety-critical scenarios:
- Sudden cut-ins and swerves by aggressive drivers
- Erratic pedestrians jaywalking into traffic
- Sharp turns and blind corners with limited visibility
- Slick roads and poor weather conditions
Across these tests, the FNI-RL-equipped vehicles demonstrated remarkable safety performance, consistently outperforming human drivers and traditional reinforcement learning (RL) techniques at avoiding collisions and exercising defensive driving skills.
In one striking example, the FNI-RL system successfully navigated a sudden, high-speed traffic merger with a 90% success rate, compared with just 60% for a state-of-the-art RL baseline.
It even achieved safety gains without sacrificing driving performance or passenger comfort.
In other tests, the researchers probed the FNI-RL system’s ability to learn and generalize defensive strategies across driving environments.
In a simulation of a busy city intersection, the AI learned in just a few trials to recognize the telltale signs of a reckless driver – sudden lane changes, aggressive acceleration – and pre-emptively adjust its own behavior to give such drivers a wider berth.
Remarkably, the system was then able to transfer this learned wariness to a novel highway driving scenario, automatically registering dangerous cut-in maneuvers and responding with evasive action.
This demonstrates the potential of neurally inspired emotional intelligence to enhance the safety and robustness of autonomous driving systems.
By endowing vehicles with a “digital amygdala” tuned to the visceral cues of road risk, we may be able to create self-driving cars that navigate the challenges of the open road with a fluid, proactive defensive awareness.
Towards a science of emotionally-aware robotics
While recent AI advancements have relied on brute-force computational power, researchers are now drawing inspiration from human emotional responses to create smarter and more adaptive artificial systems.
This paradigm, named “bio-inspired AI,” extends beyond self-driving cars to fields like manufacturing, healthcare, and space exploration.
There are many exciting angles to explore. For example, robotic hands are being developed with “digital nociceptors” that mimic pain receptors, enabling swift reactions to potential damage.
In terms of hardware, IBM’s bio-inspired analog chips use “memristors” to store a range of numerical values, reducing data transmission between memory and processor.
Similarly, researchers at the Indian Institute of Technology Bombay have designed a chip for spiking neural networks (SNNs), which closely mimic the function of biological neurons.
Professor Udayan Ganguly reports that this chip achieves “5,000 times lower energy per spike at a similar area and 10 times lower standby power” compared to standard designs.
These advancements in neuromorphic computing bring us closer to what Ganguly describes as “a particularly low-power neurosynaptic core and real-time on-chip learning mechanism,” key elements for autonomous, biologically inspired neural networks.
Combining nature-inspired AI technology with architectures informed by natural emotional states like fear or curiosity could thrust AI into a wholly new state of being.
As researchers push those boundaries, they’re not only creating more efficient machines – they’re potentially birthing a new form of intelligence.
As this line of research evolves, autonomous machines might roam the world amongst us, reacting to unpredictable environmental cues with curiosity, fear, and other emotions considered distinctly human.
The impacts? That’s another story altogether.