
AI has a big and growing carbon footprint, but there are potential solutions on the horizon

Given the large problem-solving potential of artificial intelligence (AI), it is not unreasonable to hope that AI could also help us tackle the climate crisis. However, once you look at the energy requirements of AI models, it becomes clear that the technology is as much a part of the climate problem as it is a solution.

The emissions come from AI-related infrastructure, such as the construction and operation of the data centers that process the massive amounts of data required to build and run these systems.

But different technological approaches to building AI systems could help reduce their carbon footprint. Two emerging technologies in particular are promising: spiking neural networks and lifelong learning.

The lifespan of an AI system can be divided into two phases: training and inference. During training, a relevant dataset is used to build and fine-tune the system. During inference, the trained system generates predictions on previously unseen data.
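To make the two phases concrete, here is a minimal, hypothetical Python sketch (a toy example of my own, not from the article): a one-parameter model is fitted to data during training, and the fitted model is then used for inference on an input it has never seen.

```python
# Toy illustration of the two phases (hypothetical example).
# Training data: inputs and targets that happen to follow y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0                 # the model's single parameter
learning_rate = 0.01

# Training: repeatedly adjust w to reduce the prediction error.
for _ in range(1000):
    for x, y in zip(xs, ys):
        error = w * x - y
        w -= learning_rate * error * x   # gradient step for the squared error

# Inference: predict for a previously unseen input.
print(round(w * 5.0, 2))   # ~10.0
```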

For example, training an AI for use in self-driving cars would require a dataset containing many different driving scenarios and the decisions made by human drivers.

After the training phase, the AI system can predict effective maneuvers for a self-driving car. Artificial neural networks (ANNs) are the underlying technology used in most current AI systems.

They consist of many different elements, known as parameters, whose values are adjusted during the training phase. These parameters can number more than 100 billion.

While a large number of parameters improves the capabilities of ANNs, it also makes training and inference resource-intensive. To put things into perspective, training GPT-3 (the precursor to the current ChatGPT) produced 502 tons of carbon, the equivalent of driving 112 gasoline-powered cars for a year.
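As a rough back-of-the-envelope check of that comparison (the per-car figure below is my own assumption, close to the widely cited US EPA estimate of about 4.6 tons of CO₂ per gasoline car per year, and not from the article):

```python
# Assumed figure: ~4.5 tons of CO2 per gasoline-powered car per year.
training_emissions_tons = 502
car_emissions_tons_per_year = 4.5
print(training_emissions_tons / car_emissions_tons_per_year)   # ~112 cars driven for a year
```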

GPT-3 continues to emit 8.4 tons of CO₂ per year due to inference. Since the AI boom began in the early 2010s, the energy demands of AI systems known as large language models (LLMs) – the type of technology behind ChatGPT – have increased by a factor of 300,000.

As AI models become more ubiquitous and sophisticated, this trend will continue, potentially making AI a major contributor to CO₂ emissions. In fact, current estimates could understate AI's actual carbon footprint, due to the lack of standardized and accurate techniques for measuring AI-related emissions.



Spiking neural networks

The two emerging technologies mentioned earlier, spiking neural networks (SNNs) and lifelong learning (L2), have the potential to reduce AI's ever-growing carbon footprint, with SNNs offering an energy-efficient alternative to ANNs.

ANNs work by processing and learning patterns from data in order to make predictions. They work with decimal numbers. To perform accurate calculations, especially when multiplying decimal numbers together, the computer needs to be very precise. Because of these decimals, ANNs require a lot of computing power, memory and time.
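As a hypothetical illustration of where that cost comes from (a toy example, not taken from the article), here is a single fully connected ANN layer: every input is multiplied by every weight at full decimal precision, so the number of floating-point operations grows with the size of the layer regardless of what the data looks like.

```python
inputs  = [0.21, -0.47, 0.88, 0.05]        # decimal (floating-point) activations
weights = [                                 # one row of weights per output neuron
    [0.12, -0.30, 0.55, 0.07],
    [-0.91, 0.44, 0.10, 0.26],
    [0.33, 0.08, -0.62, 0.71],
]

outputs = []
multiplications = 0
for row in weights:
    total = 0.0
    for x, w in zip(inputs, row):
        total += x * w                      # one floating-point multiply-accumulate
        multiplications += 1
    outputs.append(max(total, 0.0))         # ReLU activation

print(outputs, multiplications)             # 12 multiplications for a tiny 4-to-3 layer
```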

The larger and more complex such networks become, the more energy-intensive ANNs are. Both ANNs and SNNs are inspired by the brain, which contains billions of neurons (nerve cells) connected to one another via synapses.

Like the brain, ANNs and SNNs also have components that researchers call neurons, although these are artificial rather than biological. The main difference between the two kinds of neural networks is the way individual neurons transmit information to one another.

Neurons in the human brain communicate with one another by transmitting intermittent electrical signals called spikes. The spikes themselves contain no information. Instead, the information lies in the timing of these spikes. This binary, all-or-nothing characteristic of spikes (often represented as 0 or 1) means that neurons are active when they produce spikes and inactive otherwise.

This is one of the reasons why processing in the brain is so energy-efficient.

Just as Morse code uses specific sequences of dots and dashes to convey messages, SNNs use patterns or timings of spikes to process and transmit information. So while the artificial neurons in ANNs are always active, SNNs consume energy only when a spike occurs.

Otherwise, the energy requirement is close to zero. SNNs can be up to 280 times more energy efficient than ANNs.
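As a hedged sketch of the idea (a textbook leaky integrate-and-fire neuron, not the specific models discussed in the article), the neuron below receives binary spikes, accumulates them on a leaky membrane, and emits a 1 only when the membrane crosses a threshold; most time steps produce 0s, which is what keeps downstream activity sparse.

```python
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: binary spikes in, binary spikes out."""
    membrane = 0.0
    output_spikes = []
    for s in input_spikes:               # each s is 0 or 1 at one time step
        membrane = leak * membrane + weight * s
        if membrane >= threshold:        # fire a spike and reset the membrane
            output_spikes.append(1)
            membrane = 0.0
        else:
            output_spikes.append(0)
    return output_spikes

# A sparse train of incoming spikes produces an even sparser train of outgoing ones.
print(lif_neuron([1, 0, 0, 1, 1, 0, 1, 0, 0, 1]))   # [0, 0, 0, 1, 0, 0, 1, 0, 0, 0]
```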

My colleagues and I are developing learning algorithms for SNNs that could bring them even closer to the energy efficiency of the brain. The lower computational cost also means that SNNs may be able to make decisions more quickly.

These properties make SNNs useful for a wide range of applications, including space exploration, defense and self-driving cars, due to the limited energy sources available in these scenarios.

L2 is another strategy we are working on to reduce the overall energy consumption of ANNs over their lifetime.

Training ANNs sequentially (where the systems learn from a series of datasets, one after another) on new problems causes them to forget their previous knowledge while learning new tasks. ANNs then need to be retrained from scratch whenever their operating environment changes, further increasing AI-related emissions.

L2 is a set of algorithms that makes it possible to train AI models on multiple tasks one after another with little or no forgetting. L2 allows models to learn throughout their lifetimes by building on their existing knowledge, without having to retrain them from scratch.
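As an illustration of one common family of L2 methods – experience replay – here is a generic toy sketch (not necessarily the specific algorithms we are developing): a few stored examples from earlier tasks are mixed into the training of each new task, so old knowledge keeps being revisited rather than overwritten.

```python
import random

random.seed(0)
replay_buffer = []   # a small memory of examples kept from earlier tasks

def train_on_task(update_step, task_data, keep=2):
    """Update the model on a new task while replaying stored examples from old tasks."""
    replayed = random.sample(replay_buffer, min(len(replay_buffer), len(task_data)))
    for example in task_data + replayed:
        update_step(example)                       # one learning step per example
    replay_buffer.extend(random.sample(task_data, min(keep, len(task_data))))

# Stand-in "model": just count how often each task's examples are seen.
seen = {}
def update_step(example):
    task, _ = example
    seen[task] = seen.get(task, 0) + 1

train_on_task(update_step, [("task_A", i) for i in range(5)])
train_on_task(update_step, [("task_B", i) for i in range(5)])   # task A examples are replayed here
print(seen)   # task_A is still revisited while task_B is being learned
```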

The field of AI is growing rapidly, and other potential advances are emerging that may reduce the energy requirements of this technology. For example, smaller AI models can be built that have the same predictive capabilities as a larger model.

Advances in quantum computing – a different approach to building computers that harnesses phenomena from the world of quantum physics – would also enable faster training and inference with ANNs and SNNs. The superior computing capabilities of quantum computing could allow us to find energy-efficient solutions for AI at a much larger scale.

The challenge of climate change requires us to try to find solutions for fast-developing areas like AI before their carbon footprint becomes too large.
