
AI is not a magic wand – it has built-in problems that are difficult to fix and can be dangerous

By now we have all heard and read a lot about artificial intelligence (AI). You have probably tried some of the countless AI tools that have become available. For some, AI seems like a magic wand that predicts the future.

But AI is not perfect. A supermarket meal planner in Aotearoa New Zealand gave customers toxic recipes, a New York City chatbot advised people to break the law, and Google's AI Overview told people to eat rocks.

At its core, an AI tool is a particular system that solves a particular problem. With any AI system, we should adjust our expectations of its capabilities – and many of these depend on how the AI was built.

Let’s examine some inherent flaws of AI systems.

Real world problems

An inherent problem with all AI systems is that they are not 100% accurate in real-world settings. For example, a predictive AI system will be trained using data points from the past.

If the AI then encounters something new that does not resemble its training data, it will most likely be unable to make the right decision.
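As a toy illustration, here is a minimal sketch in plain Python (with made-up numbers; a simple least-squares line stands in for a predictive AI system). Inside the range it was trained on, its predictions look reasonable, but on an input unlike anything it has seen they fail badly.

```python
# Minimal sketch: a simple least-squares line fitted on past data
# (a stand-in for a predictive AI system) and what happens when it is
# asked about an input far outside its training range.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": the underlying process is quadratic, but we only ever
# observe x between 0 and 9, where it happens to look almost linear.
train_x = list(range(10))
train_y = [0.05 * x ** 2 + x for x in train_x]
slope, intercept = fit_line(train_x, train_y)

# Inside the training range the predictions look reasonable...
print("x=5:   predicted", round(slope * 5 + intercept, 1), "actual", 0.05 * 5 ** 2 + 5)
# ...but far outside it they are badly wrong.
print("x=100: predicted", round(slope * 100 + intercept, 1), "actual", 0.05 * 100 ** 2 + 100)
```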

As a hypothetical example, take a military aircraft equipped with an AI-powered autopilot. This system works thanks to a "knowledge base" acquired through training. But AI is not a magic wand; it simply performs mathematical calculations. An adversary could create obstacles the aircraft's AI cannot "see" because they are not in the training data, with potentially disastrous consequences.

Unfortunately, there is not much we can do about this problem other than trying to train the AI for every possible circumstance we know of, which can be an insurmountable task.

Bias in training data

You may have heard of AI making biased decisions. Bias usually occurs when we have unbalanced data. Simply put, this means that when training the AI system we show it too many examples of one type of outcome and very few of another.

Take the example of an AI system trained to predict the likelihood of a particular person committing a crime. If the crime data used to train the system contains mostly people from group A (say, a particular ethnic group) and very few from group B, the system will not learn equally about both groups.

Its predictions for group A will therefore give the impression that these people are more likely to commit crimes than people from group B. If the system is used uncritically, this distortion can have serious ethical consequences.

Fortunately, developers can address this problem by "balancing" the dataset. This can involve various approaches, including the use of synthetic data – computer-generated, pre-labeled data for testing and training AI, built with controls to guard against bias.
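To give a rough idea of what "balancing" can look like, here is a minimal sketch in plain Python (with made-up labels) that uses simple random oversampling of the under-represented group; real projects might instead generate synthetic examples, as mentioned above.

```python
# Minimal sketch: rebalancing an unbalanced dataset by randomly
# oversampling the under-represented group until both groups are the
# same size. (Real projects might generate synthetic data instead.)
import random
from collections import Counter

random.seed(0)

# Hypothetical training records: 95 examples from group "A", only 5 from "B".
dataset = [("features of a person", "A")] * 95 + [("features of a person", "B")] * 5
print("before:", Counter(label for _, label in dataset))

# Group the records by label, then top up each smaller group by
# resampling from it until it matches the largest group.
groups = {}
for record in dataset:
    groups.setdefault(record[1], []).append(record)

target_size = max(len(records) for records in groups.values())
balanced = []
for records in groups.values():
    balanced.extend(records)
    balanced.extend(random.choices(records, k=target_size - len(records)))

print("after: ", Counter(label for _, label in balanced))
```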

To prevent AI systems from spreading bias, balanced training data is crucial.
Comuzi / © BBC / Better Images of AI / Surveillance View A

Out of date

Another problem with AI can arise when it was trained "offline" and is not up to date with the dynamics of the problem it is supposed to work on.

A simple example would be an AI system designed to predict the daily temperature in a city. Its training data contains all the previous temperature data for that location.

Suppose that after the AI has completed training and is deployed, a severe climate event disrupts the usual weather dynamics. Because the AI system producing the forecasts was trained on data that does not account for this disruption, its forecasts become increasingly inaccurate.

One way to address this is to train the AI "online" – that is, to regularly show it the latest temperature data while it is being used for forecasting.
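To make the contrast concrete, here is a minimal sketch in plain Python (with invented temperature readings) of a naive forecaster: an "offline" version frozen at training time keeps predicting the old average, while an "online" version updates itself with each new reading.

```python
# Minimal sketch: an "offline" forecaster frozen at training time versus
# an "online" forecaster that keeps updating as new readings arrive.
# Both simply predict the average of the data they have seen.

historical = [14.0, 15.5, 13.8, 14.9, 15.2]           # pre-deployment training data
offline_forecast = sum(historical) / len(historical)   # fixed once training ends

online_history = list(historical)

def online_forecast():
    # The online model's prediction reflects everything seen so far.
    return sum(online_history) / len(online_history)

# After deployment, a shift in the weather pushes temperatures upward.
new_readings = [18.0, 19.5, 20.1, 21.4]
for today in new_readings:
    print(f"offline: {offline_forecast:.1f}  online: {online_forecast():.1f}  actual: {today}")
    online_history.append(today)  # only the online model keeps learning
```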

This sounds like a great solution, but online training carries its own risks. We can let the AI system train itself on the latest data, but it may spin out of control.

Broadly speaking, this can happen because of chaos theory, which, in simple terms, tells us that most AI systems are sensitive to initial conditions. If we do not know what data the system will encounter, we cannot know how to adjust the initial conditions to control potential instabilities down the line.

If the data is not fit for purpose

Sometimes the training data is simply not fit for purpose. For example, it may lack the properties the AI system needs to perform the task we are training it to do.

To use an extremely simplified example, imagine an AI tool for identifying "tall" and "short" people. Should a person who is 170cm tall be labeled as tall or short in the training data? If we label them as tall, what will the system return when it encounters someone who is 169.5cm tall? (Perhaps the best solution would be to add a "medium" label.)
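Here is a minimal sketch in plain Python (with arbitrary thresholds chosen purely for illustration) of that labeling dilemma: a hard cut-off at 170cm puts nearly identical people into opposite classes, while adding a "medium" band softens the boundary.

```python
# Minimal sketch: labeling heights with a hard cut-off versus adding a
# "medium" band. Thresholds are arbitrary and purely illustrative.

def label_two_way(height_cm):
    return "tall" if height_cm >= 170 else "short"

def label_three_way(height_cm):
    if height_cm >= 180:
        return "tall"
    if height_cm <= 160:
        return "short"
    return "medium"

for height in (169.5, 170.0):
    print(height, label_two_way(height), label_three_way(height))

# Under the two-way scheme, 169.5cm is "short" and 170cm is "tall" even
# though they differ by half a centimetre; both are "medium" under the
# three-way scheme.
```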

The above may seem trivial, but problems with data labeling or poor datasets can have serious consequences when an AI system is involved in medical diagnosis, for example.

Fixing this problem is not easy, as identifying the relevant information requires a great deal of knowledge and experience. Involving a subject-matter expert in the data collection process can be a great help, as they can guide developers on what kinds of data should be included in the first place.

As current and future users of AI and technology, it is important for all of us to be aware of these issues so we can take a broader view of AI and of the predictions it makes about various aspects of our lives.
