
Manipulation agents (the true AI risk)

Our lives will soon be filled with conversational AI agents designed to assist us at every turn, anticipating our wants and needs so they can feed us tailored information and perform useful tasks on our behalf. To do this, they will draw on an extensive store of personal data about our individual interests and hobbies, backgrounds and aspirations, personality traits and political views – all with the goal of making our lives “more convenient.”

These agents will be extremely capable. Just this week, OpenAI released GPT-4o, its next-generation chatbot that can read human emotions. It does this not only by detecting sentiment in the text you write, but also by assessing the tone of your voice (when speaking through a microphone) and by reading your facial expressions (when interacting via video).

This is the future of computing, and it’s coming fast

Also this week, Google announced Project Astra – short for “Advanced Seeing and Talking Responsive Agent.” The goal is to deploy an assistive AI that can converse with you while understanding what it sees and hears in your surroundings, enabling interactive guidance and assistance in real time.

And just last week, OpenAI’s Sam Altman told MIT Technology Review that the killer app for AI is assistive agents. In fact, he predicted that everyone will want a personalized AI agent that acts as a “super-competent colleague that knows absolutely everything about my entire life, every email, every conversation I’ve ever had,” capturing and analyzing everything so that it can take useful actions on your behalf.

What could possibly go wrong?

As I wrote here on VentureBeat last year, there is a significant risk that AI agents will be misused in ways that impair human agency. In fact, I believe targeted manipulation is the single most dangerous AI threat in the near future, especially when these agents are integrated into mobile devices. After all, mobile devices are the gateway to our digital lives, from the news and opinions we consume to every email, phone call and text message we receive. These agents will monitor our flow of information, learning intimate details about our lives while filtering the content that reaches our eyes.

Any system that monitors our lives and mediates the information we receive is a vehicle for interactive manipulation. Making this even more dangerous, these AI agents will use the cameras and microphones of our mobile devices in real time. This ability (enabled by multimodal large language models) makes the agents extremely useful: they can respond to the sights and sounds around you without you having to ask for guidance. The same ability could also be used to trigger targeted influence that matches the precise activity or situation you are in.

To many people, this level of tracking and intervention sounds creepy, and yet I expect they will embrace the technology. After all, these agents are designed to improve our lives. They will whisper in our ears as we go about our daily routines, making sure we don’t forget to pick up our laundry as we walk down the street and tutoring us as we learn new skills. They will even coach us in social situations so we seem smarter, funnier or more confident.

This will become an arms race among technology companies to augment our mental abilities in the most impactful ways possible. And anyone who chooses not to use these features will quickly feel disadvantaged. Eventually, it won’t even feel like a choice anymore. This is why I frequently predict that adoption will be extremely rapid, becoming ubiquitous by 2030.

So why not embrace them?

As I wrote in my new book, Our Next Reality, these AI assistants will give us mental superpowers, but we must remember that they are products designed for profit. And by using them, we will be allowing corporations to whisper in our ears (and soon flash images before our eyes) that guide, coach, educate, caution and prod us throughout our days. In other words, we will be allowing AI agents to influence our thoughts and steer our behaviors. Used for good, this could be an amazing form of empowerment, but if abused, it could easily become the ultimate tool of persuasion.

That brings me to the “AI manipulation problem”: the fact that targeted influence delivered through conversational agents is potentially far more effective than traditional content. If you want to understand why, just find a skilled salesperson. They know the best way to persuade someone to buy a product or service (even one they don’t need) is not to hand them a brochure, but to engage them in dialogue. A good salesperson will start with friendly banter to “size you up” and lower your defenses. They will then ask questions to surface any reservations you may have. And finally, they will customize their pitch to overcome your concerns, using carefully chosen arguments that best play on your needs or insecurities.

The reason AI manipulation is such a significant risk is that AI agents will soon be able to pitch us interactively, and they will be significantly more skilled than any human salesperson (see video example below).

Not only will these agents be trained in sales tactics, behavioral psychology, cognitive biases and other tools of persuasion, but they will also be armed with far more information about us than any salesperson.

In fact, if the agent is your “personal assistant,” it may come to know more about you than any human ever has. (For a depiction of AI assistants in the near future, see my 2021 short story Metaverse 2030.)

From a technical perspective, the manipulative danger of AI agents can be summarized in two simple words: “feedback control.” That’s because a conversational agent can be given an “influence objective” and then work interactively to optimize the impact of that influence on a human user. It can do this by making a point, reading your reaction in your words, your vocal inflections and your facial expressions, and then adapting its influence tactics (both its wording and its strategic approach) to overcome objections and persuade you of whatever it was asked to convey.
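
To make that loop concrete, here is a minimal sketch of such a feedback-control cycle in Python. Everything in it is hypothetical and simplified: the class names, the stubbed message generator and the simulated user reaction are all invented for illustration, standing in for what would really be an LLM reading multimodal signals from a live conversation.

```python
# Minimal sketch of a feedback-control loop over an "influence objective."
# All names here are hypothetical; no real agent API is implied.
from dataclasses import dataclass, field
from typing import Optional
import random

@dataclass
class UserReaction:
    """Signals an agent might infer from words, voice and face."""
    sentiment: float            # -1.0 (resistant) .. 1.0 (receptive)
    objection: Optional[str]    # e.g. "too expensive", or None

@dataclass
class InfluenceAgent:
    objective: str                      # e.g. "persuade user to subscribe"
    tactic: str = "friendly_banter"
    history: list = field(default_factory=list)

    def generate_message(self) -> str:
        # A real agent would call an LLM conditioned on the objective,
        # current tactic and conversation history; we just format a string.
        return f"[{self.tactic}] message steering toward: {self.objective}"

    def observe(self, reaction: UserReaction) -> None:
        self.history.append(reaction)

    def adapt(self) -> None:
        # The feedback-control step: adjust tactics based on the measured
        # reaction, closing the loop on the influence objective.
        last = self.history[-1]
        if last.objection:
            self.tactic = f"counter_objection:{last.objection}"
        elif last.sentiment < 0:
            self.tactic = "rebuild_rapport"
        else:
            self.tactic = "press_toward_commitment"

def simulate_user(message: str) -> UserReaction:
    # Stand-in for the human target; returns a random reaction.
    return UserReaction(
        sentiment=random.uniform(-1, 1),
        objection=random.choice([None, "too expensive", "don't trust it"]),
    )

agent = InfluenceAgent(objective="persuade user to subscribe")
for turn in range(3):
    msg = agent.generate_message()
    print(msg)
    agent.observe(simulate_user(msg))
    agent.adapt()
```

The point of the sketch is the shape of the loop, not the stub logic: each turn measures the user’s state and steers the next message toward the objective, exactly like any other closed-loop controller.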

Conceptually, this human manipulation control system is not much different from the control systems used in heat-seeking missiles. A missile locks onto an aircraft’s heat signature and corrects its course in real time whenever it drifts off target, homing in until it hits its goal. Unless regulated, conversational agents will be able to do the same, except the missile is a vehicle of influence and the target is you. And when that influence is misinformation, disinformation or propaganda, the danger is extreme. For these reasons, regulators need to severely restrict targeted interactive influence.

But are these technologies coming soon?

I am confident that conversational agents will impact all of our lives within the next two to three years. After all, Meta, Google and Apple have all made announcements pointing in this direction. For example, Meta recently released a new version of its Ray-Ban glasses powered by an AI that can process video from the onboard cameras, giving you guidance about the objects it can see around you. Apple is also pushing in this direction, announcing a multimodal LLM that could give Siri eyes and ears.

As I’ve written here on VentureBeat, I believe most high-end earbuds will soon include cameras so AI agents can always see what we’re looking at. Once these products are available to consumers, adoption will be rapid. They will simply be that useful.

Whether you are excited about it or not, the fact is that big tech companies are racing to put artificial agents in our ears (and soon our eyes) so they can guide us everywhere we go. There are very positive uses of these technologies that will improve our lives. At the same time, these same superpowers could just as easily be deployed as agents of manipulation.

How do we deal with this? I strongly believe regulators need to take swift action in this area, ensuring that positive uses are not hindered while the public is protected from abuse. The first big step would be a ban (or very strict restrictions) on interactive conversational advertising. This is essentially the “gateway drug” to conversational propaganda and misinformation. Now is the time for policymakers to address it.
