
The dangers of voice fraud: We cannot know what we cannot see

It's hard to believe deepfakes have been around long enough that we barely bat an eyelid at a brand-new case of identity manipulation. But it hasn't been so long that we've forgotten.

In 2018, a deepfake video showing Barack Obama saying words he never uttered sent the web into a frenzy and sparked concern among US lawmakers, who warned of a future in which AI could manipulate elections or spread misinformation.

In 2019, a notorious manipulated video of Nancy Pelosi went viral on social media. The video was subtly altered to make her speech sound slurred and her movements sluggish, suggesting ineptitude or intoxication during an official speech.

In 2020, deepfake videos were used to escalate political tensions between China and India.

And I don't even want to get into the hundreds, if not thousands, of celebrity videos that have circulated on the internet in recent years, from the Taylor Swift porn scandal to Mark Zuckerberg's sinister speech about the power of Facebook.

Yet despite these concerns, a more subtle and potentially more deceptive threat looms: voice fraud. At the risk of sounding like a doomsayer, it may well prove to be the nail that seals the coffin.

The invisible problem

In contrast to high-resolution video, the typical transmission quality of audio, especially in telephone conversations, is remarkably low.

We've become desensitized to low-fidelity audio, from weak signal to background noise to distortion, which makes it incredibly difficult to detect a genuine anomaly.

The inherent imperfections of audio give voice manipulation natural cover. A slightly robotic tone or a noise-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also extremely insidious.

Imagine getting a call from a loved one's number telling you they're in trouble and asking for help. The voice might sound a little odd, but you put it down to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. And therein lies the danger: voice fraud exploits our willingness to ignore the small variations in sound that are common in everyday phone use.

Videos, on the other hand, offer visual clues. Small details like the hairline or facial expressions give the human eye telltale signs that even the most sophisticated fraudsters cannot fully conceal.

Those signals aren't available on a voice call. That's one reason most wireless carriers, including T-Mobile and Verizon, offer free services to block, or at least identify and warn about, suspected scam calls.

The urgency to validate everything and anything

One consequence of all this is that people will reflexively check the validity of a source or the origin of information. And that's a good thing.

Society will regain trust in established institutions. Despite pressure to discredit traditional media, people will place even more trust in vetted outlets such as C-SPAN. In contrast, they may grow increasingly skeptical of social media chatter and of lesser-known media or platforms without an established reputation.

On a personal level, people will become more cautious about incoming calls from unknown or unexpected numbers. The old excuse of “I’m just borrowing a friend’s phone” will hold much less water, as the risk of voice fraud makes us wary of unverified claims. The same goes for caller ID or a trusted mutual connection. As a result, individuals may be more inclined to use and trust services that offer secure, encrypted voice communication in which the identity of each party can be clearly confirmed.

And technology will improve and, hopefully, help. Verification technologies and practices will evolve significantly. Techniques such as multi-factor authentication (MFA) for voice calls and the use of blockchain to verify the origin of digital communications will become standard. Likewise, practices such as verbal passcodes or callback verification could become routine, especially where sensitive information or transactions are involved; a minimal sketch of the verbal-passcode idea follows below.
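To make the verbal-passcode idea concrete, here is a minimal Python sketch. It assumes the code is shared over a separate, already-trusted channel (for example, an authenticated app), and the helper names issue_passcode and verify_passcode are invented for illustration, not part of any real carrier or vendor API.

import hmac
import secrets

# session_id -> expected passcode (each code is single-use)
_active_codes: dict[str, str] = {}

def issue_passcode(session_id: str, words: int = 3) -> str:
    """Generate a short one-time verbal passcode for a call session."""
    wordlist = ["amber", "delta", "falcon", "harbor", "lunar",
                "onyx", "quartz", "raven", "summit", "tundra"]
    code = "-".join(secrets.choice(wordlist) for _ in range(words))
    _active_codes[session_id] = code
    return code

def verify_passcode(session_id: str, spoken: str) -> bool:
    """Check the code the caller read back; each code works only once."""
    expected = _active_codes.pop(session_id, None)
    if expected is None:
        return False
    # Constant-time comparison avoids leaking partial matches.
    return hmac.compare_digest(expected.lower(), spoken.strip().lower())

# Usage: the callee issues a code through a trusted channel, then asks
# the caller to repeat it before discussing anything sensitive.
code = issue_passcode("call-123")
print("Share out of band:", code)
print("Caller verified:", verify_passcode("call-123", code))

The point of the design is the single-use, out-of-band secret: even a perfectly cloned voice cannot read back a code it was never given.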

MFA isn't just technology

But MFA isn't just about technology. Effectively combating voice fraud requires a combination of education, caution, business practices, technology, and government regulation.

For individuals: it is essential to be extra cautious. Remember that your family members' voices may already have been recorded, and possibly cloned. Pay attention, ask questions, and listen closely.

Businesses have a responsibility to develop reliable methods for consumers to verify that they're communicating with legitimate representatives. In general, that responsibility can't be shifted. And in certain jurisdictions, a financial institution may be at least partially liable for fraudulent activity on customer accounts. The same applies to any business or media platform you interact with.

The government must continue to make it easier for technology companies to innovate. And it must continue to pass laws that protect people's right to safety online.

It takes a whole village, but it is possible.
