
AI is here – and everywhere: 3 AI researchers look at the challenges ahead in 2024

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Whether the narrative was that AI would save the world or destroy it, it often felt as if visions of what AI might someday be overwhelmed the current reality. And though I believe anticipating future harms is a critical component of overcoming ethical debt in technology, getting too caught up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped through explicit choices. But taking control requires a better understanding of this technology.

One of the most important AI debates of 2023 revolved around the role of ChatGPT and similar chatbots in education. This time last year, major headlines focused on how students might use it to cheat and how educators were trying to stop them from doing so – often in ways that do more harm than good.

Over the course of the year, however, it became clear that failing to teach students about AI could put them at a disadvantage, and many schools lifted their bans. I don't think we should redesign education to put AI at the center of everything, but if students don't learn how AI works, they won't understand its limitations – and therefore won't understand how it is and isn't useful and appropriate to use. This doesn't just apply to students: the better people understand how AI works, the more empowered they are to use it and to critique it.

My prediction, or perhaps my hope, for 2024 is that there will be a huge push for AI literacy. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are "often sufficient to dazzle even the most experienced observer," but that their magic crumbles away once "their inner workings are explained in language sufficiently plain to induce understanding." The challenge with generative AI is that, unlike ELIZA's very basic pattern-matching and substitution method, it is much harder to find language "sufficiently plain" to make the AI magic crumble.
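To see why ELIZA's magic crumbles so easily once explained, consider a minimal sketch of its pattern-matching and substitution approach. The rules and responses below are hypothetical stand-ins; Weizenbaum's actual program used more elaborate scripts with keyword ranking and pronoun reflection, but the core idea was this simple:

```python
import re

# Hypothetical ELIZA-style rules: each pairs a regex pattern with a
# response template that substitutes in the user's own captured words.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]


def respond(utterance: str) -> str:
    """Return the first matching rule's response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Substitute the captured phrase back into the template,
            # dropping trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."


print(respond("I feel anxious about AI."))
```

Once you see that the "conversation" is nothing but regex capture and string substitution, the illusion of understanding disappears; no comparably plain explanation exists for a multibillion-parameter generative model.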

I think this is achievable. I hope that universities rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope the media helps cut through the hype. I hope everyone reflects on their own use of this technology and its consequences. And I hope tech companies listen to informed criticism as they consider which choices will continue to shape the future.

Many of the challenges in the coming year have to do with AI problems society is already facing.

Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine: "In three to eight years we will have a machine with the general intelligence of an average human being." With the singularity – the moment when artificial intelligence matches and then begins to exceed human intelligence – still not here, it's safe to say that Minsky was off by at least a factor of 10. It's perilous to make predictions about AI.

Still, making predictions for a single year doesn't seem quite as risky. What can be expected of AI in 2024? First: the race is on! Progress in AI has been steady since the days of Minsky's heyday, but the public release of ChatGPT in 2022 set off an all-out competition for profit, fame and global supremacy. Expect more powerful AI, along with a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles heel of deep learning – what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural network algorithms be enough, or is a fundamentally different approach needed, as neuroscientist Gary Marcus suggests? Legions of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are also likely to bring new problems. You might soon start hearing about AI chatbots and assistants talking to one another, holding entire conversations on your behalf but behind your back. Some of it will go awry – comically, tragically, or both. Deepfakes – AI-generated images and videos that are difficult to detect – are likely to become widespread despite nascent regulation, causing even greater harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn't have been possible even five years ago.

Speaking of problems: the very people sounding the loudest alarms about AI – such as Elon Musk and Sam Altman – can't seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They are like arsonists calling in the blaze they started themselves, begging the authorities to restrain them. Along those lines, what I most hope for 2024 – even if it seems slow in coming – is stronger AI regulation at the national and international levels.

Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since ChatGPT's launch, the development of generative AI models has continued at a dizzying pace. Unlike ChatGPT a year ago, which took text prompts as inputs and produced text as output, the new class of generative AI models is trained multimodally, meaning the data used to train them comes not only from text sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multimodal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight and open-source LLMs could lead to a world of autonomous AI agents – a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated and AI-generated content, as well as introduce new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world in which malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overwhelm critical functions such as fact-checking, information literacy, and the serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, privacy infringements and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policies for disclosing AI-generated content, there is a need for greater scrutiny of algorithmic harms by agencies such as the FTC and by lawmakers working on privacy protections such as the American Data Privacy & Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology, but to consider the contexts in which algorithms operate: people, processes and society.
