
Generative AI like ChatGPT could help strengthen democracy – if it overcomes vital hurdles

The dawn of artificial intelligence systems that can be used by almost anyone, like ChatGPT, has begun to revolutionize business and alarm politicians and the public.

Advanced technologies can feel like unstoppable forces shaping society. But a key insight from scholars of the philosophy and history of technology is that humans can actually exercise a measure of control over how and where we use these tools.

For us, as political scientists, this new technology offers some exciting opportunities to enhance democratic processes, such as expanding civic knowledge and facilitating communication with elected officials – provided key challenges are addressed. We have begun exploring how this might occur.

Increasing civic knowledge

Politics can feel incredibly complicated, with emotionally charged negative advertising campaigns and political winds that appear to shift almost every day. Many cities, states and countries offer little to no information to inform the general public about political issues, political candidates or ballot referendums. Even when residents have the chance to exercise their democratic freedoms, they may not feel well-informed enough to do so.

Generative AI could help. Building on platforms like isidewith.com, politicalcompass.org and theadvocates.org, AI could help people answer questions about their core beliefs or political positions and then help them determine which political candidates, parties or ballot decisions best fit their views.

Existing websites like Ballotpedia, Vote Smart and Vote411 have made tremendous progress in providing voters with vital information such as ballots, polling locations and candidate positions. However, these websites can be difficult to navigate. AI technologies could potentially provide improved services at the local, state, regional, national and international levels. These systems may be able to offer constantly updated information on candidates and policy issues through automation.

AI chatbots could also encourage people to think interactively through complex issues, learn new skills and determine their political stances while providing relevant news and facts.

However, generative AI systems are currently not able to answer democracy-related questions reliably and without bias. Large language models generate text based on the statistical frequency of words in their training data, regardless of whether the resulting statements are fact or fiction.

For example, AI systems can hallucinate by fabricating nonexistent politicians or generating inaccurate candidate positions. These systems also appear to produce politically biased output. And the rules for protecting user privacy and compensating the individuals or organizations whose data these systems use are still unclear.

There is still much to understand and address before generative AI is ready to strengthen democracy.

Facilitating communication with elected officials

One area that deserves exploration: Could generative AI help voters communicate with their elected representatives?

Contacting a politician can be intimidating, and many Americans may not even know where to begin. Survey research shows that fewer than half of Americans can name the three branches of government. It is even rarer to know the names of one's own representatives, let alone to get in touch with them. For example, in 2018 only 23% of respondents in one Pew Research Center survey said they had contacted an elected official in the past year, even at a time of major developments in national politics.

To encourage greater outreach to legislators, generative AI could not only help residents identify their elected officials but even compose detailed letters or emails to them.

We explored this concept in a recent study we conducted as part of our work at the Governance and Responsible AI Lab at Purdue University. We ran a survey of American adults in June 2023 and found that 99% of respondents had at least heard of generative AI systems like ChatGPT and 68% had personally tried them. However, 50% also said they had never contacted one of their elected political representatives.

As part of the survey, we showed some respondents an example of a message written by ChatGPT to a state legislator about an education funding bill. Other respondents, the control group, saw the same example email, but with no indication that it was written by AI.

Survey respondents who learned about this possible use of AI were significantly more likely than the control group to support using AI to communicate with politicians, both by individuals and by interest groups. We expected that people who supported using this new technology would also tend to contact politicians more often and see AI as making that process easier. But we found that this was not true.

Nevertheless, we recognized an opportunity. For example, public interest groups could use AI to enhance mass advocacy campaigns by helping residents more easily personalize emails to politicians. If they can ensure that AI-generated messages accurately and legitimately reflect residents' views, many more people who have not contacted their politicians in the past might consider doing so.

However, there is a risk that politicians will be skeptical of communications they believe were written by AI.

In-person voter events, like this one in 2021 with U.S. Rep. Katie Porter of California, help elected officials and the people they serve connect.
Robert Gauthier/Los Angeles Times via Getty Images

Maintaining authenticity and the human touch

One of the biggest drawbacks of using generative AI for political communication is that it can make recipients suspect that they are not actually in conversation with a real human. To test this possibility, we warned some of the people who took part in our surveys that sending mass AI-generated messages could cause politicians to doubt whether the messages were authentically created by humans.

We found that, compared with those in the control group, these individuals believed that lawmakers would pay less attention to such emails and that the emails would be less effective in influencing policymakers' opinions or decisions.

Remarkably, however, these people still supported using generative AI in political communication. A possible explanation for this finding is the so-called "trust paradox" of AI: even when people think AI is untrustworthy, they often still support its use. They may do so because they believe future versions of the technology will be better or because they lack effective alternatives.

So far, our early research into the impact of generative AI on political communication reveals some important insights.

First, even with supposedly easy-to-use AI tools, politics remains out of reach for many of those who have historically lacked opportunities to share their thoughts with politicians. We even found that survey respondents with higher baseline trust in government, or who had previous contact with government, were less likely to support using AI in this context, perhaps to maintain their existing influence in government. Therefore, greater availability of AI tools may not mean more equal access to politicians unless these tools are carefully designed.

Second, given the importance of human contact and authenticity, a key challenge is to harness the power of AI while maintaining the human touch in politics. While generative AI could improve aspects of politics, we should not be too quick to automate the relationships that underlie our social fabric.
