In the not-too-distant future, most of the information people consume on the internet will be influenced by artificial intelligence, a Northeastern expert says.
And while it is impossible to slow the use of AI, it is crucial to understand AI’s limits—both what it cannot and should not do—and to adopt ethical norms for its development and deployment, says John Wihbey, an associate professor of media innovation and technology.
If not, democracy is in jeopardy, Wihbey says.
Democracy today, he says, is a complex system of people collectively processing information to resolve problems. Knowledge and information that the public consumes play a key role in supporting democratic life.
Chatbots can simulate human conversation and perform routine tasks effectively, while AI agents are autonomous systems that complete tasks, such as resolving customer requests, without human intervention. Both, Wihbey says, might soon replace humans in information fields such as journalism, social media moderation and polling.
“As AI systems begin to create public narratives and begin to moderate and control public knowledge,” Wihbey says, “there could be a kind of lock-in in terms of the understanding of the world.”
AI and large language models are trained on and generate content based on past data about people’s values and interests. They will continuously reinforce past ideas and preferences, Wihbey says, creating feedback loops and echo chambers.
This risk of feedback loops, he says, recurs across each of these information fields.
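The feedback-loop dynamic Wihbey describes can be illustrated with a toy simulation (this sketch is not from the article; the two topics, the 0.8/0.2 mixing weights and the 0.1 amplification factor are all invented for illustration). A recommender trained only on past engagement over-serves whatever was already popular, and audience preferences drift toward what they are shown, so a modest initial majority hardens over time:

```python
# Toy sketch of a preference feedback loop (hypothetical parameters).
# Initial audience interest in two topics, A and B: 55% vs. 45%.
shares = {"A": 0.55, "B": 0.45}

for _ in range(10):
    # The model amplifies whichever topic is currently dominant.
    boost = 0.1 if shares["A"] > shares["B"] else -0.1
    served_a = min(1.0, max(0.0, shares["A"] + boost))
    # Users' measured preferences drift toward what they were served.
    shares["A"] = 0.8 * shares["A"] + 0.2 * served_a
    shares["B"] = 1.0 - shares["A"]

# A 55/45 split has grown into a 75/25 split after ten rounds.
print(round(shares["A"], 3))  # → 0.75
```

Because each round's output becomes the next round's training signal, the drift compounds rather than correcting itself, which is the "lock-in" dynamic described above.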
In journalism, Wihbey says, AI might be further incorporated into newsrooms to discover and verify information, categorize content, conduct large-scale analysis of social media and even generate automated coverage of events, including civic and government meetings.
Entire municipalities or larger regions, so-called news deserts, might end up being covered by AI agents, he says.
On social media, AI moderators whose judgment is conditioned by outdated data and out of step with current human preferences, Wihbey says, might overmoderate, erasing users' posts and commentary in what is a vital space for modern human deliberation.
If these AI moderators can't keep up with the fast-changing environment of human contexts, they too may fall into feedback loops. Their actions will shape what becomes public knowledge, that is, what humans believe to be true and worthy of attention.
AI-driven simulations in polling could distort results, affecting citizens’ conclusions. Such warped knowledge will repeatedly influence human preferences and decisions in democratic space—for example, what people believe in or who they may vote for—creating recursive spirals.
AI models, Wihbey says, are intrinsically unable to accurately predict the public's reaction to events or the outcome of an election.
“Some of the research about how AI can serve to simulate human opinion polls shows that this is true where data is not well established in the model yet,” he says. “In political and social life, so much of what is important is fundamentally emergent.”
Northeastern University
This story is republished courtesy of Northeastern Global News news.northeastern.edu.
Citation: Can AI-generated content be a threat to democracy? (2024, June 6). Retrieved 6 June 2024 from https://techxplore.com/news/2024-06-ai-generated-content-threat-democracy.html