Where AI and disinformation meet


Graphic illustration of a human form leaning forward and clenching its fists with an open mouth. Various letters appear to come forth from the figure's mouth.

With the midterm elections just weeks away, the political vitriol and rhetoric are about to heat up.

One Arizona State University professor thinks most of the hyperbolic chatter will come from malicious bots spreading racism and hate on social media and in the comment sections of news sites.

Victor Benjamin, assistant professor of information systems at the W. P. Carey School of Business, has been researching this phenomenon for years. He says the next generation of AI is a reflection of what's going on in society. So far, it’s not looking good.

Benjamin says that as AI learning becomes increasingly dependent on public data sets, such as online conversations, it is vulnerable to influence from cyber adversaries injecting disinformation and social discord.

And these cyber adversaries aren’t only adding nasty posts on social media sites. They are swaying public opinion on issues such as presidential elections, public health and social tensions. Benjamin says that if this isn’t curbed, it can harm the health of online conversation and the technologies, such as AI, that depend on it.

ASU News spoke to Benjamin about his research and perspectives on trends in AI.

Editor's note: Answers have been edited for length and clarity.

Victor Benjamin

Question: Midterm elections are weeks away. What are you predicting regarding the online community and political rhetoric?

Answer: Unfortunately, it is a certainty that extreme perspectives on both ends of the political spectrum will be among the most frequently echoed in online discourse. Many messages will push fringe ideas and try to dehumanize the opposition. The goal of manipulating social media in this manner is generally to make these extreme perspectives seem popular.

Q: When did you start noticing this trend of social manipulation with AI?

A: Social manipulation on the internet has been around for a long time, but activity picked up with the 2016 presidential election. For example, some social platforms such as Facebook have been forthcoming in admitting that they allowed nation-states to purchase advertisements pushing hateful and incendiary messaging about social issues to American users. Further, the debate about masks and COVID-19 was largely fueled by cyber adversaries who played both sides. ... More recently, the anti-work movement has also seen destructive, demotivational messaging that encourages individuals to effectively give up and quit participating in society. We can expect even more dehumanizing and extremist messaging about varied social issues ahead of the upcoming election.

Q: Why is this happening and who’s behind it?

A: Much of this adversarial behavior is being driven by organizations and nation-states that may have a vested interest in seeing American society fracture and civilians demoralized into non-productivity. ... Social media and the internet give adversarial groups the power to directly target American citizenry like never before in history. This sort of activity is commonly recognized as a form of "fifth column warfare" in defense communities, in which a group of individuals tries to undermine a larger group from within.

Q: How does this impact future AI development? 

A: The impacts on future AI development are quite significant. Increasingly, to advance AI, research groups are utilizing public data sets, including social media data, to train AI systems so that they can learn and improve. For example, consider the autocomplete feature on phones and computers. This feature works by letting an AI see millions or even billions of example sentences, from which it learns the structure of language: which words frequently appear together, in what order, and more. After the AI learns the patterns of our language, it can use that knowledge to assist us with various language tasks, such as autocomplete.
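To make that learning step concrete, a heavily simplified sketch of this kind of pattern learning is a bigram model, which counts which words tend to follow which. This is not the architecture production autocomplete systems use (those are far larger neural models); the corpus and function names below are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigram_model(sentences):
    """Count, for each word, which words follow it and how often."""
    following = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            following[current_word][next_word] += 1
    return following

def autocomplete(model, word, k=3):
    """Suggest the k words that most often followed `word` in training."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# A toy corpus standing in for the millions of real sentences described above.
corpus = [
    "the weather is nice today",
    "the weather is terrible today",
    "the food is nice here",
]
model = train_bigram_model(corpus)
print(autocomplete(model, "is"))  # ['nice', 'terrible']
```

Real systems learn far subtler patterns, but the principle is the same: the suggestions are whatever the training text makes statistically likely.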

The problem arises when we consider what exactly the AI is learning when we feed it social media data. We have all seen the media headlines about tech companies releasing chatbots, only to pull them offline shortly after because the AI quickly went astray and developed extremist perspectives. We should ask ourselves: Why is this happening?

... This learned behavior by the AI is just a reflection of who we are as a society, or at least as dictated by online discourse. When cyber adversaries manipulate our social media to make Americans angry and demoralized, things tend to get said online that do not reflect the best of us. These conversations, despite being harmful, are eventually aggregated and fed into AI systems to learn from. The AI may then pick up some of those extremist views.
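Continuing the toy sketch above, one can see the mechanism Benjamin describes: if coordinated accounts flood the training text with the same hostile phrasing (the flood size and wording here are invented), the model's most likely completions shift with the corpus.

```python
# Continuing the sketch above: the same model, retrained on a corpus that a
# coordinated campaign has flooded with one repeated, hostile message
# (sanitized here to "awful"; the flood size is invented).
poisoned_corpus = corpus + ["the weather is awful today"] * 50
poisoned_model = train_bigram_model(poisoned_corpus)
print(autocomplete(poisoned_model, "is"))  # 'awful' now ranks first
```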

Q: What can be done to curb this current threat of social discord?

A: An obvious step in the right direction that I don’t see discussed enough is to show the metadata. Social media platforms hold all of this metadata but are rarely transparent about it. For example, in the case of Facebook advertisements pushing extreme social perspectives, Facebook knew who the advertiser was but never disclosed it to users. I suspect users would have reacted differently to those advertisements if they had known the advertiser was a foreign nation-state.

Further, regarding platforms like Twitter or Reddit, so much of the conversation that lands on the home page is driven by what is popular, not necessarily what is correct or truthful. These platforms should be more forthcoming about who is posting those messages and at what frequency, (as well as) whether the conversations are organic or appear manufactured, and so on. For example, if hundreds of social media accounts activate simultaneously out of nowhere to spread the same divisive messaging that did not exist previously, it is of course not organic, and the platforms should curb this content.
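Platforms do not publish their detection logic, but the pattern Benjamin describes, many accounts activating at once to post identical messaging, lends itself to a simple burst heuristic. The sketch below is purely illustrative; the thresholds, field names and sample data are invented.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated_bursts(posts, window_minutes=10, min_accounts=100):
    """Flag message texts posted near-simultaneously by many distinct accounts.

    `posts` is a list of (account_id, timestamp, text) tuples. The window
    and account thresholds are illustrative, not values any platform uses.
    """
    window = timedelta(minutes=window_minutes)
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))

    flagged = []
    for text, events in by_text.items():
        events.sort()  # order by timestamp
        start = 0
        for end in range(len(events)):
            # Shrink the window until it spans at most `window_minutes`.
            while events[end][0] - events[start][0] > window:
                start += 1
            distinct = {acct for _, acct in events[start:end + 1]}
            if len(distinct) >= min_accounts:
                flagged.append(text)
                break
    return flagged

# Hypothetical burst: 120 accounts posting the same text within two minutes.
now = datetime(2022, 10, 1, 12, 0)
burst = [(f"acct{i}", now + timedelta(seconds=i), "The vote is rigged!")
         for i in range(120)]
print(flag_coordinated_bursts(burst))  # ['the vote is rigged!']
```

A real system would also weigh account age, network structure and content similarity, but even this crude signal captures the "hundreds of accounts, same message, same moment" pattern in the example above.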

Beyond that, I think everyone needs to develop the right mindset about what the internet is today. ... Whenever we encounter unfamiliar information online, we should stop and think about what the source is, what the source’s motivations might be for sharing it, what the information is trying to get us to do, and so on. We need to think about how the systems and information we encounter try to bias our behavior and thinking.

Top photo courtesy iStock/Getty Images
