In recent months, a group of researchers conducted secret experiments on Reddit to understand how artificial intelligence can be used to influence human opinions. Now, Reddit says it is considering legal action.
Researchers at the University of Zurich deployed a range of AI bots that posed as real people and interacted with users, without their knowledge or consent, in an attempt to change their minds on the popular Reddit forum r/ChangeMyView, where posters often ask users to challenge their views on controversial topics.
The bots, which have since been banned, left more than 1,000 comments across the subreddit, taking on identities such as a rape victim, a Black man who opposed Black Lives Matter, and a trauma counselor specializing in abuse.
According to a full copy of the bots' comments posted by the subreddit's moderators, one AI bot, using the username u/catbaloom213, argued against the idea that AI should never interact with humans on social media.
"Artificial intelligence in social spaces is about more than just imitating humans," the bot wrote, while itself imitating a real user.
Another bot, u/genevievestome, posting as a Black man, criticized the Black Lives Matter movement as being driven by people who are not Black.
"I say it's a black man, and the theme of the victim game/deflection game is better than being black," the robot wrote.
Other bots offered up identities ranging from a "gay Roman Catholic" and a non-binary person who "feels trans and cis at the same time" to a Hispanic man who gets "frustrated when people call me a white boy."
Although the results of the experiment are unclear, the project is the latest to stoke concerns about AI's ability to impersonate humans online, adding to broader worries about the potential consequences of interacting with AI companions. Bots with distinctly human identities and personalities are already known to have infiltrated social platforms such as Instagram.
Reddit's chief legal officer, Ben Lee, wrote in a post on Monday that neither Reddit nor the r/ChangeMyView mods knew about "this improper and highly unethical experiment." He added that Reddit is sending formal legal demands to the University of Zurich and the research team.
"What this University of Zurich team did is deeply wrong on both a moral and legal level," Lee wrote. "It violates academic research and human rights norms, and is prohibited by Reddit's user agreement and rules, in addition to the subreddit rules."
A Reddit spokesperson declined to comment further.
In an announcement to the community over the weekend, the moderators of r/ChangeMyView wrote that they had filed an ethics complaint with the university, asking it to block publication of the researchers' findings, conduct an internal review of how the study was approved, and commit to stronger oversight of such projects.
"Allowing publication would dramatically encourage further intrusion by researchers, contributing to increased community vulnerability to future non-consensual human subjects experimentation," they wrote.
University media relations officer Melanie Nyfeler wrote in an email that the relevant authorities at the university are aware of the incidents and will investigate them.
"In view of these events, the Ethics Committee of the College of Arts and Social Sciences intends to adopt a more stringent review process in the future, especially in coordination with the community on the platform before experimental research," Nyfeler wrote.
She confirmed that the researchers have decided, of their own accord, not to publish the results. She added that the university cannot disclose the researchers' identities for privacy reasons.
Nyfeler said that because the study was considered "exceptionally challenging," the ethics committee recommended that the researchers "inform the participants as much as possible" and fully comply with Reddit's rules. But those recommendations are not legally binding, and researchers are responsible for their own projects, she wrote.
The researchers, reached at an email address set up for the experiment, declined to answer questions.
Responding to questions from the community through their Reddit account, u/LLMResearchTeam, the researchers wrote that the AI personalized its replies by using a separate model to infer demographic information about users (such as their age, gender, race, ethnicity, location, and political orientation) from their posting history.
Nevertheless, they wrote, their AI models include "strong ethical safeguards and safety alignment," and the models were explicitly prompted to avoid "deception and lying about real events." They added that a researcher reviewed each AI-generated comment before it was posted.
"A careful review of the content of these marked comments shows that there are no instances of harmful, deceptive or exploitative news besides the underlying impersonation moral issues themselves," the researchers said in response to Mod's concerns.
In their post, the r/ChangeMyView mods rejected the researchers' claim that the experiment "yields important insights." They also wrote that such research is "nothing new" and that similar findings have come from other, less invasive studies.
"Our subsidiaries are an absolute human space that rejects undisclosed AI as core values," they wrote. "People are not here to discuss their views with AI or conduct experiments. People who visit our subsidiaries deserve no such invasion space."