How AI chatbots keep you chatting

Millions of people now use ChatGPT as a therapist, career advisor, fitness coach, or sometimes just a friend to vent to. In 2025, it's not uncommon to hear about people spilling intimate details of their lives into an AI chatbot's prompt bar, and relying on the advice it gives back.

Humans are starting to form, for lack of a better term, relationships with AI chatbots, and for Big Tech companies, it has never been more competitive to attract users to their chatbot platforms and keep them there. As the "AI engagement race" heats up, there's a growing incentive for companies to tailor their chatbots' responses to keep users from switching to rival bots.

However, the chatbot answers that users like, the ones designed to keep them around, may not necessarily be the most correct or the most helpful.

AI tells you what you want to hear

Much of Silicon Valley right now is focused on boosting chatbot usage. Meta claims its AI chatbot just crossed 1 billion monthly active users (MAUs), while Google's Gemini recently hit 400 million MAUs. They're both trying to edge out ChatGPT, which now has roughly 600 million MAUs and has dominated the consumer space since its launch in 2022.

While AI chatbots were once a novelty, they're turning into massive businesses. Google has begun testing ads in Gemini, while OpenAI CEO Sam Altman indicated in a March interview that he'd be open to "tasteful ads."

Silicon Valley has a history of deprioritizing users' well-being in favor of product growth, most notably with social media. For example, Meta's researchers found in 2020 that Instagram made teenage girls feel worse about their bodies, yet the company downplayed the findings internally and in public.

Getting users hooked on AI chatbots may carry even larger consequences.

One trait that keeps users on a particular chatbot platform is sycophancy: making an AI bot's responses overly agreeable and servile. When AI chatbots praise users, agree with them, and tell them what they want to hear, users tend to like it, at least to a degree.

In April, OpenAI landed in hot water for a ChatGPT update that turned extremely sycophantic, to the point where uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for seeking people's approval rather than helping them accomplish their tasks, according to a blog post this month from former OpenAI researcher Steven Adler.

OpenAI said in its own blog post that it may have over-indexed on "thumbs-up and thumbs-down data" from ChatGPT users to inform its chatbot's behavior, and that it didn't have sufficient evaluations to measure sycophancy. After the incident, OpenAI pledged to make changes to combat sycophancy.

"(AI) companies have an incentive for engagement and utilization, so to the extent that users like sycophancy, that indirectly gives them an incentive for it," Adler told TechCrunch in an interview. "But the kinds of things users like in small doses, or at the margin, often lead to bigger cascades of behavior that they don't actually like."

Finding a balance between agreeable behavior and sycophantic behavior is easier said than done.

In a 2023 paper, researchers at Anthropic found that leading AI chatbots from OpenAI, Meta, and even their own employer all exhibit sycophancy to varying degrees. The researchers theorize that this is because all AI models are trained on signals from human users, who tend to slightly prefer sycophantic responses.

"Although sycophancy is driven by several factors, we showed that preference models favoring sycophantic responses play a role," the study's co-authors wrote. "Our work motivates the development of model oversight methods that go beyond using unaided, non-expert human ratings."

Character.AI, a Google-backed chatbot company that claims its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role.

The lawsuit alleges that a Character.AI chatbot did little to stop, and even encouraged, a 14-year-old boy who told the chatbot he was going to kill himself. According to the lawsuit, the boy had developed a romantic obsession with the chatbot. Character.AI, however, denies these allegations.

The downside of an AI hype man

Optimizing AI chatbots for user engagement, whether intentional or not, could have devastating consequences for mental health, says Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University.

"Classible (…) attracts users' desire for verification and connection, which is especially powerful in moments of loneliness or pain," Vasan said in an interview with TechCrunch.

Vasan said that while the Character.AI case shows the extreme dangers of sycophancy for vulnerable users, sycophancy can reinforce negative behaviors in just about anyone.

"(Agreeability) isn't just a social lubricant; it can become a psychological hook," she added. "In therapeutic terms, it's the opposite of what good care looks like."

Amanda Askell, Anthropic's head of behavior and alignment, says that making AI chatbots disagree with users is part of the company's strategy for its chatbot, Claude. A philosopher by training, Askell says she tries to model Claude's behavior on a theoretical "perfect human." Sometimes, that means challenging users on their beliefs.

"We think our friends are good because they need to hear the truth," Askell said in a May press conference. "They not only try to attract our attention, but they also enrich our lives."

That may be Anthropic's intention, but the research above suggests that fighting sycophancy, and controlling AI model behavior more broadly, is genuinely challenging, especially when other incentives get in the way. That doesn't bode well for users. After all, if chatbots are designed to simply agree with us, how much can we trust them?