On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed "Cellarius," it warned that an encroaching "mechanical kingdom" would soon bring humanity under its yoke. "The machines are gaining ground upon us," the author wrote, distressed by the breakneck pace of industrialization and technological development: "day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life." We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that contains one of the first discussions of artificial intelligence in the English language.
Butler's "Machinery Kingdom" is no longer assumption today, at least according to technical journalist Karen Hao, he prefers the term empire. Her new book, AI Empire: Dreams and Nightmare in Sam Altman's Openaipart of Silicon Valley Exposé, is investigative news for global businesses, involving the labor of building and training large language models such as Chatgpt. It joins another recently released book -AI CON: How to Fight Big Tech’s Hype and Create the Future We Wantrevealed by linguist Emily M. Bender and sociologist Alex Hanna, the fluffy most of the AI business. Both of these works implicitly and explicitly imply that the foundation of the AI industry is a scam.
To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking, and soon feeling, machines. Altman has bragged about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us."
These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
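To make that mechanism concrete, consider the toy sketch below, my own illustration rather than anything drawn from these books. Real LLMs are vast neural networks trained on subword tokens, not simple word tables, but the underlying task is the same: tally which items tend to follow which, then sample the next one. The miniature corpus and the bigram approach here are illustrative assumptions.

```python
# A minimal sketch of next-word prediction, assuming a toy bigram model.
# Production LLMs use deep neural networks over subword tokens, but the
# core task is the same: estimate what is likely to follow, then sample.
import random
from collections import Counter, defaultdict

# A tiny illustrative corpus (adapted from Butler's 1863 letter).
corpus = (
    "the machines are gaining ground upon us "
    "day by day we are becoming more subservient to the machines"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: no understanding, just conditional frequency.
word = "the"
generated = [word]
for _ in range(8):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```

The output can look fluent, yet nothing in the program knows what a machine or a yoke is; it is frequency, not thought.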
Many people, however, have no grasp of how large language models work, of what their limits are, and, crucially, of the fact that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate: understandably, because of the misleading ways the technology's biggest champions describe it, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats, the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
Few phenomena demonstrate the perils of AI illiteracy as well as "ChatGPT-induced psychosis," the subject of a recent Rolling Stone article about the growing number of people who have come to regard their LLM as a sapient spiritual guide. Some users have begun to believe that the chatbot they're interacting with is a god ("ChatGPT Jesus," as one man whose wife fell prey to LLM-inspired delusions put it), while others, encouraged by the artificial intelligence itself, are convinced that they are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner "spiral starchild" and "river walker" in interactions that moved him to tears. "He started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God, and then that he himself was God," she said.
Although we can't know what state of mind these people were in before they ever prompted a large language model, the story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around a machine that produces language, and regurgitates knowledge, without having humanlike intelligence. The authors observe that large language models exploit the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text."
Several other AI-related social problems, likewise rooted in this misunderstanding, are also imminent. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist; it's more qualified than any human."
Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to now selling the idea that Meta will provide you with AI friends to replace the human ones you have lost in our alienated age. The cognitive-robotics professor Tony Prescott has asserted that "in an age when many people describe their lives as lonely, there may be real value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalized." That the very point of friendship is that it is not personalized, that friends are humans whose interior lives we have to consider and reciprocally negotiate rather than mere vessels for our own self-actualization, does not seem to occur to him.
The same flawed logic leads Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI "dating concierge" that will interact with other users' concierges until the chatbots find a good match. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for "AI girlfriends."
Although each of these AI services is meant to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's "tradition of anthropomorphizing": talking about LLMs as though they contain humanlike minds, and selling them to the public on that basis. Over the past 30 years, one world-transforming Silicon Valley technology after another has been pitched as a way to increase human happiness, connection, and self-understanding, in theory, only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling labor, sometimes performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya who, according to Hao's reporting, was tasked with helping to improve ChatGPT by sorting through posts describing horrifying acts ("parents raping their children, kids having sex with animals"). "These two features of technology revolutions," Hao writes, "their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable, are perhaps truer than ever" in the age of artificial intelligence.
The good news: Nothing about this is inevitable. According to a study released by the Pew Research Center in April, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults agree. If many Americans don't quite understand how artificial "intelligence" works, they also don't trust it. That wariness, no doubt fed by the latest examples of Silicon Valley con artistry, is worth building on. So is an insight from the Rolling Stone article: The teacher it quotes, whose partner fell into AI-induced delusions, said his condition began to improve when she explained to him that his chatbot was "talking to him as if he is the next messiah" only because of a faulty software update that had made ChatGPT excessively sycophantic. If people understand what large language models are and are not, what they can and cannot do, and what work, interactions, and parts of life they should (and should not) replace, they may be spared the worst of the consequences.
When you buy a book using a link on this page, we receive a commission. Thank you for supporting The Atlantic.