The Great Language Flattening - The Atlantic

In at least one crucial way, AI has already won its campaign for global dominance. An unbelievable volume of synthetic prose is published every moment of every day: machine-written news articles, text messages, emails, search results, customer-service chats, even scientific research.

Chatbots learned from human writing. Now the influence may run in the other direction. Some have hypothesized that the proliferation of generative-AI tools such as ChatGPT will seep into human communication, and that the terse language we use when prompting a chatbot may lead us to drop the niceties and writerly flourishes when corresponding with friends and colleagues. But there are other possibilities. Jeremy Nguyen, a senior researcher at Swinburne University of Technology, in Australia, ran an experiment last year to see how exposure to AI-generated text might change the way people write. He and his colleagues asked 320 people to write a post advertising a sofa for sale on a secondhand marketplace. Afterward, the researchers showed participants what ChatGPT had produced in response to the same prompt and asked them to do the task again. The responses changed dramatically.

“We didn’t say, ‘Hey, try to make it better, or more like GPT,’” Nguyen told me. Yet “more like GPT” is essentially what happened: After seeing the AI-generated text, participants wrote at greater length, drafting an average of 87 words compared with their average of 32.7 in the first round. The full results of the experiment have not been published or peer-reviewed, but the finding is intriguing. Text generators tend to write at length even when the prompt is brief, and people may be influenced by that expansive style more than by the clipped language they themselves type into a chatbot.

Text written with AI is baked into software that millions of people use every day. Even if you don’t use ChatGPT, Gemini, Claude, or any other popular text-generation tool, you will inevitably be on the receiving end of emails, documents, and marketing materials that were composed with their assistance. Gmail offers some users an integrated AI tool that starts drafting replies before a finger hits the keyboard. Last year, Apple launched Apple Intelligence, which brings AI features to the Mac, iPhone, and iPad, including cross-app writing help and a “smart reply” function in the Mail app. Writing on the internet is now more likely than it was a year or two ago to be a hybrid product, the result of a human using AI at the drafting or refining stage and then making subtle adjustments. “So it could be laundered, in a way,” Emily M. Bender, a professor of computational linguistics at the University of Washington, told me.

Bender, a prominent AI critic who co-wrote the well-known “stochastic parrots” paper, does not use AI text generators, for ethical reasons. “I’m not interested in reading something that no one wrote,” she told me. The trouble, of course, is that it is getting harder to know when something was written by AI. People are sensitive to patterns in language: You may have noticed that you switch accents or word choices depending on whom you’re talking to. But “how we process those patterns depends a lot on how we perceive whoever is talking,” Bender said. You may not be moved to mimic an AI, but you may be more likely to pick up its linguistic quirks if they seem to come from a source you respect. Interacting with ChatGPT directly is one thing; receiving a ChatGPT-inflected email from a highly respected colleague is another.

Language is constantly evolving, and technological developments have long shaped the way people communicate (“haha,” anyone?). These effects are not inherently good or bad, though technological developments have often made language and communication more accessible: Most people regard the invention of the printing press as a welcome development in the history of the written word. LLMs follow in this vein. Whatever your opinion of the output’s quality, it has never been easier to turn your thoughts into smooth prose.

Recent technological developments have often encouraged, or even required, brevity: Many text messages and social-media posts, for example, come with hard character limits. Generally speaking, language operates on the principle that effort scales with length. Five paragraphs take more work than two sentences, for the sender to write and for the receiver to read. But AI tools could upset that balance, Simon Kirby, a professor of language evolution at the University of Edinburgh, told me. “What happens when you have a machine for which sending 10,000 words is the same, or roughly the same, as sending 1,000?” he said.

Kirby offered me a hypothetical: A person might give an AI tool a few bullet points to turn into a lengthy, professional-sounding email, only for the recipient to immediately run the prose through another tool that summarizes it before they read it. “Essentially, we’ve come up with a protocol in which the machines send very short bits of information to each other, encapsulated in flowery, formal language,” he said.

The linguists I spoke with speculated that the spread of AI writing could lead to a homogenization of language. “It’s easy to imagine that English will become more standardized around the standards of these language models,” said Jill Walker Rettberg, a professor of digital culture at the Center for Digital Narrative at the University of Bergen, in Norway. This has already happened to some extent with spell-checkers and grammar-checkers, which push writers toward whatever they deem “correct.” As AI tools become more common, people may treat their style as a template to follow, amplifying the homogenization. Just yesterday, Cornell University put out a study suggesting that this is already happening. In the experiment, AI writing tools “made Indian participants write more like Americans, thus homogenizing writing toward Western styles and diminishing nuances that differentiate cultural expression,” the authors wrote.

Philip Seargeant, an applied linguist at the Open University, in the U.K., told me that when students misuse AI tools, their work reads as too perfect, “but in a very bland way.” Kirby said that AI text lacks the errors and awkwardness he expects in student papers and has an “uncanny valley” quality to it. “It does have this feeling of there being nothing behind the eyes,” he said.

Several of the linguists I spoke with suggested that the spread of AI-written or AI-mediated text may prompt a backlash. Perhaps some people will rebel, leaning into their own linguistic idiosyncrasies to set themselves apart. Bender imagines people turning off AI features, or deliberately choosing a different synonym when prompted to use a certain word, as acts of resistance. Kirby told me he has seen some students take pride in not using AI writing tools. “There’s a way in which that kind of writing will be valued,” he said. “It will be the real deal, and it will be obvious, because you will intentionally lean into your quirks as a writer.” Rettberg compared it to choosing handmade goods over cheap, factory-made fare: Rather than losing value amid the wave of AI, human writing may be appreciated all the more, taking on the cachet of the handcrafted.

Ultimately, as language continues to evolve, AI tools will be both setting trends and playing catch-up. Because they are trained on existing data, they will always lag behind how people actually use language, even as they influence it. In fact, Kirby said, language may eventually evolve in tandem with humans’ use of AI tools. Large language models are typically trained on text from the internet, and as AI-generated text seeps into the web, these tools may end up training on their own output and entrenching their own linguistic style. For Kirby, this is a fascinating prospect. “We might find that these models start to drift, taking the language they produce in a particular direction, different from the one it would have evolved in if it had only been passed from human to human,” he said. This, he argued, is what could set generative AI apart from other technological advances in its impact on language: “We’ve inadvertently created a thing that can itself evolve culturally.”