A philosopher released a celebrated book on digital manipulation. Its author turned out to be an AI creation.

"When readers discover the truth about the creation of this book, many people were hurt. I feel deeply sorry, but it is necessary," he said.

Lian spoke with Colamedici, exploring the nuances of his project.

This interview has been edited for length and clarity.

Wired: What inspired this philosophical experiment?

Andrea Colamedici: First, some background: I taught prompt thinking at the European Institute of Design and led a research project on artificial intelligence and systems of thought at the University of Foggia. Working with my students, I realized they were using ChatGPT in the worst possible way: copying from it. I saw that by relying on AI like this they were losing their grip on understanding, which is striking, because we live in an era in which we can dive into an ocean of knowledge, yet we don't know what to do with it. I often warn them: "You can get good grades, even cheat with ChatGPT, maybe even build a great career, but you will end up empty." I have trained professors from several Italian universities, and many of them asked me: "When can I stop learning how to use ChatGPT?" The answer is never. It's not about completing an education in AI; it's about how you keep learning while you use it.

We have to stay curious while using this tool properly, and teach it to work the way we want it to. It all starts from a crucial distinction: there is information that makes you passive and erodes your ability to think over time, and there is information that challenges you and pushes you beyond your limits. That is how we should use AI: as an interlocutor that helps us think differently. Otherwise we fail to see that these tools are designed by big tech companies that impose a certain ideology. They choose the data, they choose how it is connected, and, most importantly, they see us as customers to be satisfied. If we use AI that way, we will only confirm our own biases. We will think we are right, but we will not really be thinking. We will be digitally numbed, and we cannot afford that numbness. That is the book's starting point.

The second challenge is how to describe what is happening now. For Gilles Deleuze, philosophy is the ability to create concepts, and today we need new concepts to understand our reality. Without them, we are lost. Just look at Trump's AI-generated Gaza video, or the provocations of figures like Musk. Without solid conceptual tools, we will sink. A good philosopher creates concepts that work like keys, letting us make sense of the world.

What is the goal of the new book?

The book tries to do three things: help readers become AI-literate, invent a new concept for this era, and be both theoretical and practical. Many people were hurt when they discovered the truth about how the book was created. I am deeply sorry, but it was necessary. Someone said, "I wish this author existed." Well, he doesn't. We have to understand the narratives we build for ourselves. If we don't, the far right will monopolize storytelling and create its myths, and we will spend our lives fact-checking while they write history. We can't let that happen.

How did you use AI to help you write this philosophical book?

I want to be clear that AI did not write this book. Yes, I used AI, but not in the usual way. I developed a method, drawing on my teaching at the European Institute of Design: a way of thinking with machine learning and using it in an oppositional way. Instead of asking the machine to write for me, I produce ideas and then push back on them with GPT and Claude, sharpening my own view of what I am writing. Everything written in the book is mine. Artificial intelligence is a tool we have to learn to use, because if we abuse it, and "abuse" includes treating it as an oracle, asking it to "tell me the answer to the world's problems; explain to me why I exist," then we lose the ability to think. We become stupid. Nam June Paik, the great video artist, said, "I use technology in order to hate it properly." That's what we have to do: understand it, because if we don't, it will use us. AI will become Big Tech's means of controlling and manipulating us. We must learn to use these tools properly; otherwise we will face a serious problem.