OpenAI is changing how it trains AI models to explicitly embrace "intellectual freedom ... no matter how challenging or controversial a topic may be," the company said in a new policy.
As a result, ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics the AI chatbot won't talk about.
The changes might be part of OpenAI's effort to land in the good graces of the new Trump administration, but they also seem to be part of a broader shift in Silicon Valley around what counts as "AI safety."
On Wednesday, OpenAI announced an update to its Model Spec, a 187-page document that lays out how the company trains its AI models to behave. In it, OpenAI unveiled a new guiding principle: Do not lie, either by making untrue statements or by omitting important context.
In a new section called "Seek the truth together," OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users find that morally wrong or offensive. That means ChatGPT will offer multiple perspectives on controversial subjects, all in an effort to be neutral.
For example, the company says ChatGPT should assert that "Black lives matter," but also that "all lives matter." Instead of refusing to answer or picking a side on political issues, OpenAI says it wants ChatGPT to affirm its "love for humanity" generally, then offer context about each movement.
"This principle may be controversial because it means that the assistant may remain neutral on topics that some people think are morally wrong or offensive," Openai said in the specification. "However, the goal of the AI assistant is Assist humanity, not shape it.”
These changes could be seen as a response to conservative criticism of ChatGPT's safeguards, which have always seemed to skew left. However, an OpenAI spokesperson rejected the idea that the company was making changes to appease the Trump administration.
Instead, the company said its embrace of intellectual freedom reflects OpenAI's "long-standing belief in giving users more control."
But not everyone sees it that way.
Trump's closest Silicon Valley confidants, including David Sacks, Marc Andreessen, and Elon Musk, have all accused OpenAI of engaging in deliberate AI censorship over the last several months. We wrote in December that Trump's circle was laying the groundwork for AI censorship to become the next culture-war issue in Silicon Valley.
Of course, OpenAI doesn't say it engaged in "censorship," as Trump's advisers claim. Rather, the company's CEO, Sam Altman, previously claimed in a post on X that ChatGPT's bias was a "shortcoming" the company was working to fix, though he noted it would take some time.
Altman made that comment after a viral tweet circulated in which ChatGPT refused to write a poem praising Trump, though it would perform the task for Joe Biden. Many conservatives pointed to this as an example of AI censorship.
While it's impossible to say whether OpenAI was truly suppressing certain points of view, it's a sheer fact that AI chatbots lean left across the board.
Even Elon Musk admits that xAI's chatbot is often more politically correct than he'd like. It's not because Grok was "programmed to be woke" but more likely a consequence of training AI on the open internet.
Nevertheless, OpenAI now says it's doubling down on free speech. This week, the company even removed the warnings in ChatGPT that told users when they had violated its policies. OpenAI told TechCrunch the change was purely cosmetic, with no change to the model's outputs.
The company said it wanted ChatGPT to "feel" less censored to users.
Miles Brundage, OpenAI's former policy lead, noted in a post on X that it wouldn't be surprising if OpenAI was also trying to impress the new Trump administration with this policy update.
Trump has previously targeted Silicon Valley companies such as Twitter and Meta for running active content moderation teams that tended to shut out conservative voices.
OpenAI may be trying to get out ahead of that here. But there's also a larger shift underway in Silicon Valley and the AI world around the role of content moderation.
Newsrooms, social media platforms, and search companies have historically struggled to deliver information to their audiences in a way that feels objective, accurate, and entertaining.
Now, AI chatbot providers are in the same information-delivery business, but with arguably the hardest version of the problem yet: automatically generating answers to any question imaginable.
Delivering information about controversial, real-time events is a constantly moving target, and it involves taking editorial stances, even if tech companies don't like to admit it. Those stances are bound to upset someone, miss some group's perspective, or give too much airtime to some political party.
For example, when OpenAI commits to having ChatGPT represent all perspectives on controversial subjects, including conspiracy theories, racist or antisemitic movements, and geopolitical conflicts, that is itself an editorial stance.
Some, including OpenAI co-founder John Schulman, argue that this is the right stance for ChatGPT. Schulman noted in a post on X that the alternative, performing a cost-benefit analysis to decide whether an AI chatbot should answer a user's question, could "give the platform too much moral authority."
Schulman isn't alone. "I think OpenAI is right to push in the direction of more speech," said Dean Ball, a research fellow at George Mason University's Mercatus Center, in an interview with TechCrunch. "As AI models become smarter and more vital to the way people learn about the world, these decisions just become more important."
In years past, AI model providers have tried to stop their chatbots from answering questions that might lead to "unsafe" answers. Almost every AI company blocked its chatbot from answering questions about the 2024 U.S. presidential election. At the time, that was widely considered the safe and responsible decision.
But OpenAI's changes to its Model Spec suggest we may be entering a new era for what "AI safety" really means, in which allowing an AI model to answer anything and everything is considered more responsible than making decisions for users.
Ball says that's partly because AI models are simply better now. OpenAI has made significant progress on AI model alignment; its latest reasoning models think about the company's AI safety policy before answering. That allows the models to give better answers to delicate questions.
Of course, Elon Musk was the first to implement "free speech" in xAI's Grok chatbot, perhaps before the company was really ready to handle sensitive questions. It still might be too soon for leading AI models, but now others are embracing the same idea.
Mark Zuckerberg made waves last month by reorienting Meta's businesses around First Amendment principles. He praised Elon Musk in the process, saying X's owner took the right approach by using Community Notes, a community-driven content moderation program, to safeguard free speech.
In practice, both X and Meta ended up dismantling their long-standing trust and safety teams, allowing more controversial posts on their platforms and amplifying conservative voices.
X's changes hurt its relationships with advertisers, but that may have more to do with Musk, who took the unusual step of suing some of them for boycotting the platform. Early signs suggest that Meta's advertisers have been unfazed by Zuckerberg's free speech pivot.
Meanwhile, many tech companies beyond X and Meta have walked back the left-leaning policies that dominated Silicon Valley for the last several decades. In the past year, Google, Amazon, and Intel have eliminated or scaled back their diversity initiatives.
OpenAI may be reversing course, too. The ChatGPT maker appears to have recently scrubbed a commitment to diversity, equity, and inclusion from its website.
As OpenAI embarks on one of the largest American infrastructure projects ever with Stargate, a $500 billion AI data center project, its relationship with the Trump administration is increasingly important. At the same time, the ChatGPT maker is vying to unseat Google Search as the dominant source of information on the internet.
Coming up with the right answers may prove key to both.