OpenAI CEO Sam Altman laid out a big vision for ChatGPT's future at an AI event hosted by VC firm Sequoia earlier this month.
When one attendee asked how ChatGPT could become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person's life.
The ideal, he said, is "a very tiny reasoning model with a trillion tokens of context that you put your whole life into."
"This model can reason across your whole context and do it efficiently. Every conversation you've ever had, every book you've ever read, every email you've ever read, everything you've ever looked at is in there, plus all your data from other sources. And your life just keeps appending to the context," he described.
"Your company is just doing the same for all your company's data," he added.
Altman may have some data-driven reason to think this is ChatGPT's natural future. In the same discussion, when asked about cool ways young people use ChatGPT, he said, "People in college use it as an operating system." They upload files, connect data sources, and then run "complex prompts" against that data.
Additionally, with ChatGPT's memory options, which can draw on previous chats and memorized facts as context, one trend he said he has noticed is that young people "don't really make life decisions without asking ChatGPT."
"A gross oversimplification is: older people use ChatGPT as, like, a Google replacement," he said. "People in their 20s and 30s use it like a life advisor."
It's not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents Silicon Valley is currently trying to build, that's an exciting future to imagine.
Imagine your AI automatically scheduling your car's oil changes and reminding you; planning the travel for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you've been reading for years.
But the scary part? How much should we trust a for-profit Big Tech company to know everything about our lives? These are companies that don't always behave in model ways.
Google, which began life with the motto "don't be evil," lost a lawsuit in the United States that accused it of engaging in anticompetitive, monopolistic behavior.
Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China's censorship requirements, but xAI's chatbot Grok was randomly discussing a South African "white genocide" this week. The behavior, many noted, suggested intentional manipulation of its response engine at the command of xAI's South African-born founder, Elon Musk.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman responded quickly, promising the team had fixed the tweak that caused the problem.
Even the best, most reliable models still make things up from time to time.
So, an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech's long history of questionable behavior, it's also a situation ripe for misuse.