If you reach a point where progress outpaces your ability to keep the system safe, will you stop?
I don't think today's systems pose any existential risk, so it's still theoretical. The geopolitical questions may actually end up being trickier. But given enough time, enough care and thoughtfulness, and the use of the scientific method…
If the timeline is tight, there won't be much time for care and thoughtfulness.
Right, there isn't a lot of time. We are increasingly investing resources in safety and cybersecurity, and in research on the systems themselves, sometimes called mechanistic interpretability. At the same time, we also need a societal debate about institution building: How do we want governance to work? How do we reach international agreement, at least on some basic principles, about how these systems should be used, deployed, and built?
How much do you think AI will change or eliminate people's jobs?
What usually happens is that new jobs are created that take advantage of the new tools or technologies, and those jobs are actually better. We'll see if it's different this time, but over the next few years we'll have these incredible tools that boost our productivity and actually make us a bit superhuman.
If AGI can do everything humans can do, it seems it could take on those new jobs too.
There are many things we won't want a machine to handle. AI tools can assist doctors, or you might even use an AI doctor. But you probably don't want a robot nurse: there is something about the compassionate side of care that is distinctly human.
Tell me what you envision when you look 20 years into the future, when, by your prediction, AGI is everywhere.
If everything goes well, we should be in an era of radical abundance, a kind of golden age. AGI can solve what I call the root problems in the world: curing terrible diseases, making us healthier and longer-lived, discovering new energy sources. If all of that happens, it should be an era of maximum human flourishing, where we travel to the stars and settle the galaxy. I think this will begin to happen around 2030.
I'm skeptical. We have incredible wealth in the Western world, but we don't distribute it fairly. As for solving big problems, it's not that we can't solve them; it's that we choose not to. We don't need AGI to tell us how to fix climate change. We know how to fix climate change. We just don't do it.
I agree. As a species, we have always been bad at cooperating. The destruction of our natural habitat continues partly because stopping it would require people to make sacrifices, and people don't want to. But this radical abundance would make things feel like a non-zero-sum game...
Will AGI change human behavior?
Yes. Let me give you a very simple example. Access to water is going to be a huge problem, but we have a solution: desalination. It takes a lot of energy, but if there were renewable, free, clean energy, say from fusion, because AI helped develop it, then the water problem is suddenly solved. Suddenly it's no longer a zero-sum game.