Artificial intelligence systems capable of feelings or self-awareness could be harmed if the technology is developed irresponsibly, according to an open letter signed by AI practitioners and thinkers, including Sir Stephen Fry.
More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as rapid advances raise concerns that such systems could come to be considered sentient.
The principles include prioritising research on understanding and assessing consciousness in AI systems, in order to prevent "mistreatment and suffering".
The other principles are: setting constraints on the development of conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.
Signatories of the letter include academics such as Sir Anthony Finkelstein of the University of London, as well as AI professionals from companies including Amazon and the advertising group WPP.
The letter is published alongside a new research paper outlining the principles. The paper argues that conscious AI systems could be built in the near future, or at least systems that give the impression of being conscious.
"It may be the case that large numbers of conscious systems could be created and caused to suffer," the researchers said, adding that if powerful AI systems were able to reproduce themselves, it could lead to the creation of "large numbers of new beings deserving moral consideration".
The paper, written by Patrick Butlin of Oxford University and Theodoros Lappas of the Athens University of Economics and Business, adds that even companies not intending to create conscious systems will need guidelines in case they "inadvertently create conscious entities".
It acknowledges that there is widespread uncertainty and disagreement over how to define consciousness in AI systems and whether it is even possible, but says this is an issue "we must not ignore".
Other issues raised in the paper focus on what to do if an AI system came to be defined as a "moral patient": an entity that matters morally "in its own right, for its own sake". In that scenario, it questions whether destroying the AI would be comparable to killing an animal.
The paper, published in the Journal of Artificial Intelligence Research, also warns that a mistaken belief that AI systems are already conscious could lead to a waste of political energy on misguided efforts to promote their welfare.
The paper and letter were organised by Conscium, a research organisation part-funded by WPP and co-founded by WPP's chief AI officer, Daniel Hulme.
Last year, a group of senior academics argued that by 2035 some AI systems could be conscious and "morally significant".
In 2023, Sir Demis Hassabis, the head of Google's AI programme, said AI systems were "definitely" not sentient at present but could be in the future.
"Philosophers haven't really settled on a definition of consciousness yet, but if we mean a sort of self-awareness, these kinds of things, I think it's a possibility one day," he said in an interview with the US broadcaster CBS.