The United Nations, the European Union, and the OECD have backed a report calling for laws to regulate artificial intelligence, which warns of cyberattacks and biological threats posed by AI.
Ahead of an artificial intelligence conference in France, experts from around the world have called for stronger regulation to keep AI under human control and reduce its risks. France's position is that governments, companies, and other organizations using AI should publicly back global AI rules and commit to implementing them.
Anne Bouverot, President Emmanuel Macron's special envoy for AI, said, "We do not want to spend all our time talking only about the risks. We also need to talk about the opportunities and benefits."
Meanwhile, Max Tegmark, head of the U.S.-based Future of Life Institute, urged France not to miss its chance to act on the issue.
Tegmark's institute also backed the launch of a platform called Global Risk and AI Safety Preparedness (GRASP), which aims to outline the major risks associated with artificial intelligence and solutions to mitigate them.
GRASP coordinator Cyrus Hodes said, "We have identified approximately 300 tools and technologies to address these risks."
He added that the survey's findings would be shared with the OECD, a club of wealthy nations, and the Global Partnership on Artificial Intelligence (GPAI), a group of nearly 30 countries that includes major European economies, Japan, South Korea, and the United States.
Last week, the first International AI Safety Report was also presented. Compiled by 96 experts, the report was backed by 30 countries, the United Nations, the European Union, and the OECD.
The report's coordinator, renowned computer scientist Yoshua Bengio, told the media that there is growing evidence of threats posed by AI, such as biological attacks or cyberattacks.
Bengio, a 2018 Turing Award winner, also warned that humans could lose control of AI systems, which would be extremely dangerous for the world.
Tegmark pointed to OpenAI's chatbot, noting that six years ago many dismissed the idea of ChatGPT-4 mastering any language as science fiction, yet everyone has seen it become reality.
He added, "The biggest issue now is that those in power still do not understand that although we are close to developing Artificial General Intelligence (AGI), the real challenge is how to control it."