28 countries agree on a cooperation pact in the face of the risks posed by AI
Picture: cofotoism/Getty Images.
28 countries, including France, China and the United States, have agreed to collaborate to identify and manage potential risks related to artificial intelligence. It is the first multilateral agreement of its kind.
Published by the United Kingdom, the Bletchley Declaration on AI safety emphasizes that countries recognize the “urgent need” to ensure that artificial intelligence is developed and deployed in a “safe and responsible” way for the benefit of the global community. To this end, it stresses the need for broad international cooperation.
The declaration has already been approved by several countries in Asia, Europe and the Middle East, including Singapore, Japan, India, France, Australia, Germany, South Korea, the United Arab Emirates and Nigeria.
Potentially catastrophic risks
The signatory countries recognize that significant risks can result from the misuse of AI, whether intentional or unintentional, for example in the event of loss of control. In particular, they point to risks related to cybersecurity, biotechnology and disinformation.
According to the declaration, AI models are capable of causing serious and potentially catastrophic harm, and they also raise concerns around bias and privacy protection.
While acknowledging that these risks and capabilities are still not fully understood, the countries agreed to collaborate on a shared “scientific and factual understanding” of the risks posed by current and future AI technologies.
“A human-centered, trustworthy and responsible AI”
The Bletchley Declaration focuses on systems that encompass “high-performance general-purpose AI models”. This covers both foundation models capable of performing a wide range of tasks and more specific, narrower AI models.
“We have decided to work together in an inclusive way to guarantee human-centered, trustworthy and responsible artificial intelligence that is safe and serves the good of all, through existing international forums and other relevant initiatives,” the text reads. “In doing so, we recognize that countries should consider the importance of an innovation-friendly and proportionate governance and regulatory approach that maximizes the benefits and takes into account the risks associated with AI.”
This approach could lead to the establishment of classifications and categorizations of risks based on local cultural and legal contexts. It may also be necessary for countries to cooperate on new approaches, for example on common principles and codes of conduct.
Cooperate to prevent global risks
The group of 28 countries aims to develop policies addressing global risks, collaborating where necessary while recognizing that national approaches may differ. Beyond calling for greater transparency from the private actors developing state-of-the-art artificial intelligence capabilities, these new efforts include the development of relevant metrics and evaluation tools for safety testing, as well as public-sector and scientific research capacity.
For British Prime Minister Rishi Sunak, “this is a historic moment, when the world’s leading AI powers agree on the urgency of understanding the risks”.
“We have always said that no country can face the challenges and risks posed by AI alone, and today’s historic declaration marks the beginning of a new global effort to build public trust by ensuring the safe development of technology,” added UK Technology Minister Michelle Donelan.
A catalog project
A Singapore-led project, known as the Sandbox, was also announced this week, with the aim of providing a standard set of benchmarks for evaluating generative AI products. The initiative brings together the resources of major global players, including Anthropic and Google, and is guided by a catalog project that categorizes the current benchmarks and methods used to evaluate large language models.
The catalog compiles the commonly used technical testing tools, organizing them according to what they test and their methods, and recommends a basic set of tests to evaluate generative AI products. The goal is to establish a common language and support “a wider, secure and reliable adoption of generative AI”.
Last month, the United Nations (UN) set up an advisory body to examine how AI should be governed to mitigate potential risks, committing to a “globally inclusive” approach. The body currently has 39 members, including representatives of government agencies, private organizations and universities, among them the head of AI in the Singapore government, Spain’s Secretary of State for digitalization and AI, and the CTO of OpenAI.
Picture: ZDNet.com