Major Tech Companies Unite for AI Safety at Seoul Summit
In a landmark agreement, major technology companies, including Microsoft, Amazon, and OpenAI, have committed to ensuring the safe development of their advanced artificial intelligence (AI) models. The pledge was made at the AI Seoul Summit, which brought together tech giants from the U.S., China, Canada, the U.K., France, South Korea, and the United Arab Emirates.
As part of the international accord, the companies will voluntarily implement safety measures for their most sophisticated AI models. Developers that have not already done so agreed to establish and publish safety frameworks outlining how they intend to address the challenges posed by their cutting-edge models, such as preventing the misuse of AI by malicious actors.
Central to these frameworks are clearly defined "red lines" marking risks deemed "intolerable," including automated cyberattacks and the potential creation of bioweapons. For these extreme threats, the companies plan to implement a "kill switch" mechanism: a commitment to halt development of an AI model if its risks cannot be adequately mitigated.
U.K. Prime Minister Rishi Sunak hailed the agreement as a global milestone. “It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Sunak said. “These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI.”
The pact builds on commitments made by companies developing generative AI software at the U.K.'s inaugural AI Safety Summit at Bletchley Park in November 2023. Moving forward, the companies will seek input on these safety thresholds from "trusted actors," including their respective governments, before finalizing them ahead of the AI Action Summit in France in early 2025.
The agreed commitments specifically target so-called frontier models, the technology behind generative AI systems like OpenAI’s GPT family, which powers the ChatGPT chatbot. Since the launch of ChatGPT in November 2022, concerns about the risks associated with advanced AI systems have intensified among regulators and tech leaders.
In response, the European Union has introduced the AI Act, approved by the EU Council, to regulate AI development. In contrast, the U.K. has adopted a "light-touch" regulatory approach, applying existing laws to AI technologies rather than proposing new legislation. However, the U.K. government has indicated that it may legislate for frontier models in the future, though no timeline has been set.
The agreement marks a significant step toward harmonizing global efforts to ensure the responsible and safe development of AI, reflecting a shared commitment to addressing the ethical and practical challenges posed by rapidly advancing technology.