BRUSSELS, October 29, 2023 – The Group of Seven industrialized nations will officially endorse a code of conduct for companies developing advanced artificial intelligence systems, according to a G7 document, as governments seek to head off the risks and potential misuse of the technology.
The voluntary code will mark a milestone in how major countries approach AI governance amid mounting concerns over privacy and security risks, according to the document seen by Reuters.
Leaders of the Group of Seven (G7) economies, made up of Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, along with the European Union, launched the effort in May at a ministerial forum known as the "Hiroshima AI process."
The eleven-point code aims to promote safe, secure and trustworthy AI worldwide, and offers voluntary guidance for organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems, according to the G7 document.
Its aim, the document says, is to help realize the benefits of these technologies while addressing the risks and challenges they pose.
The code urges companies to take appropriate measures to identify, evaluate and mitigate risks across the AI lifecycle, and to tackle incidents and patterns of misuse after AI products have been placed on the market.
Companies are also encouraged to publish public reports on the capabilities, limitations, use and potential misuse of their AI systems, and to invest in robust security controls.
The European Union has been at the forefront of regulating the emerging technology with its stringent AI Act, while Japan, the United States and countries in Southeast Asia have taken a more hands-off approach to boost economic growth.
Speaking at an internet governance forum in Kyoto, Japan, earlier this month, Vera Jourova, the European Commission's digital chief, said a code of conduct was a strong basis for ensuring safety and would act as a bridge until more comprehensive regulation is in place.
Source: https://www.reuters.com/technology/g7-agree-ai-code-conduct-companies-g7-document-2023-10-29/