G-7 ministers agree to ‘five principles’ for assessing AI risks

Written by Nikkei Asia

The group seeks to avoid incompatible standards that would impede development of the technology.

The Group of Seven leading industrialized nations will call for the creation of international standards for assessing the risks associated with generative artificial intelligence at the digital and technology ministers meeting that opened Saturday to promote the technology’s prudent development.

The idea will be included in a statement to be issued at the Sunday close of the meeting, which is being held in the Japanese city of Takasaki in Gunma prefecture, as the group explores ways to curb the spread of bias and misinformation, infringement of copyrights and other harmful effects due to AI.

Participants on Saturday agreed to five principles for the appropriate use of AI and other developing technologies: rule of law, due process, utilizing opportunities for innovation, democracy and respect for human rights.

“We were able to share policies for promoting the development and utilization of AI,” Takeaki Matsumoto, Japan’s minister of internal affairs and communications, said after the meeting.

In light of concerns over AI, the G-7 agreed to a plan to establish uniform standards to prepare for its widespread use.

The G-7 is seeking international standards for evaluating AI technologies because regulating every use would inhibit the progress of the technology. While respecting regulations set by each country, the aim is to ensure that AI risk assessments do not become internationally disparate.

The standards will address such issues as whether AI programs learn from unbiased data and whether AI-driven hiring discriminates based on race, location, or other factors.

The standards are expected to require that data used to teach AI technology be stored to ensure transparency.

Calls for human supervision of AI, data processing that ensures privacy protection, and a defense system against cyberattacks will also be included in the statement. The Organisation for Economic Co-operation and Development and private players, including the Alan Turing Institute in the UK, are taking the lead in creating the basis for evaluation standards.

The European Union, which began discussing legislation in 2021, is leading the way in AI regulation.

In regulations under consideration by the bloc, AI applications with a high risk of violating human rights in areas such as employment, education and medical care will be allowed only if they meet certain standards. Details are expected to be released next year.

European Commission Executive Vice President Margrethe Vestager told Nikkei that, given its explosive adoption, the commission intends to have rules governing services like ChatGPT in place by December.

The US and Japan have up to now explored ways to create flexible guidelines in the public and private sectors, but the widespread use of ChatGPT has led some to suggest there may be a need to establish some hard regulations.

The US Commerce Department, concerned that AI could be misused for discrimination and the spread of false news, has begun inviting suggestions for creating an AI auditing system.

Japan will also establish an AI strategy council in May to look into policies for the government’s ministries and agencies. While regulatory discussions are proceeding at a rapid pace, G-7 countries will take a coordinated approach to ensure regulations do not become disjointed and chill investment in AI development.

The Saturday meeting also discussed measures to facilitate the smooth distribution of data across borders.

“Generative AI cannot work without data, which is a critical component for progress,” Taro Kono, Japan’s digital minister, said at a news conference.

This article first appeared on Nikkei Asia. It has been republished here as part of 36Kr’s ongoing partnership with Nikkei.
