
Seoul — Sixteen companies involved in artificial intelligence (AI), including Alphabet’s Google, Meta, Microsoft and OpenAI, as well as companies from China, South Korea and the United Arab Emirates (UAE) have committed to safe development of the technology.

The announcement unveiled in a UK government statement on Tuesday came as South Korea and Britain host a global AI summit in Seoul at a time when the breakneck pace of AI innovation leaves governments scrambling to keep up.

The agreement was a step up from the number of commitments at the first global AI summit held six months ago, the statement said.

Zhipu.ai, backed by Chinese tech giants Alibaba, Tencent, Meituan and Xiaomi, as well as the UAE’s Technology Innovation Institute, were among the 16 companies pledging to publish safety frameworks on how they will measure the risks of frontier AI models.

The firms, also including Amazon, IBM and Samsung Electronics, voluntarily committed to not develop or deploy AI models if the risks could not be sufficiently mitigated, and to ensure governance and transparency on approaches to AI safety, the statement said.

“It’s vital to get international agreement on the ‘red lines’ where AI development would become unacceptably dangerous to public safety,” said Beth Barnes, founder of METR, a nonprofit focused on AI model safety.

The AI summit in Seoul this week aims to build on a broad agreement at the first summit held in the UK to better address a wider array of risks.

At the November summit, Tesla’s Elon Musk and OpenAI CEO Sam Altman mingled with some of their fiercest critics, while China cosigned the “Bletchley Declaration” on collectively managing AI risks alongside the US and others.

British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol will oversee a virtual summit later on Tuesday, followed by a ministerial session on Wednesday.

This week’s summit would address “building ... on the commitment from the companies, also looking at how the [AI safety] institutes will work together”, Britain’s technology secretary, Michelle Donelan, said on Tuesday.

Since November, discussion on AI regulation had shifted from longer-term doomsday scenarios to “practical concerns” such as how to use AI in areas like medicine or finance, said Aidan Gomez, co-founder of large language model firm Cohere.

Industry participants wanted AI regulation that would give clarity and security on where the companies should invest, while avoiding entrenching big tech, Gomez said.

With countries such as the UK and US establishing state-backed AI safety institutes for evaluating AI models and others expected to follow suit, AI firms were also concerned about the interoperability between jurisdictions, analysts said.

Representatives of the Group of Seven (G7) major democracies were expected to take part in the virtual summit, while Singapore and Australia were also invited, a South Korean presidential official said.

China would not participate in the virtual summit but is expected to attend Wednesday’s in-person ministerial session, the official said.

South Korea’s foreign ministry said Musk, former Google CEO Eric Schmidt, Samsung Electronics chair Jay Y Lee and other AI industry leaders would participate in the summit.

Reuters
