The 4th AI High-Level Strategic Dialogue [Courtesy of the Ministry of Science and ICT]
The South Korean government has announced plans to improve the reliability of artificial intelligence (AI) generated content by considering the inclusion of watermarks. The government is also looking to establish a regulatory framework to facilitate voluntary AI reliability verification and certification by private entities, starting in November 2023.
The Ministry of Science and ICT made this announcement during the “4th AI High-Level Strategic Dialogue,” held at LG Science Park in Seoul on Wednesday. The discussion focused on the “AI Ethics and Trustworthiness Assurance Plan,” which aims to make trustworthy AI a foundation for the country’s competitiveness in AI technology.
The 4th AI High-Level Strategic Dialogue, a top-level, public-private consultative body for national AI competitiveness, brings together government and private sector stakeholders to discuss policy, investment directions, and AI strategic collaboration. The dialogue was convened to discuss the practical steps for enhancing AI trustworthiness following commitments made by businesses during an AI policy promotion event in September, which was attended by President Yoon Suk Yeol.
The ministry’s plan to bolster AI trustworthiness focuses on several key aspects, including private sector self-regulation, technological and institutional foundations, and widespread AI awareness. Specific actions include developing sector-specific guidelines for generative AI technologies, with private-sector self-regulated reliability verification and certification to begin next month.
The government plans to pilot AI certification for companies engaged in high-risk AI development and trial projects by December. Additionally, in response to technical limitations and potential issues such as malfunctions, the government will launch new technology development initiatives in 2024.
A substantial investment of 22 billion won ($16.2 million) from 2022 to 2027 will be allocated to develop next-generation generative AI technologies, and the introduction of labeling for AI-generated content is also on the agenda. The government will recommend the use of watermarks to help users identify AI-generated content and is considering making AI-generated content labeling mandatory via a phased approach after industry feedback. Explanatory documents for high-risk AI will also be provided in the first quarter of 2024.
By Kang Bong-jin and Minu Kim
[ⓒ Pulse by Maeil Business Newspaper & mk.co.kr, All rights reserved]