Operators must vet content created by their generative AI services, according to new draft guidelines proposed by China's internet regulator
Generative AI products will also need to pass a government security assessment before being offered to the public
China’s internet watchdog has unveiled a new set of draft guidelines targeting ChatGPT-like services, as governments across the globe attempt to rein in the rapid growth of generative artificial intelligence (AI) capabilities.
Companies that offer generative AI services in China must take steps to prevent discriminatory content, false information, and content that infringes on personal privacy or intellectual property, according to the draft rules released by the Cyberspace Administration of China (CAC) on Tuesday.
Businesses should also ensure that their products uphold core socialist values, and should not generate content that suggests regime subversion, violence or pornography, or disrupts economic or social order, the CAC stated.
All generative AI products must undergo a security assessment by the CAC before being made available to the public, as required by 2018 rules governing online information services capable of influencing public opinion, the regulator said.
Generative AI – which creates original content from text, image or audio prompts – has garnered significant attention in China since Microsoft-backed US start-up OpenAI unveiled its conversational bot ChatGPT in November.
In China, where ChatGPT remains officially unavailable, domestic companies have been racing to build rival services.
They include online search operator Baidu, which introduced its Ernie Bot last month, and e-commerce giant Alibaba Group Holding, which is looking to integrate its Tongyi Qianwen into all of its products. AI start-up SenseTime also unveiled its latest set of large AI models, dubbed SenseNova, earlier this week.
As interest in generative AI rises, however, Chinese authorities have kept a watchful eye on the risks posed by the fast-growing field. State-run media have repeatedly warned of a “market bubble” and “excessive hype” around generative AI technologies, with one outlet recently cautioning that ChatGPT could corrupt, rather than enhance, users’ moral judgment.
On Monday, the Payment &amp; Clearing Association of China, which is overseen by the country’s central bank, warned industry workers to be wary of the risks of using ChatGPT-like tools for work and to avoid uploading personal or sensitive information to such services.
Other countries are also weighing regulations on generative AI.
The US Commerce Department on Tuesday began soliciting public comments on how policymakers should approach more advanced AI technologies, while Italy last month became the first Western jurisdiction to ban ChatGPT over privacy concerns.
The European Union, meanwhile, is considering the Artificial Intelligence Act, first proposed in 2021, which would regulate the use of AI products according to their risk levels.
The EU framework, which takes a different approach from the CAC’s proposed guidelines, is likely to impose “extremely onerous compliance burdens on companies”, said Angela Zhang, associate law professor at the University of Hong Kong.
China’s proposed rules, by contrast, focus largely on content controls, she added.
“These content requirements are not new to Chinese internet companies, so I don’t think the publication of these new rules will add too onerous a burden to Chinese companies,” Zhang added.
Still, several experts have pointed to censorship as a major hurdle for Chinese companies seeking to build a viable rival to ChatGPT.
“Excessive restrictions, content regulation, and censorship could hinder commercialisation and further innovation of such technologies,” Hanna Dohmen, a research analyst at Georgetown University’s Center for Security and Emerging Technology, warned in February.
The CAC is accepting public comments on its proposed regulations until May 10.