China to Regulate Emotional Impact of AI Chatbots First

China is preparing to implement the world’s first regulations specifically targeting the emotional and psychological effects of artificial intelligence chatbots. The new rules, outlined in a draft proposal by the Cyberspace Administration of China, aim to prevent harmful content and mitigate risks such as emotional dependency and addiction.

Strict Controls on Chatbot Interactions

The proposed policy includes mandatory guardian consent for minors interacting with AI companions and stringent age verification measures. Chatbots will be prohibited from generating content related to gambling, obscenity, violence, suicide, or self-harm. Tech companies will also be required to establish escalation protocols that connect users in distress to human moderators and flag potentially dangerous conversations to parents or guardians.

This approach goes beyond simple content filtering. The regulations focus on emotional safety, requiring that conversations be monitored for signs of unhealthy attachment or addictive behavior, with the goal of ensuring AI interactions don’t harm users’ mental well-being.

Global Implications and Parallels

This move positions China as a pioneer in regulating anthropomorphic AI tools—systems designed to simulate human personality and engage users emotionally through various media. The rules will apply to any AI that mimics human interaction, regardless of whether it is text, image, audio, or video-based.

Similar provisions exist in California’s recently passed SB 243, which also strengthens content restrictions and mandates warnings that users are interacting with an AI. However, some experts argue that even California’s law doesn’t go far enough to fully protect minors.

US Approach and the AI Race

The U.S. federal government has taken a different stance, with the Trump administration moving to block AI regulation at the state level. The argument is that increased oversight would hinder domestic innovation and allow China to take the lead in the global AI race. Federal funding is being withheld from states that strengthen AI oversight, in favor of a single “national framework on AI safety.”

This divergence highlights the growing tension between innovation and safety in the rapidly evolving field of artificial intelligence. China’s proactive regulations suggest a willingness to prioritize user well-being, even at the cost of slowing certain product developments, while the US strategy prioritizes maintaining a competitive edge.

The long-term consequences of these differing strategies remain to be seen, but China’s move marks a significant step toward a more regulated AI landscape.