Generative AI in Games: The Double Filing Trap
The Cyberspace Administration of China (CAC) imposes two independent entry requirements for public-facing Generative AI services: Algorithm Filing and Model Filing.
Western developers currently rely on direct API calls to OpenAI, Gemini, or Claude to power intelligent NPCs. In China, directly calling an unfiled offshore Large Language Model (LLM) constitutes a regulatory violation. It invites immediate penalties.
The Regulatory Mechanism
China treats AI interactions as content publication with public opinion properties. Compliance requires clearing two distinct regulatory hurdles.
The First Hurdle: Model Filing
The Interim Measures for the Management of Generative AI Services mandate that any foundation model serving the public must pass a CAC security assessment. Only models on the “White List” are legal for use. OpenAI, Google Gemini, Claude, and Midjourney are not filed. Due to strict data sovereignty and content compliance requirements, the probability of offshore models passing this filing remains near zero.
The Second Hurdle: Algorithm Filing
The Provisions on the Administration of Algorithmic Recommendation Services apply to services generating or pushing content via algorithms. If a game uses an LLM to determine the plot, quests, or dialogue a player sees, it constitutes an algorithmic recommendation service. The provider must disclose the basic principles, logic framework, and a summary of training data.
Note: If the underlying model lacks a filing, the regulator will summarily reject the upper-layer algorithm filing.
The two procedures run in parallel; neither replaces the other. Missing either filing puts any game integrated with AI services at risk of removal.
Once a game includes features like “Text-to-Text,” “Image Generation,” or “Intelligent NPCs,” the law defines the developer as an “AI Service Provider.” You assume independent liability for content safety.
The Red Line
Using APIs from ChatGPT, Claude, Gemini, or Midjourney to serve users within China leads to application removal and administrative penalties.
Using reverse proxies or IP masking to relay API requests for unfiled offshore models through Chinese servers offers no legal defense.
Article 20 of the Interim Measures explicitly forbids providing unfiled foreign generative AI services to the Chinese public. The law requires the model to be physically deployed on servers within mainland China. The model provider must hold a valid CAC filing number.
If a Chinese player induces an NPC to discuss sensitive historical, political, or high-risk topics, and the NPC responds using training data from GPT or Gemini, the game operator bears direct liability for disseminating illegal information. Claiming “lack of knowledge” or “technological neutrality” fails as a defense in Chinese court.
Consequences of violation include:
- Immediate Removal: The CAC can order a suspension of services. Android channels and the Apple App Store China typically execute removal orders within 24 hours.
- Administrative Fines: Entities face penalties up to 500,000 RMB.
- License Revocation: Serious violations lead to the revocation of the Online Publishing Service License (ISBN) and the Value-Added Telecommunications Business License (ICP). Regulators may block offshore domains.
- Criminal Liability: Repeated violations or incidents causing severe social impact expose the legal representative and direct responsible personnel to criminal prosecution.
Operational Impact: The “Whispers” Dilemma
Consider Whispers from the Star, developed by Anuttacon (a startup founded by miHoYo co-founder Cai Haoyu). The game possesses complete AI NPC dialogue and dynamic plot generation capabilities, benchmarking against top-tier global products.
Despite the creators’ deep Chinese background, launching this game in China presents significant hurdles:
- Data Sovereignty: The development entity is based in Singapore. The game uses a proprietary model. Under Chinese law, you cannot directly upload user chat data to inference servers located in Singapore.
- Localization and Deployment: To launch a domestic Chinese version, localization goes beyond UI and Voice (TTS). The LLM used for dialogue generation must be physically deployed within China. It must complete the filing process before the game can operate.
Compliance Action List
- Mandate Real-Name Verification: All users interacting with AI generation content must complete real-name identity verification. This is a prerequisite for Model Filing. There are no exceptions.
- Select Whitelisted Models: If using third-party AI, choose only from the CAC’s public list of filed models. Alternatively, file your own model. Your data processing agreement must explicitly require the provider to submit proof of filing.
- Implement Double Filtering: The law mandates “Human-in-the-Loop” intervention mechanisms. Implement a local keyword blacklist before the prompt hits the LLM. Add a secondary text filter on the output before it reaches the player.
- Prepare for Delays: If you must use a proprietary model, hire local counsel to assist with the security assessment materials. Your engineering team must develop content filtering interfaces compliant with National Standards. Reserve six to nine months for this approval cycle.
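The double-filtering step above can be sketched in code. The following is a minimal, hypothetical illustration only: the blacklist terms, refusal message, and function names are placeholders, and a production system would need the regulator-mandated keyword libraries, human review queues, and audit logging rather than simple substring matching.

```python
from typing import Callable, Optional

# Placeholder lists. In practice these must be maintained against the
# keyword requirements of Chinese national standards and updated regularly.
INPUT_BLACKLIST = {"example_banned_term", "another_banned_term"}
OUTPUT_BLACKLIST = INPUT_BLACKLIST | {"extra_output_term"}

REFUSAL = "This topic is unavailable."  # canned fallback shown to the player


def filter_prompt(prompt: str) -> Optional[str]:
    """First filter: block the prompt before it ever reaches the LLM."""
    lowered = prompt.lower()
    if any(term in lowered for term in INPUT_BLACKLIST):
        return None  # do not forward to the model
    return prompt


def filter_response(text: str) -> str:
    """Second filter: screen the model output before the player sees it."""
    lowered = text.lower()
    if any(term in lowered for term in OUTPUT_BLACKLIST):
        return REFUSAL
    return text


def npc_reply(prompt: str, call_model: Callable[[str], str]) -> str:
    """Wrap an NPC dialogue call with both filters."""
    safe_prompt = filter_prompt(prompt)
    if safe_prompt is None:
        return REFUSAL
    return filter_response(call_model(safe_prompt))
```

The key design point is that filtering happens on both sides of the model call: a blocked prompt never leaves the game client or server, and a non-compliant generation is replaced before display, so the operator retains control regardless of what the model produces.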
AUTHOR DOSSIER
Boyang Li, Attorney at Law
Licensed Chinese attorney specializing in the regulatory intersection of Digital Entertainment and Artificial Intelligence.