Over the past month, several reports offered new details on the scale and tactics of the Chinese Communist Party’s (CCP) foreign influence operations. In late February, OpenAI published findings on how ChatGPT was being misused, including by an account linked to Chinese law enforcement, to plan and document what that user termed “cyber special operations.” Meta’s first adversarial threat report of 2026 detailed the takedown of a China-linked network targeting Taiwan. The European Council on Foreign Relations (ECFR) published an analysis of China’s influence playbook in Europe, drawing on a decade of documented activity in Poland and the Czech Republic. And the International Campaign for Tibet exposed a Chinese state-backed AI model designed to shape how Tibetan speakers view the region.
The OpenAI report, in particular, offers a rare window into the internal workings of what it termed a “well-resourced, meticulously-orchestrated strategy for covert influence operations against domestic and foreign adversaries.” The resource-intensive effort involves at least hundreds of staff and thousands of fake accounts operating across dozens of platforms worldwide. One status report that the user tried to draft with ChatGPT’s help referenced 300 operators in a single Chinese province, with equivalent teams described in other provinces. The operations also used locally deployed AI tools, including Chinese models like DeepSeek and Qwen, for tasks ranging from content generation to monitoring and profiling targets.
According to the OpenAI report, the PRC user referenced “over 100 tactics” that had been developed to manipulate narratives and to silence, discredit, or intimidate CCP critics. Across these reports, six tactics in particular illustrate how CCP foreign influence operations are evolving.
Customized AI Model for Narrative Control
In March 2026, Chinese state media announced the launch of DeepZang, billed as “the world’s first Tibetan large language model,” ostensibly designed for “global users seeking to learn about Tibetan culture, history, and politics.” In practice, the app delivers CCP talking points: it tells users Tibet has always been part of China, describes the Dalai Lama in line with official party positions, and when users ask about Tibetan independence, self-immolation protests inside Tibet, or the Tibetan national anthem, it instructs them to ask only “legally compliant” queries.
The name of the app itself serves the regime’s Sinicization campaign by incorporating a reference to the Chinese name for the region: Xizang. Within China, the government has simultaneously banned access to Monlam.ai, a Tibetan-language AI tool built by exile communities in India, which is in fact the world’s first Tibetan LLM.
Fabricating Evidence to Silence Dissidents
The OpenAI report documents several tactics used to target specific individuals. In one case, operators created a fake obituary for a living dissident, complete with AI-generated photos of a gravestone, and mass-posted it online. When seeking to get dissident accounts removed from social media platforms, operators filed abusive reports accompanied by AI-generated fake screenshots as fabricated “evidence” of policy violations, intended to trigger automated enforcement systems. In separate incidents, operators also forged documents purporting to be from a U.S. county court and submitted them to a social media platform in an attempt to force a takedown, an effort that reportedly failed but was deemed to show potential. In another example, Chinese agents reportedly disguised themselves as U.S. immigration officials and tried to intimidate a U.S.-based dissident by warning that the dissident’s public statements had broken the law.
Fake Business Targeting Western Officials
Accounts likely linked to China also reportedly used ChatGPT to draft English-language emails presenting a fictitious consulting firm called “Nimbus Hub” as a legitimate geopolitical advisory company — complete with a professional website and fake LinkedIn profiles for supposed team members. The emails targeted U.S. state-level officials and policy analysts working in business and finance, inviting them to paid consultations to “interpret policy and provide strategic advice,” while requesting information about American citizens and federal buildings. According to OpenAI, the messages were crafted with “subtle psychological cues” and designed to move recipients off-platform quickly.

Screenshot of a fake LinkedIn profile affiliated with the fictitious company Nimbus Hub Consulting. Credit: OpenAI.
Coordinated Fake Personas and Platform Manipulation
Meta’s first-quarter 2026 threat report documented the takedown of a China-linked network targeting audiences in Taiwan: 154 Facebook accounts, 23 Pages, and one Instagram account that together had accumulated around 93,000 followers. Pages with names like “Taiwan Gossip Net” and “New Generation Rebellion” claimed to be run by Taiwanese volunteers, used Taiwan-based proxy IPs, and wrote in traditional Chinese script to make the operation appear local.
The network encouraged users to submit anonymous grievances about Taiwanese public affairs to foster domestic discord and undermine the ruling party. Meta documented roughly $15,000 in Facebook and Instagram ad spending in support of the campaign. The OpenAI report also describes “cyber special operations teams” creating fake accounts on Bluesky that impersonated Chinese dissidents in an effort to occupy their identities and prevent those activists from building a presence on the platform. Five accounts impersonating a single dissident were all created on the same day.
Cloaking: Disguising State Media as Independent Journalism
According to the ECFR report, in the Czech Republic, a commercial rock radio station aired a 30-minute program called “Colorful World” six times a week for four years, totaling more than 1,000 episodes, before analysis revealed that every episode had been produced by China Radio International (CRI), a Chinese state-run broadcaster.
The ECFR report also documents a “laundering” technique: CRI publishes articles without prominent disclosure of their origin, and Czech alternative news outlets then republish them as their own work, so readers encounter the content as apparently domestic journalism with no visible connection to Beijing.
“Borrowing Mouths”: Paying Local Influencers to Carry CCP Messaging
The ECFR report relays several examples of social media influencers and ordinary users being offered payments or other monetary incentives, sometimes above market value, to produce content or engage with CRI accounts in order to boost those accounts’ apparent reach and authenticity.
For example, CRI reportedly paid Czech and Slovak students 20 euros each to record videos repeating prescribed CCP slogans during the pandemic. In a more elaborate version of the same tactic, CRI brought a Czech TikToker on a curated trip to Xinjiang, controlling what he could see and film; the resulting content appeared to his audience as independent travel reporting.
The ECFR describes the underlying approach as a “bait and switch”: build an audience through seemingly apolitical cultural content, then use that established platform to carry more strategic messaging. Indeed, the report found that CRI’s Czech Facebook account had over 1 million followers, far surpassing other local media in a country of 11 million people. Beyond Europe, the OpenAI report cites an example in which a China-linked operation reportedly asked local influencers to support a multi-faceted campaign to discredit Takaichi Sanae, now Japan’s prime minister.

Screenshot of a graphic from the ECFR report on China’s influence playbook in Europe. Credit: ECFR.
On Scale and Obfuscation
Two dynamics run through all of these examples: the sheer scale of the operations, and a consistent emphasis on obfuscation that makes the China-linked origins of content as difficult as possible for ordinary users to perceive.
While the monetary figures cited above may seem modest on their own, they are only the tip of the iceberg, and they illustrate a systematic, budgeted investment in making foreign influence look homegrown. Other findings in the reports reinforce how well-resourced these efforts are: hundreds of staff across numerous provinces, multiple people targeted simultaneously, and single campaigns deploying varied tactics across multiple platforms. None of this may surprise anyone who follows this space closely, but the collection of specific dollar figures and incidents is still notable.
In terms of obfuscation, the CCP’s propaganda apparatus has long since learned that international audiences are skeptical of content known to be tied to the party-state, so it is doubling down on tactics that hide those links. Fake local personas, laundered articles, cloaked radio programs, forged court documents, fictitious consulting firms: the goal in each case is to make foreign interference look like organic domestic activity, state messaging look independent, and fabricated content look real. The OpenAI report notes that some of the operations it documented had limited measurable impact, such as posts that generated little engagement and accounts that were quickly taken down. But impact is hard to assess when attribution is deliberately hidden and activity is spread across dozens of platforms simultaneously.
That is precisely what makes investigations like these, and basic awareness of the tactics they describe, important. A LinkedIn message from a professional-looking consulting firm, a travel video from a familiar content creator, a radio program on a commercial station: none of these automatically signals a foreign influence operation to an average user. But knowing that these are documented tactics used by Beijing builds resilience and provides context for how such content should be consumed. As these operations grow in scale and sophistication, that kind of literacy may be one of the more practical tools available to ordinary users trying to make sense of what they are reading and seeing.

