A pro-China technology and anti-US influence operation thrives on YouTube
Executive Summary
ASPI has recently observed a coordinated inauthentic influence campaign originating on YouTube that promotes pro-China and anti-US narratives in an apparent effort to shift English-speaking audiences’ views of those countries’ roles in international politics, the global economy and strategic technology competition. This new campaign (which ASPI has named ‘Shadow Play’) has attracted an unusually large audience and uses AI-generated avatars and voiceovers as a tactic that enables broad reach and scale.1 It focuses on promoting a series of narratives, including China’s efforts to ‘win the US–China technology war’ amid US sanctions targeting China. It also focuses on Chinese and US companies, publishing, for example, pro-Huawei and anti-Apple content.
The Shadow Play campaign involves a network of at least 30 YouTube channels that have produced more than 4,500 videos. At the time of publication, those channels had attracted just under 120 million views and 730,000 subscribers. The accounts began publishing content around mid-2022. The campaign’s ability to amass and access such a large global audience—and its potential to covertly influence public opinion on these topics—should be cause for concern.
ASPI reported our findings to YouTube/Google on 7 December 2023 for comment. By 8 December, YouTube had taken down 19 channels from the Shadow Play network—10 for coordinated inauthentic behaviour and nine for spam. As of publication, those channels display a range of messages from YouTube indicating why they were taken down. For example, one channel was ‘terminated for violating YouTube’s community guidelines’, while another was ‘terminated due to multiple or severe violations of YouTube’s policy for spam, deceptive practices and misleading content or other Terms of Service violations’. ASPI also reported our findings to the British artificial-intelligence company Synthesia, whose AI avatars were used by the network. On 14 December 2023, Synthesia disabled the account used by one of the YouTube channels for violating its Media Reporting (News) policy.
We believe it likely that this new campaign is being operated by a Mandarin-speaking actor. Indicators of the actor’s behaviour don’t closely map to those of any known state actor that conducts online influence operations. Our preliminary analysis (see ‘Attribution’) is that the operator of this network could be a commercial actor operating under some degree of state direction, funding or encouragement. This could suggest that some patriotic companies increasingly operate China-linked campaigns alongside government actors.
The campaign focuses on promoting six narratives. Two of the most dominant are that China is ‘winning’ in crucial areas of global competition: first, in the ‘US–China tech war’ and, second, in the competition for rare earths and critical minerals.2 Other key narratives are that the US is headed for collapse and that its alliance partnerships are fracturing; that China and Russia are responsible, capable players in geopolitics; that the US dollar and the US economy are weak; and that China is highly capable and trusted to deliver massive infrastructure projects. Representative visual examples of each narrative from the network are in Appendix 1 on page 35.
Figure 1: An example of the style of content generated by the network, in which multiple YouTube channels published videos alleging that China had developed a 1-nanometre chip without using a lithography machine
Sources: ‘China Charged’, ‘China reveals the world’s first 1nm chip & SHOCKS the US!’, YouTube, 3 November 2023, online; ‘Relaxian’, ‘China’s groundbreaking 1nm chip: redefining technology and global power’, YouTube, 4 November 2023, online; ‘Vision of China’, ‘China breaks tech limit: EUV lithography not needed to make 1nm chips!’, YouTube, 17 July 2023, online; ‘China Focus—CNF’, ‘World challenge conquered: 1nm chips produced without EUV lithography!’, YouTube, 5 July 2023, online; ‘Curious Bay’, ‘China’s NEW 1nm chip amazes the world’, YouTube, 24 July 2023, online; ‘China Hub’, ‘China shatters tech boundaries: 1nm chips without EUV lithography? Unbelievable tech breakthrough!’, YouTube, 30 July 2023, online.
This campaign is unique in three ways. First, as noted above, there’s a notable broadening of topics. Previous China-linked campaigns have been tightly targeted, often focusing on a narrow set of topics. For example, in promoting narratives that present China as technologically superior to the US, this campaign makes detailed arguments on technology topics including semiconductors, rare earths, electric vehicles and infrastructure projects. In addition, it targets US technology firms such as Apple and Intel with criticism and disinformation. Chinese state media outlets, Chinese officials and online influencers sometimes publish on these topics in an effort to ‘tell China’s story well’ (讲好中国故事).3 A few Chinese state-backed inauthentic information operations have touched on rare earths and semiconductors, but never in depth or by combining multiple narratives in one campaign package.4 The broader set of topics and opinions in this campaign may demonstrate greater alignment with the known behaviour of Russia-linked threat actors.
Second, there’s a change in techniques and tradecraft: the campaign has leveraged AI. To our knowledge, this is one of the first times that video essays, together with generative-AI voiceovers, have been used as a tactic in an influence operation. Video essays are a popular style of medium-length YouTube video in which a narrator makes an argument through a voiceover while supporting content is displayed on the screen. This continues a trend in which threat actors increasingly use off-the-shelf video-editing and generative-AI tools to produce convincing, persuasive content at scale and build audiences on social-media services. We also observed one account in the YouTube network using an avatar created by Sogou, one of China’s largest technology companies (and a subsidiary of Tencent) (see page 24). We believe the Sogou avatar we identified to be the first instance of an AI avatar made by a Chinese company being used in an influence operation.
Third, unlike previous China-focused campaigns, this one has attracted a large number of views and subscribers. It has also been monetised, although only through limited means: for example, one channel accepted money from US and Canadian companies to support the production of its videos. The substantial number of views and subscribers suggests that the campaign is one of the most successful China-related influence operations ever witnessed on social media. Many China-linked influence operations, such as Dragonbridge (also known as ‘Spamouflage’ in the research community), have attracted initial engagement in some cases but have failed to sustain a meaningful audience on social media.5 However, further research by YouTube is needed to determine whether view and subscriber counts reflect real viewership, artificial manipulation or a combination of both. We note that, in our examination of YouTube comments on videos in this campaign, we saw signs of a genuine audience. ASPI believes that this campaign is probably larger than the 30 channels covered in this report, but we constrained our initial examination to channels we saw as core to the campaign. We also believe the network includes more channels publishing in languages other than English; for example, we saw channels publishing in Bahasa Indonesia that aren’t included in this report.
That’s not to say that the effectiveness of influence operations should be measured only through engagement numbers. As ASPI has previously demonstrated, Chinese Communist Party (CCP) influence operations that troll, threaten and harass on social media seek to silence and psychologically harm those they target, rather than to attract engagement.6 Similarly, influence operations can be used to ‘poison the well’ by crowding out the content of genuine actors in online spaces, or to poison datasets used for AI products, such as large language models (LLMs).7
This report also discusses another way that an influence operation can be effective: through its ability to spill over and gain traction in a wider system of misinformation. We found that at least one narrative from the Shadow Play network—that Iran had switched on its China-provided BeiDou satellite system—began to gain traction on X (formerly Twitter) and other social-media platforms within a few hours of its posting on YouTube. We discuss that case study on page 29.
This report offers an initial identification of the influence operation and some defining characteristics of a likely new influence actor. In addition to sections on attribution, methodology and analysis of this new campaign, the report concludes with a series of recommendations for governments and social-media companies, including:
- the immediate investigation of this ongoing information operation, including operator intent and the scale and scope of the YouTube channels involved
- broader efforts by the Five Eyes and allied partners to declassify information on social-media-based influence operations and share it with like-minded nations and relevant NGOs
- rules that require social-media users to disclose when generative AI is used in audio, video or image content
- national intelligence collection priorities that support the effective amalgamation of information on Russia-, China- and Iran-linked information operations
- publishing detailed threat indicators as appendixes in information operations research.
Full Report
The full report is available for download here.
14 Dec 2023