A balanced approach to protecting our digital ecosystems
What’s the problem?
Artificial intelligence (AI)–enabled systems make many invisible decisions affecting our health, safety and wealth. They shape what we see, think, feel and choose; they calculate our access to financial benefits as well as our transgressions; and now they can generate complex text, images and code just as a human can, but much faster.
So it’s unsurprising that moves are afoot across democracies to regulate AI’s impact on our individual rights and economic security, notably in the European Union (EU).
But, if we’re wary about AI, we should be even more circumspect about AI-enabled products and services from authoritarian countries that share neither our values nor our interests. And, for the foreseeable future, that means the People’s Republic of China (PRC)—a revisionist authoritarian power demonstrably hostile to democracy and the rules-based international order, which routinely uses AI to strengthen its own political and social stability at the expense of individual human rights. In contrast to other authoritarian countries such as Russia, Iran and North Korea, China is a technology superpower with global capacity and ambitions and is a major exporter of effective, cost-competitive AI-enabled technology into democracies.
In a technology-enabled world, the threats come at us ‘at a pace, scale and reach that is unprecedented’.1 And, if our reliance on AI is also without precedent, so too is the opportunity—via the magic of the internet and software updates—for remote, large-scale foreign interference, espionage and sabotage through AI-enabled industrial and consumer goods and services inside democracies’ digital ecosystems. AI systems are embedded in our homes, workplaces and essential services. More and more, we trust them to operate as advertised, always be there for us and keep our secrets.
Notwithstanding the honourable intentions of individual vendors of Chinese AI-enabled products and services, they’re subject to direction from PRC security and intelligence agencies, so we in the democracies need to ask ourselves: against the background of growing strategic competition with China, how much risk are we willing to bear?
We should worry about three kinds of Chinese AI-enabled technology:
- products and services (often physical infrastructure), where PRC ownership exposes democracies to risks of espionage (notably surveillance and data theft) and sabotage (disruption and denial of products and services)
- AI-enabled technology that facilitates foreign interference (malign covert influence on behalf of a foreign power), the most pervasive example being TikTok
- large language model (LLM) AI and other emerging generative AI systems—a future threat that we need to start thinking about now.
While we should address the risks in all three areas, this report focuses more on the first category (and indeed looks at TikTok through the prism of the espionage and sabotage risks that such an app poses).
The underlying dynamic with Chinese AI-enabled products and services is the same as that which prompted concern over Chinese 5G vendors: the PRC Government has the capability to compel its companies to follow its directions, it has the opportunity afforded by the presence of Chinese AI-enabled products and services in our digital ecosystems, and it has demonstrated malign intent towards the democracies.
But this is a more subtle and complex problem than deciding whether to ban Chinese companies from participating in 5G networks. Telecommunications networks are the nervous systems that run down the spine of our digital ecosystems; they’re strategic points of vulnerability for all digital technologies. Protecting them from foreign intelligence agencies is a no-brainer and worth the economic and political costs. And those costs are bounded because 5G is a small group of easily identifiable technologies.
In contrast, AI is a constellation of technologies and techniques embedded in thousands of applications, products and services, so the task is to identify where on the spectrum between national-security threat and moral panic each of these products sits. And then pick the fights that really matter.
What’s the solution?
A general prohibition on all Chinese AI-enabled technology would be extremely costly and disruptive. Many businesses and researchers in the democracies want to continue collaborating on Chinese AI-enabled products because it helps them to innovate, build better products, offer cheaper services and publish scientific breakthroughs. The policy goal here is to take prudent steps to protect our digital ecosystems, not to economically decouple from China.
What’s needed is a new three-step framework to identify, triage and manage the riskiest products and services. The intent is similar to that proposed in the recently introduced draft US RESTRICT Act, which seeks to identify and mitigate foreign threats to information and communications technology (ICT) products and services, although the focus here is on teasing out the most serious threats.
Step 1: Audit. Identify the AI systems whose purpose and functionality concern us most. What’s the potential scale of our exposure to this product or service? How critical is this system to essential services, public health and safety, democratic processes, open markets, freedom of speech and the rule of law? What are the levels of dependency and redundancy should it be compromised or unavailable?
Step 2: Red Team. Anyone can identify the risk of embedding many PRC-made technologies into sensitive locations, such as government infrastructure, but, in other cases, the level of risk will be unclear. For those instances, you need to set a thief to catch a thief. What could a team of specialists do if they had privileged access to (that is, ‘owned’) a candidate system identified in Step 1—people with experience in intelligence operations, cybersecurity and perhaps military planning, combined with relevant technical subject-matter experts? This is the real-world test because all intelligence operations cost time and money, and some points of presence in a target ecosystem offer more scalable and effective opportunities than others. PRC-made cameras and drones in sensitive locations are a legitimate concern, but crippling supply chains through accessing ship-to-shore cranes would be devastating.
For example, we know that TikTok data can be accessed by PRC agencies and reportedly can also reveal a user’s location, so it’s obvious that military and government officials shouldn’t use the app. Journalists should think carefully about this, too. Beyond that, the merits of a general ban on technical security grounds are murky. Could our Red Team use the app to jump onto connected mobiles and IT systems to plant spying malware? What system mitigations could stop them getting access to data on connected systems? If the team revealed serious vulnerabilities that can’t be mitigated, a general ban might be appropriate.
Step 3: Regulate. Decide what to do about a system identified as ‘high risk’. Treatment measures might range from prohibiting Chinese AI-enabled technology in some parts of the network to a ban on government procurement or use, or even a general prohibition. Short of that, governments could insist on measures to mitigate the identified risk or dilute the risk through redundancy arrangements. And, in many cases, public education efforts along the lines of those run by the new UK National Protective Security Authority may be an appropriate alternative to regulation.
The democracies need to think harder about Chinese AI-enabled technology in our digital ecosystems. But we shouldn’t overreact: our approach to regulation should be anxious but selective.
It seems like an age since we worried about China’s dominion over the world’s 5G networks. These days, the digital authoritarian threat feels decidedly steampunk—Russian missiles powered by washing-machine chips and stately Chinese surveillance balloons. And, meanwhile, our short attention spans are centred (ironically) on TikTok—an algorithmically addictive short video app owned by Chinese technology company ByteDance.
More broadly, there are widespread concerns that ‘large language model’ (LLM) generative AI such as ChatGPT will despoil our student youth, replace our jobs and outrun the regulatory capacity of the democracies.2 To be sure, the way we trust and depend on AI to sustain and improve our lives is an experiment without precedent in human history. We rely on AI to make invisible decisions affecting our health, safety and wealth in critical public infrastructure and financial markets. Online, it shapes what we see, think, feel and choose. It knows more about us than we know ourselves, so it’s handy for gatekeeping access to the things we want, such as jobs, welfare, credit and insurance. It calls out our transgressions when it calculates that we’ve committed fraud or traffic violations, and it predicts our risk of committing criminal offences3 and dying from disease. And now AI can generate complex text, images and code, which hitherto only humans could do, and do it in a fraction of the time.
So, understandably, many citizens and democratic governments are mistrustful of AI’s impact on our individual rights and economic security. Moves are afoot across the democracies to regulate AI, notably in the EU, which is poised to enact comprehensive AI regulations. In June 2023, the Australian Government also foreshadowed regulation to ‘ensure the growth of artificial intelligence in Australia is safe and responsible’.4
But if we in the democracies are wary about AI, we should be even more circumspect about AI-enabled products and services from authoritarian countries that share neither our values nor our interests.
And, for the foreseeable future, that means the PRC. In the span of a generation, China has transfigured its technological base. Once barely capable of producing third-rate knock-offs of second-rate Soviet designs, it’s now a peer tech competitor with the US in the field of leading-edge AI.5 Kicked along by the search for technology to address the PRC Government’s internal-security concerns, China’s tech companies are now exporting their AI-enabled technology to the world.
Chinese AI-enabled products are price competitive and effective, which no doubt is why almost 1,000 Chinese-made surveillance cameras were installed across Australian Public Service agencies.6 However, Chinese companies are also subject to the direction of the Chinese state.7
The challenge for democracies is how to manage the security risks posed by Chinese AI-enabled products and services. The underlying dynamic is the same as that which prompted concern over Chinese 5G vendors: the PRC Government has the capability to compel its companies to follow its directions, it has the opportunity afforded by the presence of Chinese AI-enabled products and services in our digital ecosystems, and it has demonstrated malign intent towards the democracies.
But this is a more subtle and complex problem than deciding whether to ban Chinese companies from participating in 5G networks. Telecommunications networks are the nervous systems that run down the spine of our digital ecosystems; they’re strategic points of vulnerability for all digital technologies. Because of the nature of 5G technology, even extensive security mitigations can’t shield sensitive data and network functions from vendors under instruction from Chinese security agencies.8 Protecting these networks is a no-brainer and is worth the economic and political costs. And those costs are bounded because 5G is a small group of easily identifiable technologies.
In contrast, AI is a constellation of technologies and techniques embedded in thousands of applications, products and services, so the task is to identify where on the spectrum between national-security threat and moral panic each of those products sits. And then pick the fights that really matter.
This report is broken down into six sections. The first section highlights our dependency on AI-enabled products and services. The second examines China’s efforts to export AI-enabled products and services and promote its model of digitally enabled authoritarianism, in competition with the US and the norms and values of democracy. This section also surveys PRC laws compelling tech-sector cooperation and explains the nature of the threat, giving three examples of Chinese AI-enabled products of potential concern. It also explains why India is particularly vulnerable to the threat.
In the third section, the report looks at the two key democratic responses to the challenge of AI: on the one hand, US efforts to counter both China’s development of advanced AI technologies and the threat from Chinese technology already present in the US digital ecosystem; on the other, a draft EU Regulation to protect the fundamental rights of EU citizens from the pernicious effects of AI. The fourth section of the report proposes a framework for triaging and managing the risk of China’s authoritarian AI-enabled products and services embedded in democratic digital ecosystems. The final section acknowledges complementary efforts to mitigate the PRC threat to democracies’ digital ecosystems.
27 Jul 2023