Tag Archive for: Artificial Intelligence

For long-term AI ambitions, Australia should think nuclear

Australia’s two major parties are divided over nuclear energy and the future mix of the nation’s power sources. But both are overlooking Australia’s opportunity to power the next generation of AI models.

During the election campaign, the Liberals advocated adopting nuclear power as an alternative green energy source for Australia, especially using small-scale nuclear power generators. Labor criticised the plan as too vague and the technology as too immature, and said it would continue to rely on fossil fuels to smooth the transition to green energy by 2050.

Small-scale nuclear power technology has been proven in the many submarines cruising the world’s oceans, and some question whether Labor’s plans will even get Australia to net zero by 2050. Either way, Australia is missing out on a massive national security and economic opportunity.

The training of AI models requires significant power. OpenAI’s GPT-4 was estimated to use 50 times more electricity to train than its GPT-3 model. This trend is likely to continue for frontier models. By 2030, power consumption by data centres is set to double, and AI is forecast to consume more than 9 percent of total US power generation.
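To make the scale concrete, here is a rough back-of-envelope calculation in Python. The GPT-3 training figure (about 1,300 MWh) and the household consumption figure are approximate assumptions drawn from published estimates, not from this article; only the 50-times multiplier comes from the paragraph above.

```python
# Illustrative back-of-envelope estimate only; input figures are assumptions.
GPT3_TRAINING_MWH = 1_300        # approximate published estimate for GPT-3
SCALE_FACTOR = 50                # GPT-4 estimated at ~50x GPT-3 (cited above)
HOUSEHOLD_MWH_PER_YEAR = 6       # assumed annual usage of an average household

gpt4_training_mwh = GPT3_TRAINING_MWH * SCALE_FACTOR
households = gpt4_training_mwh / HOUSEHOLD_MWH_PER_YEAR

print(f"Estimated GPT-4 training energy: {gpt4_training_mwh:,} MWh")
print(f"Roughly the annual electricity of {households:,.0f} households")
```

Even under these conservative assumptions, a single frontier training run consumes as much electricity as a small town uses in a year, and the figure grows with each model generation.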

In the United States, AI companies are worried that there won’t be enough electricity base load (the background power generation capacity) to train future models in the West. It is not surprising that OpenAI CEO Sam Altman’s side project is a nuclear fusion startup.

To help, Britain is planning AI growth zones to accelerate the building of AI data centres, repurposing post-industrial areas that have an oversupply of electricity following the closure of manufacturing facilities. These zones will benefit from a streamlined planning approvals process and accelerated provisioning of clean power. However, Britain is still limited by base load power supply. Hinkley Point C (Britain’s latest nuclear power project) is delayed and over budget. In some regions, there isn’t enough power to support more high-performance computers.

To train the next generation of models, AI companies are considering moving to countries with excess power—most likely in the Middle East. Saudi Arabia and other Gulf states are trying to attract AI companies with the promise of low-cost or completely free power based on their oil production. However, training in these countries opens companies up to political interference and influence, and their physical data centres would remain vulnerable to various types of cyberattack.

China understands that AI powered by clean nuclear technology is the future. In February 2023, China had 55 nuclear plants in operation, 22 under construction and more than 70 planned. Using its centralised planning process, it can build nuclear power stations at will and is more likely to be able to scale power production in line with growing requirements.

If AI companies want to keep training in democratically aligned countries and use green electricity sources, where can they train the next generation of models? Britain is too small and broke to fund more nuclear power, and the US is too science-sceptical and politically divided to adopt such a forward-looking approach. Canada has the space and an abundance of fossil fuels, but in the short term Australia can provide more consistent green electricity using solar. Australia sits in the right security and political spheres of influence to develop the technology without becoming a nuclear threat. The United States and other international defence and security partners would baulk at smaller, unaligned countries developing such nuclear technologies.

Much like the supply of critical minerals and rare earths, Australia is well positioned to be the energy supplier to the Five Eyes nations and other democratically aligned Western countries. Australia has ample space to build power stations and can mine uranium domestically. Other forms of power generation, including such green alternatives as solar panels—even combined with massive battery storage—won’t provide enough base load power to train AI models in the long term.

Yes, in the short term it wouldn’t be cheap and it wouldn’t meet AI’s immediate power requirements before 2030, but nuclear power is a long-term opportunity for Australia. Plus, small-scale nuclear power generators are significantly cheaper and quicker to build, and international sales would drive growth in Australia’s manufacturing sector. Australia also needs more policies to onshore large data centre companies such as AWS and Google.

While this election showed that nuclear power remains a political football, our adversaries are seizing the opportunity to build for the long term. Market factors will force the training of AI models into areas with an oversupply of electricity, playing even further into their hands. Unfortunately, we don’t have the luxury of time to argue—we need to get building.

AI is changing Indo-Pacific naval operations

Artificial intelligence is poised to significantly transform the Indo-Pacific maritime security landscape. It offers unprecedented situational awareness, decision-making speed and operational flexibility. But without clear rules, shared norms and mechanisms for risk reduction, AI could act as a destabilising force—particularly in contested waters where tensions are already high.

In the Indo-Pacific, larger countries are adopting AI to monitor and respond to threats. But the opacity of AI decision-making risks escalation in contested spaces, and smaller states without the technological capacity risk being left behind. Furthermore, the lack of clear legal guidelines means there is no agreement on responsible use, which compounds the danger.

Indo-Pacific partners must work together to develop standards of AI use in naval operations to avoid escalation and conflict in the region.

At the operational level, AI offers substantial enhancements in the ability to monitor, track and interpret activities across oceanic spaces. AI-powered systems can rapidly analyse satellite imagery, sonar data and automatic identification system signals to identify naval deployments, ships surreptitiously engaged in illicit activities, or civilian vessels used for strategic deception. These capabilities could bolster maritime law enforcement, countering illegal fishing, smuggling or grey-zone coercion. They could also support real-time monitoring of strategic chokepoints such as the Strait of Malacca, the Taiwan Strait or the South China Sea.
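One of the simpler techniques behind such monitoring is flagging vessels whose automatic identification system transmissions stop for unusually long periods, which can indicate a ship ‘going dark’ to hide illicit activity. The sketch below is a minimal, hypothetical illustration of that idea; the six-hour threshold and the sample data are assumptions, not any navy’s actual system.

```python
from datetime import datetime, timedelta

# Minimal sketch: flag gaps in a vessel's AIS transmissions that may indicate
# it deliberately 'went dark'. Threshold and sample data are illustrative only.
DARK_THRESHOLD = timedelta(hours=6)

def find_dark_periods(ais_pings):
    """Return (start, end) pairs where consecutive AIS pings are further apart
    than DARK_THRESHOLD."""
    pings = sorted(ais_pings)
    return [(earlier, later)
            for earlier, later in zip(pings, pings[1:])
            if later - earlier > DARK_THRESHOLD]

# Hypothetical vessel track with a ten-hour silence.
track = [
    datetime(2025, 3, 1, 0, 0),
    datetime(2025, 3, 1, 2, 0),
    datetime(2025, 3, 1, 12, 0),   # ten-hour gap -> flagged
    datetime(2025, 3, 1, 13, 0),
]
print(find_dark_periods(track))
```

Real systems fuse such gap detection with radar, satellite imagery and historical traffic patterns, but the underlying logic of anomaly detection is the same.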

China is committed to integrating AI into its maritime strategy. Its navy and coast guard are using AI to enhance unmanned surface and underwater vessel operations, automate maritime surveillance across disputed waters and support rapid data fusion in joint maritime command centres. Beijing’s use of AI to combine satellite and oceanographic data allows it to monitor adversaries and assert maritime claims with greater confidence and persistence. This increases regional navies’ concerns about China’s ability to dominate the decision-making cycle in contested waters.

The United States, Australia, Japan, South Korea and India are responding with their own AI-enabled initiatives. The US Navy’s Project Overmatch and Australia’s Ghost Shark undersea drone project are examples of efforts to integrate AI into distributed maritime operations, autonomous platforms and decision-support tools. AI-driven swarming technology, cooperative autonomous undersea systems and real-time target classification are likely to become central to allied force postures across the Indo-Pacific. Additionally, AI-powered centres for combining data, such as India’s Information Fusion Centre—Indian Ocean Region, are being used to coordinate multinational responses to maritime threats in real time.

However, these technological advancements are not without risks. Integration of AI into maritime systems increases the possibility of escalation through automation and miscalculation. In contested zones, such as the South China Sea, naval and paramilitary forces from multiple countries already operate in close proximity. Autonomous vessels or decision-support algorithms in such places could misinterpret intent or escalate incidents due to flawed pattern recognition, bias in data sets, or overly aggressive operational parameters. The lack of transparency into the decision-making processes of AI systems—particularly those based on deep learning—may complicate efforts to attribute actions, assign responsibility, or de-escalate tensions once a confrontation begins.

The proliferation of AI in the maritime sphere poses challenges for smaller states with limited technological capacity. Southeast Asian nations such as Indonesia, the Philippines and Vietnam may find themselves at a disadvantage as larger powers deploy AI-enhanced naval assets that can dominate surveillance, disrupt communications, or project force with fewer personnel and faster reaction times. Without regional frameworks to ensure transparency, interoperability, or norms of conduct, the Indo-Pacific could devolve into a tiered security environment where technological inequality exacerbates strategic asymmetry.

Moreover, the lack of international regulation on the use of AI in naval systems creates a dangerous legal vacuum. Key questions remain unanswered: what constitutes appropriate human control over AI-enabled maritime systems? How should accountability be assigned for incidents involving autonomous vessels? Can existing rules on the use of force at sea be adapted to AI-enhanced environments?

Without coordinated answers, states may pursue national strategies that prioritise speed and advantage over safety and stability.

To mitigate these risks, regional and extra-regional powers must consider developing a set of AI-focused maritime confidence-building measures. These could include pre-notification of autonomous vessel deployments, joint AI stress-testing exercises to assess the reliability and behaviour of unmanned systems in shared waters, and regional agreements on minimum levels of human oversight. Multilateral forums, including the ASEAN Defence Ministers’ Meeting-Plus, the Indian Ocean Rim Association, and even informal groupings such as the Quad, could serve as platforms for these discussions.

The future of maritime security in the Indo-Pacific will not only be shaped by the size of navies or the reach of fleets but increasingly by the logic—and the limits—of the algorithms guiding them.

AI is reshaping security, and the intelligence review sets good direction

The 2024 Independent Intelligence Review found the National Intelligence Community (NIC) to be highly capable and performing well. So, it is not a surprise that most of the 67 recommendations are incremental adjustments and small but nevertheless important recalibrations. However, to thrive in a contested, fragile and volatile security environment, more must be done, and done collectively.

The review found that despite great progress, some practical and cultural barriers impede the interoperability and adoption of data and technology. Most of the review’s technology recommendations are clear-cut and important steps to overcome this.

Robust data is the foundation for good intelligence and a versatile, strategic asset. Thus, the review recommends that NIC agencies develop a top secret cloud transition strategy and support data cataloguing efforts to maximise interoperability.

The AI recommendations focus on AI governance principles, frameworks, senior officer responsibility and, crucially, NIC-wide senior officer training to understand ‘applications, risks and governance requirements of AI for intelligence’. These measures will hopefully establish an educated and accountable leadership cohort in the NIC that can drive AI adoption while thinking critically about risks and effective governance. As always, execution will be key—assuming the government in office after the 3 May election accepts the recommendation.

Despite the global hype about AI, the review acknowledges its significant risks as well as its many opportunities. It echoes my book in suggesting that, in practice, NIC agencies are predominantly using AI for collection and for such analytical functions as triage and translation. Given the review’s focus on improving the intelligence-policy interface, it was curious that it said little about how technology could contribute beyond using AI to curate intelligence for consumers.

Using AI to ‘transform and improve the intelligence cycle’ is necessary. However, there is space for a more imaginative approach to using it—for example, identifying and monitoring new threats and anomalies, and providing early warning.

Curiously, the basic question of what it means to know something—fundamental to both AI and intelligence—didn’t feature. While it was excellent to see misinformation and disinformation addressed, albeit not publicly, I’d argue government and intelligence expertise on them is already crucial, not just ‘becoming essential’, as the review puts it. It was also interesting to see the evaluation of open-source intelligence (OSINT), an important function, kicked down the road to the next review.

We should approach technology as an ecosystem. This is reflected in the review, which noted: ‘NIC needs a stronger enterprise approach to technology, one that recognises and exploits the interdependencies of the technology ecosystem.’ We need to develop a technology strategy that articulates the vision, requirements and priorities, as well as current and future technology risks. While that seems a little vague, if it were well executed it would be a big step forward.

Technology, data and privacy were, in the main, addressed well as threats in the scene-setting section, where the review highlighted technology’s foundational role in Australia’s context, alongside global contest and fragmentation and transnational challenges like climate change. But few recommendations dealt with current issues with data harvesting, cyberattacks or AI in biotech, let alone their use in conflict. Yet, our technology will be a target and our dependencies on data, technology and AI infrastructure will be weak points for adversaries to exploit.

There is, naturally, a section on innovation. The recommendation that government scope the establishment of a national-security-focused technology investment fund is welcome and balances the narrowing of the Joint Capability Fund, which has had mixed results.

While exorbitantly expensive, secure spaces outside Canberra are critical to the mission. The review addressed options for integrated locations outside Canberra, but the lack of urgency on this is a missed opportunity. In conflict, they’d be not only ‘helpful’, as per the review; they’d be essential. They also support recruitment and retention, stakeholder engagement, collaborative operational and strategic work, and contingency planning.

The technology-related oversight recommendations are noteworthy and substantial. I welcome the recommendation that the first full-time Independent National Security Legislation Monitor undertake a review of the NIC’s use of AI to inform legislative and policy changes. I’ve long advocated for a panel of technology advisers to serve the oversight bodies so was pleased to see this recommended, although I would have preferred one advisory body accessible to NIC and oversight agencies. Perhaps to elected officials too.

The terms of reference explicitly included the NIC’s preparedness for crisis and conflict. This is addressed throughout—and I agree with the co-authors that most adaptation will happen in a crisis or conflict, not before—but I also think more could be done to prepare. OSINT, disinformation and AI or technology frameworks and strategies are important, but action will need to be expedited if, or as, conflict looms. Increased centralisation, while straightforward for education, workforce and policy, could be an asset or a risk in conflict, especially with varying levels of technology sophistication and expertise.

Technology’s effects are not occurring in isolation. They converge with a crumbling global order and increasing contest and uncertainty in international trade, creating a tinderbox for exponential change. Amid a vacuum of global leadership, there is no clear path for non-transactional collaboration on technology or climate change; the same possibly holds true for intelligence.

The review dealt well with how technology is affecting our security environment and how the NIC can harness it. The recommendations are sensible and considered improvements for an already world-class intelligence enterprise. The real test, if it comes, will be how the enterprise performs in conflict.

As China’s AI industry grows, Australia must support its own

The growth of China’s AI industry gives it great influence over emerging technologies. That creates security risks for countries using those technologies. So, Australia must foster its own domestic AI industry to protect its interests.

To do that, Australia needs a coordinated national AI strategy grounded in long-term security, capability building and international alignment.

The Australian government’s decision in February to ban Chinese AI model DeepSeek from government devices showed growing concern about the influence of foreign technology. While framed as a cybersecurity decision, the ban points to a broader issue: Chinese-linked platforms are already present across Australia, in cloud services, academic partnerships and hardware supply chains. Banning tools after they’re embedded is too late. The question is how far these dependencies reach, and how to reduce them.

China’s lead in AI isn’t just due to planning and investment. It has also benefited from state-backed strategies that exploit gaps in international rules.

In early 2025, OpenAI accused DeepSeek of using its proprietary models without permission. Weeks later, a former Google engineer was indicted in the United States for stealing AI trade secrets to help launch a Chinese startup. A US House of Representatives Committee report logged 60 cases of Chinese-linked cyber espionage across 20 states. In 2023, Five Eyes intelligence leaders directly accused Beijing of sustained intellectual property theft campaigns targeting advanced technologies. And a recent CrowdStrike report documented a 150 percent surge in China-backed cyber espionage in 2024, with critical industries hit hardest.

Such methods help Chinese firms accelerate development and release advanced versions of tools first created elsewhere.

ASPI’s Tech Tracker shows the effect of these strategies. China leads Australia by a wide margin in research output and impact in such fields as machine learning, natural language processing, AI hardware and integrated circuit design. These technologies form the foundation of modern AI systems and academic disciplines.

And the research gap is growing. China produces more AI research and receives more citations, allowing it to shape the global AI agenda. In contrast, Australia’s contribution is limited in advanced data analytics, adversarial AI and hardware acceleration. And Australia is dependent on imported ideas and models when it comes to natural language processing and machine learning.

China also outpaces Australia in talent acquisition. In every major AI domain, including natural language processing, integrated circuits and adversarial AI, China is a top destination for leading researchers. Australia struggles to recruit and retain high-end AI talent, which limits its ability to scale local innovation.

China’s tech giants are closely aligned with state goals. Following the strategy of military-civil fusion, Chinese commercial breakthroughs are routinely directed into national security or surveillance applications. That creates risk when their technologies are used in third countries, through applications in transport, education, health and infrastructure.

Australia is accelerating domestic AI development but lacks a coordinated national strategy. The country remains heavily reliant on foreign-built systems and opaque partnerships that carry long-term strategic and economic costs. This embeds AI systems that Australia does not control into Australia’s critical infrastructure. The more dependent Australia is on these systems, the more it will struggle to disentangle itself in the future.

A coordinated national strategy should rest on four key pillars.

First, AI infrastructure should be treated as critical infrastructure. This includes not just hardware, but also training datasets, foundational models, software libraries and deployment environments. A government-led audit should trace where AI systems are sourced, who maintains them and what hidden dependencies exist, especially for public services, utilities and strategic industries. This baseline is essential for identifying risks and opportunities.

Second, Australia should invest in trusted alternatives and sovereign capabilities. Australia alone cannot build an entire AI stack—including data infrastructure, machine learning frameworks, models and applications—but it can co-develop secure technologies with trusted allies. It should use partnerships such as AUKUS and the Quad to explore open foundational models, ways to secure compute infrastructure, and the development of interoperable governance frameworks.

Third, Australia must manage research collaboration more carefully. Australian universities and labs are globally respected, but they are navigating a geopolitical landscape with little structured guidance. Building on 2019 guidelines to counter foreign interference in universities, the government should establish clearer rules around high-risk partnerships. For example, it could develop tools to assess institutional exposure and track dual-use research. Risk management should not be punitive but rather support researchers to make informed choices.

Fourth, Australia can lead on standard-setting in the Indo-Pacific. Many countries in the region also wonder how to harness AI while preserving autonomy, enhancing prosperity and minimising security risks. Australia can play a regional leadership role by promoting transparent development practices, fair data use and responsible AI deployment.

AI is shaping everything from diplomacy to defence. Australia cannot be dependent on foreign-built models. The question is whether Australia wants to shape those systems or be shaped by them.

How to spot AI influence in Australia’s election campaign

Be on guard for AI-powered messaging and disinformation in the campaign for Australia’s 3 May election.

And be aware that parties can use AI to sharpen their campaigning, zeroing in on issues that the technology tells them will attract your vote.

In 2025, there are still ways to detect AI-generated content. Voters can use this knowledge. So can the authorities trying to manage a proper election campaign. The parties can, too, as they try to police each other. In the digital age, we must be vigilant against various tactics that are strengthened or driven by AI and aim to manipulate and deceive.

Some tactics are already heavily associated with AI. Deepfakes—images or videos that use hyper-realistic fabricated visuals to deceive—are a particularly concerning example. Automated engagement is another example, involving AI-driven bots and algorithms to amplify likes, shares and comments to create the illusion of widespread support.

But political actors are now using AI to improve tried-and-tested influence tactics. These methods include:

—Sponsored posts that mimic authentic content, such as news, to subtly promote a product, service or agenda without clear disclosure, potentially influencing opinions;

—Clickbait headlines that are crafted to grab attention and drive clicks, often exaggerating claims or omitting key context to lure readers;

—Fake endorsements providing false credibility, authenticity or authority through fabricated testimonials or endorsements;

—Selective presentation of facts, skewing narratives by focusing on specific data points that support one perspective while omitting contradictory evidence; and

—Emotionally charged content aimed at provoking strong reactions, clouding judgment and influencing impulsive decisions.

Deepfakes can be identified by inconsistencies in lighting, unnatural facial movements or mismatched audio and lip-syncing. Tools such as reverse image search or AI detection software can help verify authenticity. Automated engagement typically involves accounts that have generic usernames, minimal personal information, and display repetitive posting patterns. These are strong indicators that an account may be an AI-driven bot.
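As a rough illustration of how the ‘repetitive posting’ and ‘generic username’ indicators can be checked programmatically, the sketch below scores an account against three of the signals listed above. The thresholds and example data are invented for illustration; real detection tools are considerably more sophisticated.

```python
from collections import Counter

# Illustrative bot-likeness score based on the indicators above. Thresholds
# are assumptions, not drawn from any real detection tool.
def bot_likeness(username, bio, posts):
    score = 0
    # Generic username: a handle padded with a long run of digits.
    if sum(ch.isdigit() for ch in username) >= 5:
        score += 1
    # Minimal personal information in the profile.
    if len(bio.strip()) < 10:
        score += 1
    # Repetitive posting: more than half of posts are identical.
    if posts and Counter(posts).most_common(1)[0][1] / len(posts) > 0.5:
        score += 1
    return score  # 0 = few indicators, 3 = strong indicators

print(bot_likeness("voter94857312", "", ["Vote for X!"] * 8 + ["Great rally today"]))
```

No single indicator is proof of automation, which is why such heuristics are combined and weighed together before an account is treated as suspect.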

Sponsored posts can be checked for disclaimer labels such as ‘sponsored’ or ‘ad’. Users should be cautious of posts that seem overly polished or perfectly tailored to their interests.

Clickbait headlines, if they seem too outrageous or emotionally charged, should be read critically to verify their claims. Cross-checking with reputable sources can help users to spot inaccuracies. One-sided arguments and missing context are both strong indicators of a selective presentation of facts; consulting multiple sources can help build a balanced view of the issue.

Fake endorsements can be verified by checking the official channels of the purported endorser. Inconsistencies in language or tone between the channels and the post may indicate fabrication.

For parties, AI is offering transformative opportunities for campaigning. Data-driven targeting can help to more effectively analyse voter demographics, preferences, and behaviours. This allows parties to craft highly targeted messages, ensuring campaigns reach the right audience with the right message.

Predictive analytics forecast voter turnout and behaviour, helping campaigns focus efforts on swing regions or undecided voters. For campaigns aiming to narrow their focus, AI can help to craft personalised communication. This content is tailored to individual voters, making interactions feel more personal and engaging.

AI can also be used to monitor social media and public sentiment, providing real-time feedback. These instant insights into voter reactions allow campaigns to adapt their strategies on the fly. Beyond analytics and outreach, AI programs can be developed to optimise campaign budgets by identifying the most impactful channels and strategies, reducing waste and ensuring effective resource allocation.
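As a crude illustration of the sentiment-monitoring idea, the sketch below counts positive and negative keywords in posts that mention a campaign issue. The word lists and sample posts are invented; real campaign tools rely on trained language models rather than keyword matching.

```python
# Crude keyword-based sentiment monitor; word lists and posts are illustrative.
POSITIVE = {"support", "great", "love", "agree"}
NEGATIVE = {"angry", "worried", "oppose", "terrible"}

def issue_sentiment(posts, issue):
    """Return a score between -1 and 1 for posts mentioning the issue."""
    pos = neg = 0
    for post in posts:
        words = set(post.lower().split())
        if issue.lower() in words:
            pos += len(words & POSITIVE)
            neg += len(words & NEGATIVE)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "I support the housing policy",
    "worried about housing costs",
    "that housing announcement was terrible",
]
print(issue_sentiment(posts, "housing"))   # about -0.33: net-negative sentiment
```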

Finally, while it can be used to mislead, automated engagement has ethical applications. Through chatbots and virtual assistants powered by AI, parties can handle voter queries, provide information and streamline processes such as voter registration.

AI is reshaping political campaigning, offering unprecedented opportunities and challenges. While it sharpens strategies and enhances efficiency, it also necessitates vigilance to ensure ethical use and protect against manipulation. By staying informed and critical, individuals can navigate this evolving landscape with confidence.

As tensions grow, an Australian AI safety institute is a no-brainer

Australia needs to deliver on its commitment under the Seoul Declaration to create an Australian AI safety, or security, institute. Australia is the only signatory to the declaration that has yet to meet its commitments. Given the broader erosion of global norms, now isn’t the time to break commitments to allies and partners such as Britain, South Korea and the European Union.

China has also entered this space: it has created an AI safety institute, signalled intent to collaborate with the Western network of such organisations and commented on the global governance of increasingly powerful AI systems.

Developments in the United States further demand an Australian safety institute. The US is radically deregulating its tech sector, taking risky bets on integrating AI with government, and racing to beat China to artificial general intelligence—a theoretical system that would rival human thinking. Collectively, these trends mean that AI risks—such as cyber offensive capability; widespread availability of chemical, biological, radiological and nuclear weapons; and loss of control over advanced systems—are less likely to be addressed at their source: the frontier labs. Australia needs to act.

Fortunately, we have options for addressing AI safety and security concerns. Minister for Industry and Science Ed Husic’s ‘mandatory guardrails’ consultation mooted an Australian AI Act that would align with the EU and impose basic requirements on high-risk AI models. Australia can foster its domestic AI assurance technology industry, and we can expand our productive involvement in multilateral approaches, ensuring that safety and security remain a global priority.

While an Australian AI Act has policy merit, it might face a rocky political path. In March, the Computer & Communications Industry Association—a peak body with members including Amazon, Apple, Google, X and Meta—urged US President Donald Trump to bring the News Media Bargaining Code into a US-Australia trade war. In the same submission, the association complained about the proliferation of AI laws and the proposed Australian regulation of high-risk AI models.

An Australian AI safety institute would be an immediate way to protect Australian interests and create a new path to collaborate with our allies without these political risks. In addition to giving us a seat at the table, such an institute would reduce our dependency on others for technical AI safety and security. In other security domains, we’ve seen dependency used as a bargaining chip in transactional negotiations. This is still something we have time to avoid for AI.

Domestic pressure is building. In March, Australia’s AI experts united in a call for action, including the establishment of an Australian safety institute and an Australian AI Act. The letter will remain open to expert and public support until the election.

Australian AI expert and philosopher Toby Ord, a senior researcher at Oxford University and author of The Precipice: Existential risks and the future of humanity, said:

Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future. An [Australian AI safety institute] would allow Australia to participate on the world stage in guiding this critical technology that affects us all.

And it’s not just the experts. Australians are more worried about AI risks than the people of any other nation for which we have data.

The experts and the public are right. It’s realistic that we will see transformative AI during the next term of government, though expert opinion varies on the exact timing. Regardless, the window for Australia to have any influence over these powerful and risky systems is rapidly closing.

Britain recently renamed its ‘AI Safety Institute’ as the ‘AI Security Institute’ but without significantly changing its priorities. The institute targets AI capabilities that enable malicious actors and the potential loss of control of advanced AI systems, including the ability to deceive human operators or autonomously replicate.

Given that these are fundamentally national security issues, perhaps ‘security’ was a better name from the start and appropriate for Australia to use for our institute.

The US has many chip vulnerabilities

Although semiconductor chips are ubiquitous nowadays, their production is concentrated in just a few countries, and this has left the US economy and military highly vulnerable at a time of rising geopolitical tensions. While the United States commands a leading position in designing and providing the software for the high-end chips used in AI technologies, production of the chips themselves occurs elsewhere. To head off the risk of catastrophic supply disruptions, the US needs a coherent strategy that embraces all nodes of the semiconductor industry.

That is why the CHIPS and Science Act, signed by President Joe Biden in 2022, provided funding to reshore manufacturing capacity for high-end chips. According to the Semiconductor Industry Association, the impact has been significant: currently planned investments should give the US control of almost 30 percent of global wafer fabrication capacity for chips below ten nanometres by 2032. Only Taiwan and South Korea currently have foundries to produce such chips. China, by contrast, will control only 2 percent of manufacturing capacity, while Europe and Japan’s share will rise to about 12 percent.

But US President Donald Trump is now trying to roll back this strategy, describing the CHIPS Act—one of his predecessor’s signature achievements—as a waste of money. His administration is instead seeking to tighten the export restrictions that Biden introduced to frustrate China’s AI ambitions.

It is a strategic mistake to de-emphasise strengthening domestic capacity through targeted industrial policies. Coercive measures against China have not only proved ineffective but may even have accelerated Chinese innovation. DeepSeek’s highly competitive models were apparently developed at a fraction of the cost of OpenAI’s. A substantial share of the semiconductors used in DeepSeek’s R1 model consists of chips that were smuggled through intermediaries in Singapore and other Asian countries, and DeepSeek relied on clever engineering techniques to overcome the remaining hardware limitations it faced. Meanwhile, Chinese tech giants such as Alibaba and Tencent are developing similar AI models under similar supply constraints.

Even before the DeepSeek breakthrough, there were doubts about the effectiveness of US trade restrictions. The Biden administration’s export ban, adopted in October 2022, targeted chips smaller than 16nm, banning not only exports of the final product, but also the equipment and the human capital needed to develop them. Less than a year later, in August 2023, Huawei launched a new smartphone model (the Mate 60) that uses a 7nm chip.

Even if China no longer has access to the most advanced lithography machines, it can still use old ones to produce 7nm chips, albeit at a higher cost. While these older machines do not allow it to go below 7nm (Taiwan Semiconductor Manufacturing Company is working on 1nm chips), Huawei and DeepSeek’s achievements are a cautionary tale. China now has every reason to develop its own semiconductor industry, and it may have made more progress than we think.

To reduce its own supply-chain vulnerabilities, the US cannot rely on an isolationist reshoring-only approach. Given how broadly the current supply chain is distributed, leveraging existing alliances is the only viable way forward. ASML, the Dutch firm with a near-monopoly on the high-end lithography machines used to make the most advanced chips, cannot simply be recreated overnight.

So far, the US has focused on reducing security risks related to the most sophisticated chips, giving short shrift to the higher-node chips that are needed to run modern economies. Yet these legacy chips (those above 28nm) are key components in cars, airplanes, fighter jets, medical devices, smartphones, computers and much more.

According to the Semiconductor Industry Association, China is expected to control almost 40 percent of global wafer fabrication capacity for these types of chips by 2032, while Taiwan, the US and Europe will account for 25 percent, 10 percent, and 3 percent, respectively. China will thus control a major strategic chokepoint, enabling it to bring the US economy to a halt with its own export bans. It also will have a sizable military edge, because it could impair US defences by cutting off the supply of legacy chips. Finally, China’s security services could put back doors into Chinese-made chips, allowing for espionage or even cyberattacks on US infrastructure.

Compounding the challenge, Chinese-made chips are usually already incorporated into final products by the time they reach the US. If the US wants to curtail imports of potentially compromised hardware, it will have to do it indirectly, tracking down chips at customs by dismantling assembled products. That would be exceedingly costly.

Fortunately, the US does not lack policy tools to reduce its vulnerabilities. When it comes to military applications of legacy chips, it can resort to procurement restrictions, trade sanctions (justified on national-security grounds), and cybersecurity defences. As for expanding domestic production capacity, it can use anti-dumping and countervailing duties to counter unfair Chinese practices, such as its heavy subsidisation of domestic producers.

Chips, and the data they support, will be the oil of the future. The US needs to devise a comprehensive strategy that addresses the full range of its current vulnerabilities. That means looking beyond the most advanced chips and the AI race.

South Korea has acted decisively on DeepSeek. Other countries must stop hesitating

South Korea has suspended new downloads of DeepSeek, and it was right to do so.

Chinese tech firms operate under the shadow of state influence, misusing data for surveillance and geopolitical advantage. Any country that values its data and sovereignty must watch this national security threat and take note of South Korea’s response.

Every AI tool captures vast amounts of data, but DeepSeek collects data unnecessary to its function as a simple chatbot. The company was caught over-collecting personal data and failed to be transparent about where that data was going. This typifies China’s lack of transparency about data collection, usage and storage.

South Korea’s National Intelligence Service flagged the chatbot for logging keystrokes and chat interactions, which were all stored on Chinese-controlled servers.

Once data enters China’s jurisdiction, it’s fair game for Beijing’s intelligence agencies. That’s not paranoia; it’s the law. Chinese companies must hand over data to the government upon request. South Korea saw the writing on the wall and acted before it was too late.

Data in the wrong hands can be weaponised. By cross-referencing DeepSeek’s collected data with other stolen datasets, Chinese intelligence agencies could build profiles on foreign officials, business leaders, journalists and dissidents. Keystroke tracking could help to identify individuals even when they use anonymous communication platforms. AI-powered analysis could pinpoint behavioural patterns, making it easier to manipulate public opinion or even blackmail individuals with compromising data.

If this sounds familiar, you’re not mistaken. Huawei was banned from operating 5G networks in multiple countries based on similar concerns. TikTok has come under scrutiny for its ties to the Chinese government. China has spent years perfecting cyber-espionage and DeepSeek appears to be the latest tool in its arsenal, joining the growing list of Chinese tech products raising red flags.

Chinese actors have displayed a pattern of digital intrusion. Recent events include the Volt Typhoon and Salt Typhoon operations, which targeted US digital infrastructure and telecom networks. These attacks compromised the data of more than one million people, including government officials. Looking to Europe, Germany fell victim to Chinese-backed hackers breaching its federal cartography agency.

China is using AI tools for influence, data gathering and geopolitical manoeuvring. AI is a versatile tool for controlling the flow of information.

The risk goes far beyond espionage. It extends to economic coercion and intellectual property theft. For example, multinational companies relying on AI-powered tools may unknowingly send sensitive business strategies to foreign adversaries. Government agencies may unknowingly feed pieces of information that, in aggregate, would be classified into an AI system that Beijing can tap. The consequences would be far-reaching and deeply troubling.

What if South Korea had looked the other way? Millions of South Korean citizens would have been at risk of Chinese coercion and exposed to data harvesting under the guise of harmless AI. In an era where data shapes power, handing control to foreign entities is dangerous.

Some countries are beginning to grasp these threats. India and Australia are ramping up scrutiny of foreign AI applications, and Australia and Taiwan have banned DeepSeek on government devices. The European Union is tightening regulations to demand transparency and accountability for data usage.

The United States, on the other hand, is still deliberating. President Donald Trump has focused on AI as a push for Silicon Valley to lift its game, rather than on the technology’s national security implications. US lawmakers are beginning to propose restrictions on AI tools linked to foreign adversaries. DeepSeek has already been banned for Texas state officials and US Navy personnel, for example, because of its links to the Chinese government.

However, regulatory action has been slow to gain traction, caught in a web of political disagreements and lobbying pressures. Meanwhile, security agencies warn that inaction could leave critical infrastructure and government institutions vulnerable to AI-driven espionage. Without decisive policies, the US risks becoming not only a prime target for data manipulation and intelligence gathering, but a soft target. It must act to prevent another major data breach, before it finds itself reacting to one. Waiting is not an option.

China’s AI ambitions aren’t slowing down, and global vigilance must not flag. The battle for digital sovereignty is already underway, and governments that hesitate will find themselves at a disadvantage from both economic and security standpoints.

Act now or pay later. AI is the new frontier of global competition, and data is the ultimate weapon. Those who don’t secure it will face the consequences. South Korea made the right move—who’s next?

Southeast Asia faces AI influence on elections

Artificial intelligence is becoming commonplace in electoral campaigns and politics across Southeast Asia, but the region is struggling to regulate it.

Indonesia’s 2024 general election exposed both the actual harms of AI-driven politics and the overhyped concerns that distracted from its real dangers. As the Philippines and Singapore head to the polls in 2025, they can draw lessons from Indonesia’s experience while tailoring the insights to their own electoral landscapes.

While deepfakes dominated concerns in last year’s elections, a quieter threat loomed: unregulated AI-driven microtargeting. These covert and custom messages are delivered at scale via private channels or dark posts—targeted advertisements that don’t appear on the publisher’s page, making them difficult to track. This isolates recipients, making verification trickier. The risk is even greater in Southeast Asia, where fake news thrives amid low media literacy rates.

AI in Indonesia’s general election was more commonly used for image polishing and rebranding than for attacking opponents, though some attacks occurred. Prabowo Subianto, a retired military general known for his fiery nationalism, rebranded himself as a cuddly grandfather to soften his strongman image. This redirected the focus from substantive issues, such as corruption and economic challenges, to superficial narratives, including his cheerful dances.

Darker deepfakes also emerged, such as an audio clip of then-presidential candidate Anies Baswedan being scolded by the chair of the National Democrat Party, Surya Paloh. A video of the late president Suharto endorsing the Golkar party also went viral. This was controversial given Suharto’s dictatorship and violent record.

Microtargeting in Indonesia also notably focused on young voters rather than racial segments. Prabowo’s rebranding resonated with youth—usually first-time voters who lacked political maturity. This demographic emerged as an important voter segment, comprising about 60 percent of the total electorate in Indonesia’s 2024 general election.

The situation underscores the need for deliberate regulation. Currently, Indonesia’s Electronic Information and Transactions Law and Personal Data Protection Law address electronic content, including deepfakes, but lack election-specific AI guidelines. The General Election Committee could have helped, but it earlier declared AI regulation beyond its jurisdiction. Instead, Indonesia’s Constitutional Court now prohibits the use of AI for political campaigning.

Indonesia’s experience offers valuable lessons for its close neighbours. In May 2025, the Philippines will hold mid-term elections, and Singapore will have a general election this year too. Both nations are enforcing some rules, but their approaches differ from Indonesia’s.

Given the Philippines’ complex experience enforcing technology-related bans (some effective, others not so much), simply prohibiting AI during elections may not be ideal. Instead, the Commission on Elections is taking the transparency route, requiring candidates to register their digital campaign platforms—including social media accounts, websites and blogs—or face penalties. While the use of deepfakes is prohibited, AI is permitted with disclosure.

Singapore has implemented measures intended to provide comprehensive coverage. For instance, its Elections Bill complements its legislation on falsehoods by barring AI-generated deepfakes targeting candidates. However, the proposed legislation applies only during the official election period and excludes private conversations, potentially leaving gaps for disinformation outside election season, microtargeting through private messaging and deepfakes of influential non-candidates. Such vulnerabilities have already been observed in Indonesia.

These cases also highlight Southeast Asia’s uneven regulatory readiness. Tackling AI risks demands a stronger stance, more binding than a guide or roadmap, bolstered by whole-of-society collaboration to address complex challenges.

An article in Time argued the effect of AI on elections in 2024 was underwhelming, pointing to the quality—or lack thereof—of viral deepfakes. But Indonesia’s case suggests that power may lie not just in persuasiveness but also in appeal. Prabowo’s camp successfully used AI-generated figures to polish his image and distract people from real problems.

To dismiss the effect of AI is to miss the normalisation of unregulated AI-powered microtargeting. Last year revealed AI’s capability to target vulnerable yet sizable populations such as the youth in Indonesia, potentially beyond election cycles.

Blanket bans are an easy cop-out and may just encourage covert uses of AI. With alternatives available, campaigns can simply switch to other providers. When OpenAI banned the use of its tools for political campaigning and for generating images of real people, Prabowo turned to Midjourney, an AI image generator.

An alternative solution is to ensure transparent and responsible AI use in elections. This requires engaging those with contextual knowledge of the electorate—academics, industry leaders, the media, watchdogs and even voters themselves—alongside policymakers such as electoral commissions and national AI oversight bodies. But a key challenge remains: some Southeast Asian countries still lack dedicated AI regulatory bodies, or even AI strategies.

In the development of such bodies and strategies, public participation in AI policy consultations could ensure electorate concerns are heard. For instance, Malaysia’s National AI Office recently opened a call for experts and community representatives to help shape the country’s AI landscape. International organisations may also contribute through capacity building and stakeholder engagement, fostering relevant AI policies and regulations.

Certainly, further studies are needed for tailored AI governance for specific societies. But overall, adaptive and anticipatory regulation that evolves as technology advances will help mitigate AI-related risks in Southeast Asian elections and beyond.

DeepSeek is in the driver’s seat. That’s a big security problem

Democratic states have a smart-car problem. For those that don’t act quickly and decisively, it’s about to become a severe national security headache.

Over the past few weeks, about 20 of China’s largest car manufacturers have rushed to sign new strategic partnerships with DeepSeek to integrate its AI technology into their vehicles. This poses immediate security, data and privacy challenges for governments.  While international relations would be easier if it weren’t the case, China’s suite of national security and intelligence laws makes it impossible for Chinese companies to truly protect the data they collect.

China is the world’s largest producer of cars and is now making good-quality, low-cost and tech-heavy vehicles at a pace no other country can match. Chinese companies have also bought European industry stalwarts, including Volvo, MG and Lotus. Through joint ventures, China builds and exports a range of US and European car models back into global markets.

DeepSeek has struck partnerships with many large companies, such as BYD, Great Wall Motor, Chery, SAIC (owner of MG and LDV) and Geely (owner of Volvo and Lotus). In addition, major US, European and Japanese brands, including General Motors, Volkswagen and Nissan, have signed on to integrate DeepSeek via their joint ventures.

Australia is one of the many international markets where Chinese cars have gained enormous traction. More than 210,000 new Chinese-built cars were sold in Australia in 2024, and Chinese brands are set to take almost 20 percent of the market in 2025, up from 1.7 percent in 2019. Part of this new success is due to the government’s financial incentives encouraging Australians to purchase electric vehicles. China now builds about 80 percent of all electric vehicles sold in Australia.

Then, there are global markets where Chinese car brands are not gaining the market share they have in Australia (or in Russia, the Middle East and South America), but where Chinese-made cars are. This is the case in the United States and in Europe, for example. This is because many foreign companies use their joint ventures in China to sell China-made, foreign-branded cars into global markets. Such companies include Volkswagen, Volvo, BMW, Lincoln, Polestar, Hyundai and Kia.

Through its Chinese joint venture, Volkswagen will reportedly partner with DeepSeek. General Motors has also said it will integrate DeepSeek into its next-generation vehicles, including Cadillacs and Buicks. It’s unclear how many such cars may end up in overseas markets this year; that will likely depend on each country’s regulations.

It is not surprising that DeepSeek is a sought-after partner, with companies scrambling to integrate and build off its technology. It also shouldn’t have been a shock to see this AI breakthrough coming out of China—and we should expect a lot more. Chinese companies, universities and scientific institutions made impressive gains over the past two decades across most critical technology areas. Other factors, such as industrial espionage, have also helped.

But widespread integration of Chinese AI systems into products and services carries serious data, privacy, governance, censorship, interference and espionage risks. These risks are unlikely ever to go away, and few government strategies will be able to keep up.

For some nations, especially developing countries, this global integration will be a bit of a non-event. It won’t be seen as a security issue that deserves urgent policy attention above other pressing climate, human security, development and economic challenges.

But for others, it will quickly become a problem—a severe one, given the speed at which this integration could unfold.

Knowing the risks, governments (federal and state), militaries, university groups and companies (such as industrial behemoth Toyota) have moved quickly to ban or limit the use of DeepSeek during work time and on work devices. Regulators, particularly across Europe, are launching official investigations. South Korea has gone further than most, taking DeepSeek off local app stores after authorities reportedly discovered it was sending South Korean user data to Chinese company ByteDance, whose subsidiaries include TikTok.

But outside of banning employee use of DeepSeek, the integration of Chinese AI systems and models into data-hungry smart cars has not received due public attention. This quick development will test many governments globally.

Smart cars are packed full of the latest technology and are built to integrate into our personal lives. As users move between work, family and social commitments, they travel with a combination of microphones, cameras, voice recognition technology, radars, GPS trackers and, increasingly, biometric devices—such as those for fingerprint scanning and facial recognition to track driver behaviour and approve vehicle access. It’s also safe to assume that multiple mobile phones and other smart devices, such as smart watches, are present, some connecting to the car daily.

Then there is the information aspect—a potential influx of new AI assistants that will not always provide drivers with accurate and reliable information. At times, they may censor the truth or provide Chinese Communist Party talking points on major political, economic, security and human rights issues. If such AI models remain unregulated and continue to gain popularity internationally, they will expose future generations to systems that lack information integrity. As China’s internal politics and strategic outlook evolve, the amount of censored and false information provided to users of these systems will likely increase, as it does domestically for Chinese citizens.

Chinese-built and maintained AI assistants may soon sit at the heart of a growing number of vehicles driven by politicians, military officers, policymakers, intelligence officials, defence scientists and others who work on sensitive issues. Democratic governments need a realistic and actionable plan to deal with this.

It may be possible to ensure that government-issued devices never connect to Chinese AI systems (although slip-ups can happen when people are busy and rushing), but it’s hard to imagine how users could keep most of their personal data from interacting with such systems. Putting all security obligations on the individual will not be enough.

Australia has been here before. It banned ‘high-risk vendors’ from its 5G telecommunications network in 2018, and the debates leading up to and surrounding that decision taught us how valuable it was for the business community to be given an early and clear decision—something some other countries struggled with. Geostrategic circumstances haven’t improved since then; unfortunately, they’ve worsened.

Australia’s domestic policy settings are also driving consumers towards the very brands that will soon integrate DeepSeek’s technology, which politicians and policymakers have been told not to use. The sight of politicians from all parties test-driving BYD and LDV vehicles highlights that parliamentarians may need greater access to regular security briefings to ensure they are fully across the risks, with timely updates as those risks evolve.

Tackling this latest challenge head-on is a first-order priority that can’t wait until after the 2025 federal election.

Governments must ensure this issue is given immediate attention from their security agencies. This needs to include an in-depth assessment of the risks, as well as a consideration of future challenges. Partners and allies should share their findings with each other. A model for such an assessment is Australia’s experience in 2017 and 2018 in the lead-up to its 5G decision, when the Australian Signals Directorate conducted technical evaluations and scenario planning.

There is also a question of choice, or rather the lack of it, that needs deeper reflection from governments when it comes to high-risk vendors. Democratic governments should not allow the commercial sector to offer only one product if that product originates from a high-risk vendor. Yet there are major internet providers in Australia that supply only Chinese TP-Link modems for some internet services, and businesses that sell only Hikvision or Dahua surveillance systems (both Chinese companies were added to the US Entity List in 2019 because of their association with human rights abuses).

Not only do the digital rights of consumers have to be better protected; consumers must also be given genuine choices, including the right to not choose high-risk vendors. This is especially important in selecting vendors that will have access to personal data of citizens or connect to national critical infrastructure. Currently, across many countries, those rights are not being adequately protected.

As smart cars integrate AI systems, consumers deserve a choice on the origin of such systems, especially as censorship and information manipulation will be a feature of some products. Governments must also provide a commitment to their citizens that they are only greenlighting AI systems that have met a high standard of data protection, information integrity and privacy safeguards.

Which brings us back to DeepSeek and other AI models that will soon come out of China. If politicians, government officials, companies and universities around the world are being told they cannot use DeepSeek because such use is too high-risk, governments need to ensure they aren’t then forcing their citizens to take on those same risks, simply because they’ve given consumers no other choice.


The road to artificial general intelligence, with Helen Toner

Australian AI expert Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown University’s Center for Security and Emerging Technology (CSET). She also spent two years on the board of OpenAI, which put her at the centre of the dramatic events in late 2023 when OpenAI CEO Sam Altman was briefly sacked before being reinstated.

David Wroe speaks with Helen about the curve humanity is on towards artificial general intelligence, which would be equal to or better than humans at everything; progress with the new “reasoning” models; the arrival of China’s DeepSeek; the need for regulation; democracy and AI; and the risks of AI.

They finish by discussing what life will be like if we get AI right and it solves all our problems for us. Will it be great, or boring?

Artificial intimacy, persuasive technologies, and how bots can manipulate us

Today on Stop the World, David Wroe speaks with Casey Mock and Sasha Fegan from the US-based Center for Humane Technology. The CHT is at the forefront of efforts to ensure technology makes our lives better and strengthens rather than divides our communities. It also produces the podcast Your Undivided Attention—one of the world’s most popular forums for deep and serious conversations about the impact of technology on society. David, Casey and Sasha discuss the tragic case of 14-year-old Sewell Setzer, who took his life after forming an intimate attachment to an online chatbot. They also talk about persuasive technologies that influence users at deep emotional and even unconscious levels, disinformation, the increasingly polluted information landscape, deepfakes, the pros and cons of age verification for social media, and Australia’s approach to these challenges.

To read ASPI’s latest report, ‘Persuasive technologies in China: Implications for the future of national security’, please visit https://www.aspi.org.au/report/persuasive-technologies-china-implications-future-national-security

Warning: this episode discusses mental health and suicide, which some listeners might find distressing. If you need someone to talk to, help is available through a range of services, including Lifeline on 13 11 14 and Beyond Blue on 1300 22 46 36.

Stop the World: TSD Summit Sessions: How to navigate the deep fake and disinformation minefield with Nina Jankowicz

The Sydney Dialogue is over, but never fear, we have more TSD content coming your way! This week, ASPI’s David Wroe speaks to Nina Jankowicz, global disinformation expert and author of the books How to Lose the Information War and How to Be a Woman Online.

Nina takes us through the trends she is seeing in disinformation across the globe, and offers an assessment of who does it best, and whether countries like China and Iran are learning from Russia. She also discusses the links between disinformation and political polarisation, and what governments can do to protect the information domain from foreign interference and disinformation.

Finally, Dave asks Nina about her experience being the target of disinformation and online harassment, and the tactics being used against many women in influential roles, including US Vice President Kamala Harris and Australia’s eSafety Commissioner Julie Inman Grant, in attempts to censor and discredit them.

Guests:
David Wroe
Nina Jankowicz

Stop the World: TSD Summit Sessions: Defence, intelligence and technology with Shashank Joshi

In the final lead-in episode to the Sydney Dialogue (but not the last in the series!), ASPI’s Executive Director, Justin Bassi, interviews Shashank Joshi, Defence Editor at The Economist.

They discuss technology, security and strategic competition, including the impact of artificial intelligence on defence and intelligence operations, the implications of the no-limits partnership between Russia and China and increasing alignment between authoritarian states. They also cover the challenge of protecting free speech online within a framework of rules which also protects public safety.

They talk about Shashank’s latest Economist report ‘Spycraft: Watching the Watchers’, which explores the intersection of technology and intelligence, and looks at the history of intel and tech development, including advancements from radio to the internet and encryption.

The Sydney Dialogue (TSD) is ASPI’s flagship initiative on cyber and critical technologies. The summit brings together world leaders, global technology industry innovators and leading thinkers on cyber and critical technology for frank and productive discussions. TSD 2024 will address the advances made across these technologies and their impact on our societies, economies and national security.

Find out more about TSD 2024 here: https://tsd.aspi.org.au/

Mentioned in this episode: https://www.economist.com/technology-quarterly/2024-07-06

Guests:
Justin Bassi
Shashank Joshi