It’s not just software. Physical critical equipment can’t be trusted, either

Just auditing the software in critical equipment isn’t enough. We must assume that adversaries, especially China, will also exploit the hardware if they can.
The latest report on the dangers of China-made solar inverters is a strong reminder that the hardware itself must not be trusted. Reuters said on 15 May that investigators had discovered rogue communication modules embedded in Chinese solar inverters installed in critical US energy infrastructure.
These ghost machines, capable of wireless data transmission, had not been declared by the manufacturers and had no documented function. They were, in effect, silent participants on the US grid.
No specific act of sabotage has been confirmed, and the purpose of the devices remains unclear. Are they passive intelligence collectors, quietly contributing to foreign data aggregation? Or are they latent access points with offensive potential, waiting to be activated?
Inverters could be coordinated to disrupt voltage regulation or overload circuits across distributed energy resources, causing instability or damage to grid infrastructure.
The presence of the rogue communications modules in the inverters reminds us that adversaries can exploit vulnerabilities hidden deep in hardware, creating potential strategic leverage across essential systems.
As globalisation enters a new phase defined by contested technologies and fragmented supply chains, treating hardware as implicitly trustworthy builds hidden risks into the systems designed to ensure our national resilience.
Governments and industries have embraced zero-trust security models, in which no user, device or connection is trusted by default. But, too often, this principle is only applied to software and user access, not to the hardware operating inside our critical systems. Physical infrastructure, such as the devices that run power, water and transport systems, is rarely scrutinised to the same degree. In part, that’s because hardware threats are harder to observe, difficult to attribute, and require specialised tools and skills to detect.
This is a dangerous oversight.
Modern infrastructure is not a single system. It is a complex patchwork of globally sourced equipment such as sensors, inverters, routers and gateways. Many of these devices run proprietary firmware, are updated irregularly and operate with little visibility once installed. The complexity is such that no single organisation, and often no single person, fully understands how it all works. Much like modern vehicles, we no longer repair these systems part by part. We replace entire black-box subsystems, trusting that what’s inside the new ones will be what we expect.

Inverters and controllers similar to those exposed in the United States are deployed across Australia’s solar energy sector, particularly in residential and commercial-scale distributed energy resources. Many are connected to the grid and managed remotely via mobile networks.
The structural risks highlighted in the US investigation almost certainly exist here. It’s likely we simply haven’t looked closely enough. We may not even know how to look or where to start.
So far, Australia’s policy focus has been on network security and operational resilience. The Security of Critical Infrastructure Act and its Risk Management Program reforms have strengthened awareness and governance. But hardware integrity remains a blind spot. Vendors are often evaluated based on documentation or country of origin. Few components are independently tested for physical or embedded threats. Even fewer are built with tamper evidence or verifiable firmware.
So what would a zero-trust model for hardware look like? It starts with rejecting assumptions. Devices used in critical environments should be able to prove they are genuine and unaltered, using cryptographic signatures that can be independently verified. Firmware should be auditable and digitally signed. Hardware platforms need runtime integrity checks, tamper detection and the ability to isolate or disable compromised components. These capabilities exist today but are rarely adopted at scale.
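The verification step described above can be sketched in principle. The following is a minimal Python illustration, not a description of any real vendor's mechanism: it checks a firmware image against a manifest of trusted digests obtained out of band. In practice the manifest itself would be signed with the vendor's asymmetric key and verified against a distributed public key; all device names and digests here are invented for illustration.

```python
import hashlib

def verify_firmware(firmware_image: bytes, trusted_manifest: dict, device_model: str) -> bool:
    """Check a device's firmware against an independently obtained manifest.

    trusted_manifest maps device model -> expected SHA-256 digest. A real
    deployment would verify a vendor signature over the manifest as well.
    """
    expected = trusted_manifest.get(device_model)
    if expected is None:
        # Unknown device: fail closed, never trust by default.
        return False
    return hashlib.sha256(firmware_image).hexdigest() == expected

# Illustrative data only.
manifest = {"inverter-x1": hashlib.sha256(b"official build 1.4.2").hexdigest()}
print(verify_firmware(b"official build 1.4.2", manifest, "inverter-x1"))  # True
print(verify_firmware(b"tampered build", manifest, "inverter-x1"))        # False
```

The point of the sketch is the fail-closed default: a device that cannot prove its integrity is treated as compromised, which is the hardware analogue of the zero-trust principle already applied to users and connections.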
Procurement models must also evolve. Hardware cannot be selected on cost-efficiency alone; it must come from trusted supply chains, assessed through a broader risk lens that accounts for consequence, likelihood, adversary capability and intent. Mitigation may require investment in local capacity, or co-development with partners that share our security posture. Where equipment cannot come from reliable foreign countries, governments must cultivate domestic sources.
This is not about closing off our economy. It is about building resilience. In a contested region, the ability to operate independently of hostile actors may determine national outcomes.
We must also let go of the idea that trust is permanent. Zero-trust is not a one-off assessment; it’s a posture of continuous verification. Supply chains evolve. Vendors change hands. Firmware updates introduce new code. Just as we monitor software over time, we must now monitor the behaviour and integrity of the physical devices that carry our systems.
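Continuous verification of this kind can be illustrated with a simple sketch: record an integrity baseline for each device at commissioning, then periodically re-check the fleet and flag anything that drifts. This is an illustrative toy, not any operator's actual tooling, and the device names are invented.

```python
import hashlib

def record_baseline(devices: dict) -> dict:
    """Record a SHA-256 digest of each device's firmware at commissioning."""
    return {name: hashlib.sha256(fw).hexdigest() for name, fw in devices.items()}

def drifted(devices: dict, baselines: dict) -> list:
    """Return devices whose current firmware no longer matches its baseline."""
    return [name for name, fw in devices.items()
            if hashlib.sha256(fw).hexdigest() != baselines.get(name)]

# Illustrative fleet of two devices.
fleet = {"inverter-a": b"fw 1.0", "inverter-b": b"fw 1.0"}
baselines = record_baseline(fleet)

# Simulated unexplained change on one device between checks.
fleet["inverter-b"] = b"fw 1.0 + undeclared patch"
print(drifted(fleet, baselines))  # ['inverter-b']
```

Trust here is never permanent: each scheduled check re-derives the device's state and compares it to what was verified before, so a supply-chain change or silent update surfaces as drift rather than passing unnoticed.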
The ghost machines uncovered in US infrastructure cannot be understood as an isolated glitch. They represent a deliberate strategy: embedding long-term access and influence into the physical core of critical systems.
Australia has made progress in recognising cyber threats as national security challenges. The next step is to extend that awareness to hardware.
We cannot secure the future while building on unexamined trust.