AI is reshaping security, and the intelligence review sets good direction

The 2024 Independent Intelligence Review found the National Intelligence Community (NIC) to be highly capable and performing well. So it is no surprise that most of the 67 recommendations are incremental adjustments and small but nevertheless important recalibrations. However, to thrive in a contested, fragile and volatile security environment, more must be done, and done collectively.
The review found that despite great progress, some practical and cultural barriers impede the interoperability and adoption of data and technology. Most of the review’s technology recommendations are clear-cut and important steps to overcome this.
Robust data is the foundation of good intelligence and a versatile, strategic asset. Thus, the review recommends that NIC agencies develop a top secret cloud transition strategy and support data cataloguing efforts to maximise interoperability.
The AI recommendations focus on AI governance principles, frameworks, senior officer responsibility and, crucially, NIC-wide senior officer training to understand ‘applications, risks and governance requirements of AI for intelligence’. These measures will hopefully establish an educated and accountable leadership cohort in the NIC that can drive AI adoption while thinking critically about risks and effective governance. As always, execution will be key—assuming the government in office after the 3 May election accepts the recommendation.
Despite the global hype about AI, the review acknowledges its significant risks as well as many opportunities. It echoes my book in suggesting that, in practice, NIC agencies are predominantly using AI for collection and for such analytical functions as triage and translation. Given the review’s focus on improving the intelligence-policy interface, discussion of how technology could contribute beyond using AI to curate intelligence for consumers was curiously absent.
Using AI to ‘transform and improve the intelligence cycle’ is necessary. However, there is space for a more imaginative approach to using it—for example, identifying and monitoring new threats and anomalies, and providing early warning.
Strikingly, the basic question of what it means to know something—fundamental to both AI and intelligence—didn’t feature. While it was excellent to see misinformation and disinformation addressed, albeit not publicly, I’d argue government and intelligence expertise on them is already crucial, not just ‘becoming essential’, as the review puts it. It was also interesting to see the evaluation of open-source intelligence (OSINT), an important function, kicked down the road to the next review.
We should approach technology as an ecosystem. This is reflected in the review, which noted: ‘NIC needs a stronger enterprise approach to technology, one that recognises and exploits the interdependencies of the technology ecosystem.’ We need to develop a technology strategy that articulates the vision, requirements and priorities, as well as current and future technology risks. While that seems a little vague, if it were well executed, it would be a big step forward.
Technology, data and privacy were, in the main, addressed well as threats in the scene-setting section, where the review highlighted technology’s foundational role in Australia’s context, alongside global contest and fragmentation, and transnational challenges such as climate change. But few recommendations dealt with current problems of data harvesting, cyberattacks or AI in biotech, let alone their use in conflict. Yet our technology will be a target, and our dependencies on data, technology and AI infrastructure will be weak points for adversaries to exploit.
There is, naturally, a section on innovation. The recommendation that government scope the establishment of a national security focused technology investment fund is welcome, and it balances the narrowing of the Joint Capability Fund, which has had mixed results.
Secure spaces outside Canberra are exorbitantly expensive but critical to the mission. The review addressed options for integrated locations outside Canberra, yet the lack of urgency on this is a missed opportunity. In conflict, they’d be not merely ‘helpful’, as per the review; they’d be essential. They also support recruitment and retention, stakeholder engagement, collaborative operational and strategic work, and contingency planning.
The technology-related oversight recommendations are noteworthy and substantial. I welcome the recommendation that the first full-time Independent National Security Legislation Monitor undertake a review of the NIC’s use of AI to inform legislative and policy changes. I’ve long advocated for a panel of technology advisers to serve the oversight bodies so was pleased to see this recommended, although I would have preferred one advisory body accessible to NIC and oversight agencies. Perhaps to elected officials too.
The terms of reference explicitly included the NIC’s preparedness for crisis and conflict. The issue is addressed throughout, and I agree with the co-authors that most adaptation will happen in a crisis or conflict, not before. However, I also think more could be done to prepare. OSINT, disinformation and AI or technology frameworks and strategies are important, but action will need to be expedited if, or as, conflict looms. Increased centralisation, while straightforward for education, workforce and policy, could be an asset or a risk in conflict, especially with varying levels of technological sophistication and expertise.
Technology’s effects are not occurring in isolation. They converge with a crumbling global order and increasing contest and uncertainty in international trade, creating a tinderbox for exponential change. Amid a vacuum of global leadership, there is no clear path for non-transactional collaboration on technology or climate change; the same may hold true for intelligence.
The review handled well both how technology is affecting our security environment and how the NIC can harness it. The recommendations are sensible and considered improvements for an already world-class intelligence enterprise. The real test, should conflict come, will be how the enterprise performs.