Of minds and machines: an AI series

The US and Australia need generative AI to give their forces a vital edge

The advent of generative AI (GenAI) has marked a significant milestone in the evolution of technology, pushing the boundaries of what seemed possible. Leading American technology companies, like OpenAI and Anthropic, have been at the forefront of this revolution, releasing large language models (LLMs) with unprecedented capabilities.

But in our report on the implementation of GenAI, the Special Competitive Studies Project warns that, while the US has opened up a critical technological lead over the rest of the world in this area, this moment may not last.

From mastering complex games to revolutionising medical diagnostics and acing standardised tests, AI’s potential is clear. Its applications have permeated finance, marketing, data management, game development, computer science, healthcare, and more, signalling a paradigm shift in how the average human interacts with computers and data and, ultimately, makes decisions.

As established and emerging leaders in this domain, the United States and Australia should leverage the window of opportunity brought on by GenAI to capture an enduring strategic advantage in the Indo-Pacific.

Harnessing GenAI for strategic advantage will require a concerted effort by the US, Australia, and partner nations. Fortunately, the AUKUS partnership is a robust mechanism for technology sharing between allies and is already beginning to bear fruit. While much of the attention on AUKUS has been given to the agreement’s first pillar, which lays the foundation for a landmark Virginia-class submarine-sharing arrangement, arguably the most imminent component of the treaty lies in Pillar Two. The second pillar of AUKUS holds the potential to turbocharge technology adoption through jointly developing advanced capabilities, which could ensure that Australia, the UK, and the US remain interoperable and interchangeable in conflict.

As part of the latest round of talks in December, the UK, US and Australia agreed to deploy ‘advanced AI algorithms on multiple systems, including P-8A maritime patrol aircraft to process each nation’s sonobuoys’. Defence leaders from the three countries also announced that they are delivering resilient and autonomous artificial intelligence technologies, which provide ‘artificial intelligence algorithms and machine learning to enhance force protection, precision targeting, and intelligence, surveillance, and reconnaissance’. Increased defence spending on mutually beneficial capabilities, including $25 million on ‘AUKUS innovation initiatives’, reinforces the need for allies to stay on the cutting edge of technology.

Although substantial progress is being made with the AUKUS agreements, much more can be done to fully harness the innovation power that GenAI stands to bring to allied defence capabilities. At a minimum, we should ensure that a gap does not emerge between allies in access to and integration of the most advanced GenAI models. This could lead to a bifurcation, where the militaries of some nations press ahead in adopting GenAI while others lack resources or trust in the technology. Over time, growing disparities could undermine coalition interoperability, interchangeability, and technological edge.

In SCSP’s report, we outlined four areas that show potential to transform allied military power.

GenAI models could greatly enhance military decision-making. The war in Ukraine has demonstrated that policymakers and military commanders are increasingly being confronted with exceptionally compressed timelines. An Indo-Pacific contingency would most certainly underscore the importance of speed, placing a premium on information and decision advantage. The only way to survive and win in such an environment is to augment our human decision-making with AI-enabled capabilities.

GenAI promises to improve the conduct of military operations across vast distances—a priority as China expands its strategic assertiveness in the Pacific. Already the US, UK, and Australian navies are deploying AI algorithms to accelerate the processing of sonar data from their underwater sensors. By further incorporating such capabilities, allies can speed up the identification of and response to threats across expansive areas.

Realising the transformative potential of GenAI requires investing in personnel across capability areas. This means recruiting top AI talent and scientists to construct advanced defence-tailored autonomous systems, reskilling service members across domains to integrate these tools, and identifying existing military occupational series that may no longer be relevant, as well as new ones yet to be created. AI-enabled warfare is already here; we need allied military personnel who can deliver AI effects.

New opportunities with GenAI will also create new challenges that will require new defences, including against known threats like disinformation, cyberattacks, and biological agents. GenAI models will increasingly lower the threshold of action for malign actors, old and new. Our nations must actively identify the threats that AI will change and the new vectors of attack it will create, and start building defences now. We are certainly headed in a direction where AI-enabled systems will battle each other, not just on the battlefield.

There is reason to be concerned that we are approaching a critical juncture when our nations will be called on to safeguard democracy across the Indo-Pacific. We should be doing everything we can to capitalise on the strategic window of opportunity that GenAI has opened for allies. If done properly, we can achieve unmatched human-machine teaming within and between our militaries to deter adversaries or, if combat becomes necessary, to win a future conflict. We have the moment, we have the willing partners in AUKUS, and we have the will to overcome any obstacle. The time for action is now.

 

Will 2024 be the year of responsible AI?

The start of 2024 has been marked by a wave of predictions regarding the trajectory of artificial intelligence, ranging from optimistic to cautious. Nevertheless, a clear consensus has emerged: AI is already reshaping human experience. To keep up, humanity must evolve.

For anyone who has lived through the rise of the internet and social media, the AI revolution may evoke a sense of déjà vu—and raise two fundamental questions. Is it possible to maintain the current momentum without repeating the mistakes of the past? And can we create a world in which everyone, including the 2.6 billion people who remain offline, is able to thrive?

Harnessing AI to bring about an equitable and human-centered future requires new, inclusive forms of innovation. Fortunately, three promising trends offer hope for the year ahead.

First, AI regulation remains a top global priority. From the European Union’s AI Act to US President Joe Biden’s October 2023 executive order, proponents of responsible AI have responded to voluntary commitments from Big Tech firms with policy suggestions rooted in equity, justice, and democratic principles. The international community, led by the newly established United Nations High-Level Advisory Body on AI (one of us, Dhar, is a member), is poised to advance many of these initiatives over the coming year, starting with its interim report on Governing AI for Humanity.

Moreover, this could be the year to dismantle elite echo chambers and cultivate a global cadre of ethical AI professionals. By expanding the reach of initiatives like the National Artificial Intelligence Research Resource Task Force (established by the United States’ 2020 AI Initiative Act) and localising implementation strategies through tools such as the UNESCO readiness assessment methodology, globally inclusive governance frameworks could shape AI in 2024.

At the national level, the focus is expected to be on regulating AI-generated content and empowering policymakers and citizens to confront AI-powered threats to civic participation. As a multitude of countries, representing more than 40% of the world’s population, prepare to hold crucial elections this year, combating the imminent surge of mis- and disinformation will require proactive measures. This includes initiatives to raise public awareness, promote broad-based media literacy across various age groups, and address polarisation by emphasising the importance of empathy and mutual learning.

As governments debate AI’s role in the public sphere, regulatory shifts will likely trigger renewed discussions about using emerging technologies to achieve important policy goals. India’s use of AI to enhance the efficiency of its railways and Brazil’s AI-powered digital-payment system are prime examples.

In 2024, entities like the UN Development Programme are expected to explore the integration of AI technologies into digital public infrastructure (DPI). Standard-setting initiatives, such as the upcoming UN Global Digital Compact, could serve as multi-stakeholder frameworks for designing inclusive DPI. These efforts should focus on building trust, prioritising community needs and ownership over profits, and adhering to ‘shared principles for an open, free, and secure digital future for all.’

Civil-society groups are already building on this momentum and harnessing the power of AI for good. For example, the non-profit Population Services International and London-based start-up Babylon Health are rolling out an AI-powered symptom checker and health-provider locator, showcasing AI’s ability to help users manage their health. Similarly, organisations like Polaris and Girl Effect are working to overcome the barriers to digital transformation within the non-profit sector, tackling issues like data privacy and user safety. By developing centralised financing mechanisms, establishing international expert networks, and embracing allyship, philanthropic foundations and public institutions could help scale such initiatives.

As nonprofits shift from integrating AI into their work to building new AI products, our understanding of leadership and representation in tech must also evolve. By challenging outdated perceptions of key players in today’s AI ecosystem, we have an opportunity to celebrate the true, diverse face of innovation and highlight trailblazers from a variety of genders, races, cultures, and geographies, while acknowledging the deliberate marginalisation of minority voices in the AI sector.

Organisations like the Hidden Genius Project, Indigenous in AI, and Technovation are already building the ‘who’s who’ of the future, with a particular focus on women and people of color. By collectively supporting their work, we can ensure that they take a leading role in shaping, deploying, and overseeing AI technologies in 2024 and beyond.

Debates over what it means to be ‘human-centered’ and which values should guide our societies will shape our engagement with AI. Multi-stakeholder frameworks like UNESCO’s Recommendation on the Ethics of Artificial Intelligence could provide much-needed guidance. By focusing on shared values such as diversity, inclusiveness, and peace, policymakers and technologists could outline principles for designing, developing, and deploying inclusive AI tools. Likewise, integrating these values into our strategies requires engagement with communities and a steadfast commitment to equity and human rights.

Given that AI is well on its way to becoming as ubiquitous as the internet, we must learn from the successes and failures of the digital revolution. Staying on our current path risks perpetuating—or even exacerbating—the global wealth gap and further alienating vulnerable communities worldwide.

But by reaffirming our commitment to fairness, justice, and dignity, we could establish a new global framework that enables every individual to reap the rewards of technological innovation. We must use the coming year to cultivate multi-stakeholder partnerships and promote a future in which AI generates prosperity for all.

 

Building trust in artificial intelligence: lessons from the EU AI Act

Artificial intelligence will radically transform our societies and economies in the next few years. The world’s democracies, together, have a duty to minimise the risks this new technology poses through smart regulation, without standing in the way of the many benefits it will bring to people’s lives.

There is strong momentum for AI regulation in Australia, following its adoption of a government strategy and a national set of AI ethics principles. Just as Australia begins to define its regulatory approach, the European Union has reached political agreement on the EU AI Act, the world’s first and most comprehensive legal framework on AI. That gives Australia an opportunity to reap the benefits of the EU’s experiences.

The EU embraces the idea that AI will bring many positive changes. It will improve the quality and cost-efficiency of our healthcare sector, allowing treatments that are tailored to individual needs. It can make our roads safer and prevent millions of casualties from traffic accidents. It can significantly improve the quality of our harvests, reducing the use of pesticides and fertiliser, and so help feed the world. Last but not least, it can help fight climate change, reducing waste and making our energy systems more sustainable.

But the use of AI isn’t without risks, including risks arising from the opacity and complexity of AI systems and from intentional manipulation. Bad actors are eager to get their hands on AI tools to launch sophisticated disinformation campaigns, unleash cyberattacks and step up their fraudulent activities.

Surveys, including some conducted in Australia, show that many people don’t fully trust AI. How do we ensure that the AI systems entering our markets are trustworthy?

The EU doesn’t believe that it can leave responsible AI wholly to the market. It also rejects the other extreme, the autocratic approach in countries like China of banning AI models that don’t endorse government policies. The EU’s answer is to protect users and bring trust and predictability to the market through targeted product-safety regulation, focusing primarily on the high-risk applications of AI technologies and powerful general-purpose AI models.

The EU’s experience with its legislative process offers five key lessons for approaching AI governance.

First, any regulatory measures must focus on ensuring that AI systems are safe and human-centric before they can be used. To generate the necessary trust, AI systems must be checked against core principles such as non-discrimination, transparency and explainability. AI developers must train their systems on adequate datasets, maintain risk-management systems and provide technical measures for human oversight. Automated decisions must be explainable; arbitrary ‘black box’ decisions are unacceptable. Deployers must also be transparent and inform users when an AI system generates content such as deepfakes.

Second, rules should focus not on the AI technology itself—which develops at lightning speed—but on governing its use. Focusing on use cases—for example, in health care, finance, recruitment or the justice system—ensures that regulations are future-proof and don’t lag behind rapidly evolving AI technologies.

The third lesson is to follow a risk-based approach. Think of AI regulation as a pyramid, with different levels of risk. In most cases, the use of AI poses no or only minimal risks—for example, when receiving music recommendations or relying on navigation apps. For such uses, no or soft rules should apply.

However, in a limited number of situations where AI is used, decisions can have material effects on people’s lives—for example, when AI makes recruitment decisions or decides on mortgage qualifications. In these cases, stricter requirements should apply, and AI systems must be checked for safety before they can be used, as well as monitored after they’re deployed. Some uses that pose unacceptable risks to democratic values, such as social scoring systems, should be banned completely.

Specific attention should be given to general-purpose AI models, such as GPT-4, Claude and Gemini. Given their potential for downstream use for a wide variety of tasks, these models should be subject to transparency requirements. Under the EU AI Act, general-purpose AI models will be subject to a tiered approach. All models will be required to provide technical documentation and information on the data used to train them. The most advanced models, which can pose systemic risks to society, will be subject to stricter requirements, including model evaluations (‘red-teaming’), risk identification and mitigation measures, adverse event reporting and adequate cybersecurity protection.
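To make the pyramid concrete, here is a minimal illustrative sketch in Python of how such a tiered classification might be expressed. The tier names, obligations and example use cases are simplified paraphrases of the descriptions above, not the act’s legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers paraphrasing the pyramid described above."""
    UNACCEPTABLE = "banned outright"
    HIGH = "pre-market conformity assessment and post-deployment monitoring"
    LIMITED = "transparency obligations, e.g. disclosing AI-generated content"
    MINIMAL = "no or only voluntary rules"

# Illustrative mapping of use cases to tiers, drawn from the examples in the
# article; these are not the act's legal definitions.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "screening job applicants": RiskTier.HIGH,
    "deciding mortgage eligibility": RiskTier.HIGH,
    "chatbot generating deepfake-style content": RiskTier.LIMITED,
    "music recommendations": RiskTier.MINIMAL,
    "navigation apps": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the simplified obligation attached to a listed use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(obligations_for(case))
```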

Fourth, enforcement should be effective but not burdensome. The act aligns with the EU’s longstanding product-safety approach: certain risky systems need to be assessed before being put on the market, to protect the public. The act classifies AI systems as high-risk if they are used in products covered by existing product-safety legislation or in certain critical areas, including employment and education. Providers of these systems must ensure that their systems and governance practices conform to regulatory requirements. Designated authorities will oversee providers’ conformity assessments and take action against non-compliant providers. For the most advanced general-purpose AI models, the new regulation establishes an EU AI Office to ensure efficient, centralised oversight of the models posing systemic risks to society.

Lastly, developers of AI systems should be held to account when those systems cause harm. The EU is currently updating its liability rules to make it easier for those who have suffered damage from AI systems to bring claims and obtain relief—surely prompting developers to exercise even greater due diligence before putting AI on the market.

The EU believes an approach built around these five key tenets is balanced and effective. However, while the EU may be the first democracy to establish a comprehensive framework, we need a global approach to be truly effective. For this reason, the EU is also active in international forums, contributing to the progress made, for example, in the G7 and the OECD. To ensure effective compliance, though, we need binding rules. Working closely together as like-minded countries will enable us to shape an international approach to AI that is consistent with—and based on—our shared democratic values.

The EU supports Australia’s promising efforts to put in place a robust regulatory framework. Together, Australia and the EU can promote a global standard for AI governance—a standard that boosts innovation, builds public trust and safeguards fundamental rights.

 

How to win the artificial general intelligence race and not end humanity

In 2016, I witnessed DeepMind’s artificial-intelligence model AlphaGo defeat Go champion Lee Sedol in Seoul. That event was a milestone, demonstrating that an AI model could beat one of the world’s greatest Go players, a feat that was thought to be impossible. Not only was the model making clever strategic moves but, at times, those moves were beautiful in a very deep and humanlike way.

Other scientists and world leaders took note and, seven years later, the race to control AI and its governance is on. Over the past month, US President Joe Biden has issued an executive order on AI safety, the G7 announced the Hiroshima AI Process and 28 countries signed the Bletchley Declaration at the UK’s AI Safety Summit. Even the Chinese Communist Party is seeking to carve out its own leadership role with the Global AI Governance Initiative.

These developments indicate that governments are starting to take the potential benefits and risks of AI equally seriously. But as the security implications of AI become clearer, it’s vital that democracies outcompete authoritarian political systems to ensure future AI models reflect democratic values and are not concentrated in institutions beholden to the whims of dictators. At the same time, countries must proceed cautiously, with adequate guardrails, and shut down unsafe AI projects when necessary.

Whether AI models will outperform humans in the near future and pose existential risks is a contentious question. For some researchers who have studied these technologies for decades, the performance of AI models like AlphaGo and ChatGPT is evidence that the general foundations for human-level AI have been achieved and that an AI system that’s more intelligent than humans across a range of tasks will likely be deployed within our lifetimes. Such systems are known as artificial general intelligence (AGI), artificial superintelligence or general AI.

For example, most AI models now use neural networks, an old machine-learning technique created in the 1940s that was inspired by the biological neural networks of animal brains. The abilities of modern neural networks like AlphaGo weren’t fully appreciated until computer chips used mostly for gaming and video rendering, known as graphics processing units, became powerful enough in the 21st century to process the computations needed for specific human-level tasks.

The next step towards AGI was the arrival of large-language models, such as OpenAI’s GPT-4, which are created using a version of neural networks known as ‘transformers’. OpenAI’s previous version of its chatbot, GPT-3, surprised everyone in 2020 by generating text that was indistinguishable from that written by people and performing a range of language-based tasks with few or no examples. GPT-4, the latest model, has demonstrated human-level reasoning capabilities and outperformed human test-takers on the US bar exam, a notoriously difficult test for lawyers. Future iterations are expected to have the ability to understand, learn and apply knowledge at a level equal to, or beyond, humans across all useful tasks.

AGI would be the most disruptive technology humanity has created. An AI system that can automate human analytical thinking, creativity and communication at a large scale and generate insights, content and reports from huge datasets would bring about enormous social and economic change. It would be our generation’s Oppenheimer moment, only with strategic impacts beyond just military and security applications. The first country to successfully deploy it would have significant advantages in every scientific and economic activity across almost all industries. For those reasons, long-term geopolitical competition between liberal democracies and authoritarian countries is fuelling an arms race to develop and control AGI.

At the core of this race is ideological competition, which pushes governments to support the development of AGI in their country first, since the technology will likely reflect the values of the inventor and set the standards for future applications. This raises important questions about what world views we want AGIs to express. Should an AGI value freedom of political expression above social stability? Or should it align itself with a rule-by-law or rule-of-law society? With our current methods, researchers don’t even know if it’s possible to predetermine those values in AGI systems before they’re created.

It’s promising that universities, corporations and civil research groups in democracies are leading the development of AGI so far. Companies like OpenAI, Anthropic and DeepMind are household names and have been working closely with the US government to consider a range of AI safety policies. But startups, large corporations and research teams developing AGI in China, under the authoritarian rule of the CCP, are quickly catching up and pose significant competition. China certainly has the talent, the resources and the intent but faces additional regulatory hurdles and a lack of high-quality, open-source Chinese-language datasets. In addition, large-language models threaten the CCP’s monopoly on domestic information control by offering alternative worldviews to state propaganda.

Nonetheless, we shouldn’t underestimate the capacity of Chinese entrepreneurs to innovate under difficult regulatory conditions. If a research team in China, subject to the CCP’s National Intelligence Law, were to develop and tame AGI or near-AGI capabilities first, it would further entrench the party’s power to repress its domestic population and ability to interfere with the sovereignty of other countries. China’s state security system or the People’s Liberation Army could deploy it to supercharge their cyberespionage operations or automate the discovery of zero-day vulnerabilities. The Chinese government could embed it as a superhuman adviser in its bureaucracies to make better operational, military, economic or foreign-policy decisions and propaganda. Chinese companies could sell their AGI services to foreign government departments and companies with back doors into their systems or covertly suppress content and topics abroad at the direction of Chinese security services.

At the same time, an unfettered AGI arms race between democratic and authoritarian systems could exacerbate various existential risks, either by enabling future malign use by state and non-state actors or through poor alignment of the AI’s own objectives. AGI could, for instance, lower the impediments for savvy malicious actors to develop bioweapons or supercharge disinformation and influence operations. An AGI could itself become destructive if it pursues poorly described goals or takes shortcuts such as deceiving humans to achieve goals more efficiently.

When Meta trained Cicero to play the board game Diplomacy ‘honestly’ by generating only messages that reflected its intention in each interaction, analysts noted that it could still withhold information about its true intentions or not inform other players when its intentions changed. These are serious considerations with immediate risks and have led many AI experts and people who study existential risk to call for a pause on advanced AI research. But policymakers worldwide are unlikely to stop given the strong incentives to be a first mover.

This all may sound futuristic, but it’s not as far away as you might think. In a 2022 survey, 352 AI experts put a 50% chance of human-level machine intelligence arriving in 37 years—that is, 2059. The forecasting community on the crowd-sourced platform Metaculus, which has a robust track record of AI-related forecasts, is even more confident of the imminent development of AGI. The aggregation of more than 1,000 forecasters suggests 2032 as the likely year general AI systems will be devised, tested and publicly announced. But that’s just the current estimate—experts and the amateurs on Metaculus have shortened their timelines each year as new AI breakthroughs are publicly announced.

That means democracies have a lead time of between 10 and 40 years to prepare for the development of AGI. The key challenge will be how to prevent AI existential risks while innovating faster than authoritarian political systems.

First, policymakers in democracies must attract global AI talent, including from China and Russia, to help align AGI models with democratic values. Talent is also needed within government policymaking departments and think tanks to assess AGI implications and build the bureaucratic capacity to rapidly adapt to future developments.

Second, governments should proactively monitor all AGI research and development activity and should pass legislation that allows regulators to shut down or pause exceptionally risky projects. We should remember that Beijing has even more to worry about when it comes to AI alignment: the CCP is too concerned about its own political security to relax its strict rules on AI development.

We therefore shouldn’t see government involvement only in terms of its potential to slow us down. At a minimum, all countries, including the US and China, should be transparent about their AGI research and advances. That should include publicly disclosing their funding for AGI research and safety policies and identifying their leading AGI developers.

Third, liberal democracies must collectively maintain as large a lead as possible in AI development and further restrict access to high-end technology, intellectual property, strategic datasets and foreign investments in China’s AI and national-security industries. Impeding the CCP’s AI development in its military, security and intelligence industries is also morally justifiable in preventing human rights violations.

For example, Midu, an AI company based in Shanghai that supports China’s propaganda and public-security work, recently announced the use of large-language models to automate reporting on public opinion analysis to support surveillance of online users. While China’s access to advanced US technologies and investment has been restricted, other like-minded countries such as Australia should implement similar outbound investment controls into China’s AI and national-security industries.

Finally, governments should create incentives for the market to develop safe AGI and solve the alignment problem. Technical research on AI capabilities is outpacing technical research on AI alignment, and companies are failing to put their money where their mouths are. Governments should create prizes for research teams or individuals that solve difficult AI alignment problems. One potential model is the Clay Mathematics Institute’s Millennium Prize Problems, which offer awards for solutions to some of the world’s most difficult mathematics problems.

Australia is an attractive destination for global talent and is already home to many AI safety researchers. The Australian government should capitalise on this advantage to become an international hub for AI safety and alignment research. The Department of Industry, Science and Resources should set up the world’s first AGI prize fund with at least $100 million to be awarded to the first global research team to align AGI safely.

The National Artificial Intelligence Centre should oversee a board that manages this fund and works with the research community to create a list of conditions and review mechanisms for awarding the prize. With $100 million, the board could adopt an investment mandate similar to that of Australia’s Future Fund, targeting an average annual return of at least the consumer price index plus 4–5% over the long term. Instead of being reinvested in the fund, the 4–5% return earned each year above CPI should be used for smaller annual awards recognising incremental achievements in AI research. These awards could also fund AI PhD scholarships or attract AI postdocs to Australia. Other awards could be given for research, including research conducted outside Australia, at annual award ceremonies, akin to the Nobel Prize, that would bring together global experts on AI to share knowledge and progress.
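As a rough, illustrative sketch of those fund mechanics in Python, assuming a hypothetical 2.5% CPI and a 4.5% real return (both invented figures within the CPI-plus-4–5% mandate described above), the corpus keeps pace with inflation while paying out roughly $4.5 million in awards each year:

```python
# Rough sketch of the proposed prize-fund mechanics using the article's figures.
# All rates are illustrative assumptions, not forecasts.

CORPUS = 100_000_000   # initial fund size, dollars
CPI = 0.025            # assumed inflation rate
REAL_RETURN = 0.045    # assumed return above CPI (within the 4-5% band)

def simulate(years: int) -> None:
    corpus = CORPUS
    for year in range(1, years + 1):
        nominal_gain = corpus * (CPI + REAL_RETURN)
        award_pool = corpus * REAL_RETURN      # real return paid out as awards
        corpus += nominal_gain - award_pool    # corpus grows with CPI only
        print(f"Year {year}: awards ~${award_pool:,.0f}, corpus ~${corpus:,.0f}")

simulate(5)
```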

A $100 million fund may seem a lot for AI research but, as a comparison, Microsoft is rumoured to have invested US$10 billion in OpenAI this year alone. And $100 million pales in comparison with the contribution safely aligned AGI would make to the national economy.

The stakes are high for getting AGI right. If properly aligned and developed, it could bring an epoch of unimaginable human prosperity and enlightenment. But AGI projects pursued recklessly could pose real risks of creating dangerous superhuman AI systems or bringing about global catastrophes. Democracies must not cede leadership of AGI development to authoritarian systems, but nor should they rush to secure a Pyrrhic victory by going ahead with models that fail to embed respect for human rights, liberal values and basic safety.

This tricky balance between innovation and safety is the reason policymakers, intelligence agencies, industry, civil society and researchers must work together to shape the future of AGIs and cooperate with the global community to navigate an uncertain period of elevated human-extinction risks.