
Should Australia be LAWSless?

Australia has traditionally been a norm-maker when it comes to arms control. A quick click through the pages on non-proliferation, disarmament and arms control on the Department of Foreign Affairs and Trade website reveals a country with a strong commitment to responsible, lawful controls on weapons ranging from small arms to nuclear bombs. Australia even took the lead on the final text of the Arms Trade Treaty, a breakthrough in conventional arms control, at least in theory. The treaty entered into force during the Arab uprisings and the rapid rise of Islamic State, and it remains disappointingly aspirational rather than best practice.

Nonetheless, this active commitment to arms control is a great example of how Australia can build its reputation and influence in multilateral institutions in order to shape the norms and standards that help make our strategic environment safer and make catastrophic conflict less likely.

But emerging technologies like robotics and artificial intelligence are set to complicate arms control even further, and the international community is attempting to wrap its collective head around the implications of lethal autonomous weapons systems (LAWS).

Australia has an opportunity to play an integral role in putting controls on a set of technologies that could have devastating effects on global and national security. As the international environment becomes more adversarial, these norm-shaping skills will become critical for Australia.

But Australia’s current position actually embraces LAWS development. It argues that a treaty on LAWS development and deployment is premature because there’s no agreement on the definition of a LAWS. It asserts a core interest in developing AI technologies because of their potential for improving safety and reducing risks, and in fielding defensive autonomous systems.

This position mirrors that of some of Australia’s allies. The US also contends that it is ‘premature’ to support a pre-emptive ban, and that it may be ‘compelled’ to develop fully autonomous weapons. The UK recently confirmed a similar position, stating that a ban could be ‘counterproductive’.

While Canada has hedged its position, New Zealand has recently clarified its support for an outright ban. In May, New Zealand’s minister for disarmament and arms control, Phil Twyford, said that the development and deployment of fully autonomous weapons creates ‘the potential for a continuous global battlefield’. He said he was ‘committed to building an alliance of countries working towards an international and legally binding instrument prohibiting and regulating unacceptable autonomous weapons systems’.

New Zealand joins 30 other states in supporting a ban. These states, most of which are in the diplomatic grouping known as the Non-Aligned Movement, have little chance of joining the lethal autonomy race and recognise that these new weapons will change the character of warfare to their likely disadvantage.

The systems in development promise to be faster than humans, scalable at minimal cost and able to reduce risks in the war zone. However, they can also make mistakes and increase the likelihood of escalation. The critical question for many is whether a machine should be allowed to make life-and-death decisions.

Australia announced its official position on LAWS at a roundtable on the issue held at the ANU School of Law in March. Conducted under the Chatham House Rule, it provided a forum for freely discussing the challenges and opportunities that LAWS present for Australia.

Participants included academics, lawyers, political scientists, technologists and representatives from the Department of Defence and the private sector. Both serving and retired officers of the Australian Defence Force were present, as well as a former secretary of defence. It was the first event of its kind in Australia to address the technological, legal and ethical dimensions of LAWS.

A chair’s summary detailing the main topics addressed and outlining the key points of the day’s discussion was circulated to all participants in May. A version of this summary was also published in the ANU Journal of Law and Technology.

The summary identified four main themes in the discussions: Australia’s current position; definitions; international law and norms; and development, deployment and personnel. Reference was made to the ADF’s Concept for robotic and autonomous systems, which categorises military AI technologies on a spectrum running from remotely operated systems through automatic and autonomic systems to autonomous systems.

It is the development of fully autonomous systems that some scholars and civil society groups, as well as the UN secretary-general, are concerned about. The removal of human control over a final decision means that a machine will be deciding about the use of lethal force. As South African legal scholar Christof Heyns put it, ‘While earlier revolutions in military affairs gave the warrior control over ever-more-powerful weapons, autonomous weapons have the potential to bring about a change in the identity of the decision-maker. The weapon may now become the warrior.’

The lack of agreement over the meanings of autonomy and automation was considered an obstacle to making progress. Civil society groups refer to ‘meaningful human control’ over LAWS; however, it’s not clear what ‘meaningful’ means in practice. Lethality was also raised: while the term is commonly used in international forums, its relevance and centrality remain under discussion. The International Committee of the Red Cross, for example, doesn’t include the term ‘lethal’ in its position on LAWS. For some, whether a weapons system is lethal to humans is only one consideration.

Also discussed were the desirability and possibility of predictability, both in the systems themselves and in combat, including how machines fail, how they can be fooled, and how predictability in combat makes defending against them easier.

But countries should also consider the growing public alarm about autonomous weapons systems, and the effect of this alarm on national resilience and social cohesion. For example, one participant noted that thousands of technologists and others, including Elon Musk, Stephen Hawking and Jack Dorsey, have signed an open letter which states, ‘[A] military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.’

The discussion also touched on the question of whether giving machines the power to decide to take a human life would violate the core principles of proportionality, distinction, targeting and human dignity set out in international humanitarian law. The need for new black-letter law and norms of behaviour was discussed at length. For example, can autonomous lethality be dealt with under the framework of the Convention on Conventional Weapons, which includes 11 guiding principles?

It was also noted that the Martens clause, which, in the absence of treaty protection, protects people who find themselves in battle zones under the principles of humanity and ‘the dictates of human conscience’, may negate the need for new laws.

But in May, in an important development since the roundtable was convened, the ICRC updated its position. It now recommends that states adopt new legally binding rules on LAWS.

On the issue of compliance, parallels were drawn with the global normative acceptance by states of the Comprehensive Nuclear-Test-Ban Treaty and general consensus on non-deployment of biological and chemical weapons.

Participants also noted precedents for pre-emptive bans on exploding bullets and blinding lasers, as well as the civil society campaigns that led to bans on anti-personnel landmines and cluster munitions, many of whose members now form part of the Campaign to Stop Killer Robots. No clear agreement was reached, though the remarkably civil discussions found more commonality than anticipated.

Reflecting on the roundtable, if Australia wants to develop defensive LAWS capabilities, what might that look like in practice? Autonomous sentries across our northern borders replacing NORFORCE? Could Australia station LAWS alongside the expanding US presence in the Northern Territory? Might we deploy loitering and suicide munitions using dubious facial recognition in new expeditionary conflict zones?

These sentry and loitering capabilities are already in use in the Middle East, developed and deployed by Israel and Turkey. A report emerged late last month that a Turkish weaponised drone seemingly made its own decision to target a human. Yet even if their tactical and operational use keeps a human in the final decision to fire, and arguably accountable, the contribution of new technologies to strategic success needs to be considered carefully: what will be the real legacy of almost two decades of drone strikes in Afghanistan?

Political and civil-society support for an outright ban is likely to grow, especially if levels of trust in governments continue to be low. Australia needs to think about whether it wants to encourage a LAWS arms race, given questions over its ability to compete with the scale of systems being developed by potential adversaries. The ANU LAWS roundtable was a good start, but there’s a long way to go before Australia fully understands the implications of its current position.

China’s Chang’e-5 mission: flagging the moon for conflict?

While the landing of China’s Chang’e-5 spacecraft on the moon earlier this month was a significant event on humanity’s collective journey through space, the planting of a Chinese flag on the lunar surface raises serious concerns about the extension of national laws to other celestial bodies.

As Beijing has increasingly sought to extend its laws beyond its borders, it may seek to use the flag-planting to justify the extension of its jurisdiction to activities on the lunar surface.

On 1 December, the Chang’e-5 probe touched down near Mons Rümker in the moon’s Oceanus Procellarum region, an area characterised by a high volcanic complex and located some distance from the US Apollo mission sites. The craft collected rock and dirt samples before lifting off on 3 December. A capsule holding the samples was delivered safely to earth on 17 December.

The probe’s successful landing was well received by the international community, with scientists optimistic that the mission would contribute to a greater understanding of the moon’s history. NASA’s science mission chief, Thomas Zurbuchen, tweeted a hope that ‘everyone will benefit from being able to study this precious cargo that could advance the international science community’.

On 4 December, the China National Space Administration released images showing the national flag unfurled on the moon, the first flag planted on the lunar surface since the Apollo 17 mission. Images of the flag-planting were disseminated widely across Chinese state media channels Xinhua and CGTN. News reports focused on the flag’s ‘genuine fabric’ materials, designed to withstand all manner of lunar conditions, and explained that it was a reminder of the ‘excitement and inspiration’ felt during the Apollo missions.

However, while the flag-planting may have been overlooked or dismissed by international observers as a purely symbolic gesture without legal consequences, the reality is more nuanced. Beyond the Chang’e-5 mission’s stated intent to further scientific progress, the placing of the Chinese flag on the moon bears significant national importance for Beijing, carries heavy political overtones and elicits various concerns under international law.

Historically, the planting of flags by nations and colonial powers has often been associated with the declaration of sovereignty, involving an assertion of title over territory based on acts of discovery. Six American flags were planted at the landing sites during the Apollo missions, though the US later clarified its intentions in a law passed by Congress, describing the flag-planting as ‘a symbolic gesture of national pride in achievement’.

In Beijing’s previous attempts to apply domestic national security laws extraterritorially, the protection of China’s national flag in all circumstances has been central. On 2 October, after pro-Hong Kong protestors burned the Chinese flag outside Beijing’s embassy in London, Chinese officials issued a statement saying that this action amounted to desecration and violated China’s 1990 Flag Law and the 2020 Hong Kong National Security Law. They urged UK officials to investigate and prosecute those responsible.

Protection of the flag is a national security matter under China’s flag law and any attempt to deface or tarnish the national flag is illegal. In the US, in contrast, legal precedent set in 1989 holds that desecration of the American flag is a protected form of free speech under the constitution, despite its being codified as an offence under US law.

China’s extraterritorial reach under Article 38 has been one of the Hong Kong security law’s defining features. The law applies to offences committed against Hong Kong ‘from outside the Region by a person who is not a permanent resident of the Region’. While this sparked derision over the absurdity and infeasibility of a law applicable anywhere on earth, it did not stop Beijing from charging American citizens residing in the US under the law. However, unlike the noncorporeal imposition of the security law, the flag, as a physical object and an extension of Chinese jurisdiction, presents a stronger case for extraterritorial enforcement.

The flag-planting therefore reinforces the extraterritorial extension of Chinese law on the lunar surface in two ways. First, because treatment of the flag is tied to national security, any attempt to remove it or disturb its environment by future astronauts may be construed as desecration. Second, Beijing’s actions symbolically constitute the creation of a ‘lunar safety zone’, denoting the exercise of de facto ownership of the area around an object. This would undercut Washington’s designs to establish safety zones on the moon in the near future, as outlined under the recent Artemis Accords.

Article 2 of the 1967 Outer Space Treaty prohibits any country from claiming territory on the moon through an assertion of sovereignty or any other means. Under international law, therefore, China cannot use the flag-planting to claim sovereignty over any part of the moon. However, Beijing may seek to emulate Washington’s legal reasoning under the accords, arguing that establishing safety zones around its flag doesn’t constitute the drawing of static borders, but rather an incorporation of environmental factors and circumstances pertinent to the flag’s continuing condition.

China’s calculated decision to plant its national flag on the moon could well be the start of further escalation in the ongoing competition between Washington and Beijing in outer space. It is to be hoped that both sides continue to abide by the international rules-based order laid down by the Outer Space Treaty and respect the continued exploration and use of outer space as the province of all mankind.

Filling the void: protecting detainees in armed conflict

Whether carried out by government authorities or non-state armed actors, seizing and holding one’s adversaries continues to be an innate and expected feature of war. In 2013 alone, the International Committee of the Red Cross (ICRC) visited more than 756,000 detainees in over 1,700 places of detention. A majority of those detainees were held in situations of ongoing armed conflict.

International humanitarian law (IHL) generally does not prohibit the taking of detainees by either government armed forces or armed groups; rather, it focuses on ensuring that any detention is carried out humanely. Indeed, from a humanitarian perspective, the availability of detention as an option can, in many cases, mitigate the lethal violence and overall human cost of war. Detainees might consider themselves lucky to be alive, but their fate may be uncertain and conditions in detention may be harsh.