Introduction
States are developing and exercising offensive cyber capabilities. The United States, the United Kingdom and Australia have declared that they have used offensive cyber operations against Islamic State,1 but some smaller nations, such as the Netherlands, Denmark, Sweden and Greece, are also relatively transparent about the fact that they have offensive cyber capabilities.2 North Korea, Russia and Iran have also launched destructive offensive cyber operations, some of which have caused widespread damage.3 The US intelligence community reported that as of late 2016 more than 30 states were developing offensive cyber capabilities.4
There is considerable concern about state-sponsored offensive cyber operations, which this paper defines as operations to manipulate, deny, disrupt, degrade, or destroy targeted computers, information systems or networks.
This paper proceeds on the assumption that common definitions of offensive cyber capabilities and cyber weapons would aid norm formation and discussions on responsible use.
This paper proposes a definition of offensive cyber operations that is grounded in research into published state doctrine, is compatible with definitions of non-kinetic dual-use weapons from various weapons conventions and matches observed state behaviour.
In this paper, we clearly differentiate offensive cyber operations from cyber espionage, and address espionage only in so far as it relates to and illuminates offensive operations. Only offensive cyber operations below the threshold of armed attack are considered: no cyber operation thus far has been classified as an armed attack, and states appear to be deliberately operating below the threshold of armed conflict to gain advantage.5
This paper examines the usefulness of defining cyber weapons for discussions of responsible use of offensive cyber capabilities. Two potential definitions of cyber weapons are explored—one very narrow and one relatively broad—before we conclude that both definitions are problematic and that a focus on effects is more fruitful.
Finally, the paper proposes normative courses of action that will promote greater strategic stability and reduce the risk of offensive cyber operations causing extensive collateral damage.
Definitions of offensive cyber capabilities
This section examines definitions of offensive cyber capabilities and operations in published military doctrine and proposes a definition consistent with state practice and behaviour. We first define operations and capabilities to clarify the language used in this paper.
What are capabilities? In the context of cyber operations, having a capability means possessing the resources, skills, knowledge, operational concepts and procedures needed to produce an effect in cyberspace. In general, capabilities are the building blocks that can be employed in operations to achieve a desired objective. Offensive cyber operations use offensive cyber capabilities to achieve objectives in or through cyberspace.
US military joint doctrine defines offensive cyber operations as ‘operations intended to project power by the application of force in and through cyberspace’. One category of offensive cyber operations that US doctrine defines is ‘cyberspace attack’—actions that manipulate, degrade, disrupt or destroy targets.6
UK military doctrine defines offensive cyber operations as ‘activities that project power to achieve military objectives in, or through, cyberspace. They can be used to inflict temporary or permanent effects, thus reducing an adversary’s confidence in networks or capabilities. Such action can support deterrence by communicating intent or threats.’7 UK doctrine further notes that ‘cyber effects will primarily be in the virtual or physical domain, although some may also be in the cognitive domain, as we seek to deny, disrupt, degrade or destroy.’
In both UK and US military doctrine, offensive operations are a distinct subset of cyberspace operations, which also include defensive actions; intelligence, surveillance and reconnaissance; and operational preparation of the environment (non-intelligence enabling activities conducted to plan and prepare for potential follow-on military operations).
This is consistent with the Australian definition, which is that offensive cyber operations ‘manipulate, deny, disrupt, degrade or destroy targeted computers, information systems or networks’.8
The Netherlands’ defence organisation sees offensive cyber capabilities as ‘digital resources whose purpose it is to influence or pre-empt the actions of an opponent by infiltrating computers, computer networks and weapons and sensor systems so as to influence information and systems’.9
Two common threads in state definitions are identified. Offensive cyber operations:
- are intended to deny, disrupt, degrade, destroy or manipulate targets to achieve broader objectives (henceforth called denial and manipulation effects)
- have a ‘direct real-world impact’.10
Another observation is that these definitions stress that ‘while cyber operations can produce stand-alone tactical, operational, and strategic effects and achieve objectives, they must be integrated’ into a military commander’s overall plan.6 This doctrine, however, originates from military establishments in a relatively narrow range of countries. In other states, offensive cyber operations may well be less integrated into military planning and may instead be conducted to achieve the political and/or strategic goals of the state leadership.11
There are relatively few publicly available offensive cyber doctrine documents, but observed behaviour indicates that states such as Iran, North Korea and Russia are using operations that cause denial and manipulation effects to support broader strategic or military objectives.
By definition, offensive cyber operations are distinct from cyber-enabled espionage, in which the goal is to gather information without producing an effect on the target. When information gathering is a primary objective, stealth is needed to avoid detection and to maintain the persistent access that allows longer term intelligence gathering.
This definition does classify relatively common events, such as ransomware attacks, website defacements and distributed denial of service (DDoS) attacks, as offensive cyber operations.
Although the ‘manipulate, deny, disrupt, degrade or destroy’ element of the definition lends itself to segmentation into different levels, further examination shows that segmentation based on the type of attack is not particularly useful. Information and communication technology (ICT) infrastructure is inherently interconnected, and even modest disruption can cause relatively drastic second-order effects. Modifying the state of a control system, for example, could lock a person’s garage or launch a nuclear missile.
Conversely, seriously destructive attacks, such as data wipers, can have damaging effects on very different scales. Compare the damage caused when North Korea infiltrated the Sony Pictures Entertainment network12 with the damage caused by the Russian-launched NotPetya attack.13 At Sony Pictures, more than 4,000 computers were wiped and, although that cost US$35 million to investigate and repair, it did not significantly affect the broader Sony Corporation14 and did not directly affect other entities. The NotPetya event also involved data destruction, but it was probably the most damaging cyberattack thus far: US$300 million in damages for FedEx; US$250–300 million for Danish shipper Maersk15; more than US$310 million for American pharmaceutical giant Merck; US$387 million for French construction giant Saint-Gobain; and US$150 million for multinational food company Mondelez International. Flow-on effects from the disruption to the logistics and pharmaceutical industries may also have affected the broader global economy.
Table 1 is a selected list of state activities that this paper defines as offensive cyber operations. Those operations are assessed for the scale, seriousness, duration and specificity of their effect.
Ultimately, the seriousness of a cyberattack rests on the effects it causes or enables. Assessments of scale and seriousness should therefore measure the ultimate consequences of an incident, including its economic and flow-on effects (a minimal sketch of such an assessment record follows Table 1).
Table 1: State offensive cyber operations
Operation | Seriousness | Scale | Duration | Specific |
---|---|---|---|---|
NotPetya | High—data destruction | Global. Affected organisations in Europe, US and Asia (Maersk, Merck, Rosneft, Beiersdorf, DHL and others) but also a concentration in Ukraine (banking, nuclear power plant, airports, metro services). | Short-term, with recovery over months to a year. | No |
WannaCry | High—data destruction | Global, but primarily in Russia, Ukraine, India and Taiwan, affecting multinationals, critical infrastructure and government. | Short-term, with recovery over months to a year. | No |
Sony Pictures Entertainment | High—data destruction | Focused on Sony Pictures Entertainment (<7,600 employees), a subsidiary of Sony Corporation (131,700 employees in 2015) (a) | Short-term, with recovery in months. | Yes |
Stuxnet | High—destruction of centrifuges | Focused on Iran’s nuclear weapon development programme | <1 year | Yes |
Various offensive cyber operations against ISIS by US, Australia, UK | Varied—some data destruction but also denial and manipulation effects | Focused on Islamic State | Unknown | Yes |
Estonia 2007 | Medium—temporary denial of service | Principally Estonian electronic services, affecting many European telcos and US universities | 3 weeks | Yes |
(a) Sony Corporation, US Securities and Exchange Commission Form 20-F, FY 2016 [online]
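To make the assessment dimensions in Table 1 concrete, the following is a minimal illustrative sketch in Python of an incident assessment record; the field names and the example values (drawn from the NotPetya row) are our own and do not reflect any agreed standard.

```python
from dataclasses import dataclass

@dataclass
class IncidentAssessment:
    """Illustrative record for assessing a state offensive cyber operation.

    Field names and value scales are hypothetical; they simply mirror the
    dimensions used in Table 1 (seriousness, scale, duration, specificity).
    """
    operation: str
    seriousness: str   # e.g. 'High—data destruction'
    scale: str         # geographic and organisational spread
    duration: str      # time to recovery
    specific: bool     # did effects stay confined to the intended target?

# Example drawn from the NotPetya row of Table 1
notpetya = IncidentAssessment(
    operation='NotPetya',
    seriousness='High—data destruction',
    scale='Global, with a concentration in Ukraine',
    duration='Short-term, with recovery over months to a year',
    specific=False,
)
```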
Cyber weapons and arms control
Cyber weapons are often conceived of as ‘powerful strategic capabilities with the potential to cause significant death and destruction’,16 and in an increasingly interconnected world it is easy to speculate about catastrophic effects. It is also difficult to categorically rule out even seemingly outlandish offensive cyber scenarios; for example, it seems unlikely that a fleet of self-driving cars could be hacked to cause mass destruction, but it is hard to say with certainty that it is impossible.17 Although the reality is that offensive cyber operations have never caused a confirmed death, this ‘uncertainty of effect’ is potentially destabilising, as states may develop responses based on practically impossible worst-case scenarios.
In a Global Commission on the Stability of Cyberspace issue brief, Morgus et al. look at countering the proliferation of offensive cyber capabilities and conclude that limiting the development of cyber weapons through traditional arms control or export control is unlikely to be effective.18 This paper agrees, and contends that arms control and export control agreements have tended to succeed where the following three conditions are present:
- Capability development is limited to states, usually because weapons development is complex and highly industrialised.
- There is a common interest in limiting proliferation.
- Verification of compliance is possible.
Perhaps only one of these three conditions—a common interest in limiting proliferation—exists in the world of cyber weapons, although even this is not immediately self-evident.
In the context of international arms control, a limited number of capability developers usually means that only states (and ideally only a small number of states) have the ability to develop weapons of concern, that states have effective means to control proliferation, or both. In cyberspace, however, there are many non-state actors—in the cybersecurity industry and in the criminal underworld19—developing significant cyber capability. Additionally, the exchange of purely digital goods is far harder for states to control than the exchange of physical goods. Because states have neither a monopoly on capability development nor effective control over the spread of digital goods, they cannot credibly limit broader capability development.
For chemical, biological and nuclear weapons, the human suffering caused by their use is generally abhorred and there is a very broad interest in restraining their use. Offensive cyber operations, by contrast, could achieve military objectives without causing human suffering; for example, an adversary’s warfighting capability could be degraded by disrupting its logistics so that objectives are achieved without fighting. It has been suggested that states have a ‘duty to hack’ when the application of offensive cyber operations will result in less harm than all other applications of force,20 and the UK’s Minister of State for the Armed Forces, Nick Harvey, noted in 2012 that offensive cyber operations could be ‘quite a civilised option’ for that reason.21
Additionally, cyber weapons can be developed entirely in environments where visibility for verification is impossible, such as in air-gapped networks in nondescript office buildings. Unlike for weapons of mass destruction, there are no factories or supply chains that can be examined to determine whether capabilities exist and stockpiles are being generated.22
Unlike many military capabilities—say, nuclear-armed submarines or ballistic missiles—offensive cyber capabilities are perishable: once defenders have technical knowledge of the potential attack, effective countermeasures can be developed and deployed relatively easily.23
For this reason, states already have a considerable interest in limiting the proliferation of offensive cyber capabilities—they want to keep those capabilities secret so that they can exploit them. The US Vulnerabilities Equities Process (VEP) policy document24 states that when the US Government discovers vulnerabilities,25 most are disclosed, but some are kept secret for law enforcement or national intelligence purposes where the risk posed by the vulnerability is judged to be outweighed by possible intelligence or other benefits. Undoubtedly, all states that engage in vulnerability discovery will have an interest in keeping at least some vulnerabilities secret so that they can be exploited for national security purposes.
Defining cyber weapons
Despite scepticism about the effectiveness of traditional arms control, this paper develops both a narrow and a broad definition of cyber weapons to test whether those definitions could be useful in arms control discussions. The definitions have been developed by examining selected international weapons conventions and previously published definitions.
One problem with defining cyber weapons is that cyber technologies are primarily dual-use: they can be used for both attack and defence, for peaceful and aggressive purposes, and for legal and illegal activities. Software can also be quite modular, such that many cybersecurity or administrative tools can be brought together to form malware.
Weapons in the physical domain have been categorised into three groups: small arms and light weapons; conventional arms; and weapons of mass destruction (WMD).26 Given that cyber weapons are often conceived of as potentially causing mass destruction, and because WMDs are subject to the most rigorous international counter-proliferation regimes, this paper examines definitions through the lens of the two dual-use WMD counter-proliferation regimes: the Chemical Weapons Convention and the Biological Weapons Convention.27
Biological weapons, a class of WMD, are described as (our emphasis):28
- microbial or other biological agents, or toxins whatever their origin or method of production, of types and in quantities that have no justification for prophylactic, protective or other peaceful purposes;
- weapons, equipment or means of delivery designed to use such agents or toxins for hostile purposes or in armed conflict.
The Chemical Weapons Convention defines chemical weapons as (our emphasis):29
- toxic chemicals and their precursors, except where intended for purposes not prohibited under the Convention and as long as the types and quantities are consistent with such purposes; and
- munitions and devices, specifically designed to cause death or other harm through the toxic properties of those chemicals …
These conventions, both of which deal with dual-use goods, define by exclusion: only substances that do not or cannot have peaceful purposes are defined as weapons. The material of concern is not inherently a problem—it is how it is used.
In the context of armed conflict, the Tallinn Manual characterises cyber weapons by the effects they have, not by how they are constructed or their means of operation:
cyber weapons are cyber means of warfare that are used, designed, or intended to be used to cause injury to, or death of, persons or damage to, or destruction of, objects, that is, that result in the consequences required for qualification of a cyber operation as an attack.30
Herr and Rosenzweig define cyber weapons as malware that has a destructive digital or physical effect, and exclude malware used for espionage.31 Herr also characterises malware as modular, consisting of a propagation element that the malware uses to move from origin to target; an exploit that allows the malware to execute arbitrary commands on the target system; and a payload that executes some malicious instructions.
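A minimal descriptive sketch of Herr’s three-part model, purely as a data structure; the class name, field names and example values are our own labels and contain no functional code.

```python
from dataclasses import dataclass

@dataclass
class MalwareComponents:
    """Descriptive sketch of Herr's three-part malware model (labels only)."""
    propagation: str  # how the malware moves from origin to target
    exploit: str      # how it gains the ability to execute commands on the target
    payload: str      # the instructions it ultimately executes

# Hypothetical, purely descriptive example
example = MalwareComponents(
    propagation='self-propagation across a network',
    exploit='a vulnerability in a network service',
    payload='irreversible encryption of data',
)
```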
Rid and McBurney define cyberweapons as ‘computer code that is used, or designed to be used, with the aim of threatening or causing physical, functional, or mental harm to structures, systems, or living beings’.32
A narrow definition
Following the logic of dual-use weapons conventions, a narrow definition of cyber weapons is software and information technology (IT) systems that, through ICT networks, cause destructive effects and have no other possible uses. The IT system aspect of this definition requires some level of integration and automation in a weapon: code that wipes a computer hard disk is not by itself a weapon—alone, it cannot achieve destructive effects through cyberspace—but it could form part of a weapon that wipes hard drives across an entire organisation.
Based on this narrow definition, Table 2 shows our assessment of whether reported malware examples would be defined as cyber weapons.
Table 2: Cyber weapon assessment
Malware or system | Description | Weapon |
---|---|---|
Distributed denial of service (DDoS) systems | Aggregation of components, including bots and control software, such that they have no other purpose than to disrupt internet services. | Yes, although this is arguable because effects tend to be temporary (disruptive and not destructive). Each individual component is likely to have non-destructive uses. |
Dragonfly a.k.a. Energetic Bear campaign (a) | Espionage campaign against critical energy infrastructure operators that developed industrial control system sabotage capabilities. | No. The campaign was both manual and espionage-only; it never disrupted critical operations. However, it demonstrated an intent to develop capabilities to disrupt critical infrastructure. |
BlackEnergy 2015 Ukrainian energy grid attack (b) | Access to a Ukrainian energy company was used to disrupt electricity supply. | No. The BlackEnergy malware was very modular and this attack was quite manual, although the malware does contain destructive capability. |
Industroyer a.k.a. Crashoverride malware (c) | Malware in a Ukrainian energy supply company was used to disrupt electricity supply. | Yes. Integrated malware disrupted electricity supply automatically. |
TRISIS malware (d) | Malware intended to sabotage a Saudi Arabian petrochemical plant. | Yes. Malware with no espionage capability was specifically designed to destroy a petrochemical plant. |
WannaCry | Self-propagating ransomware that encrypted data, in practice irreversibly. | Yes. Malware with no espionage capability was designed to irreversibly encrypt computer hard drives. |
Metasploit | An integrated collection of hacking tools that can be used for defence, for espionage, or for destruction and manipulation. | No. Metasploit has many non-destructive uses and is not integrated into a system that causes destruction. |
NotPetya | A self-propagating data wiper. | Yes. Automatically destroyed data. |
Flame, Snake, Regin | Very advanced modular malware. | No. These could cause denial and manipulation effects and could be automated but have other uses. They seem to be designed primarily for espionage. |
Stuxnet | Self-propagating malware that subverted industrial control systems to destroy Iranian nuclear fuel enrichment centrifuges. | Yes. Highly tailored to automatically destroy targeted centrifuges. |
Large-scale man-in-the-middle attack system (e.g. mass compromise of routers) (e) | Compromise of many mid-points could provide large-scale access that could be used for intelligence, destruction or manipulation, or even to patch systems. | No. Intent is everything here. |
PowerShell | A powerful scripting and computer administration language installed by default with the Windows operating system. | No. Many non-destructive uses. |
A PowerShell script designed to automatically move through a network and wipe computers. | Destructive intent is codified within the script commands. | Yes. |
- a) Symantec, Dragonfly: Western energy companies under sabotage threat, 2014, online.
- b) Kim Zetter, ‘Inside the cunning, unprecedented hack of Ukraine’s power grid’, Wired, 3 March 2016, online.
- c) Andy Greenberg, ‘“Crash override”: the malware that took down a power grid’, Wired, 12 June 2017, online; Robert M Lee, ‘Crashoverride’, Dragos, 12 June 2017, online; Anton Cherepanov, Robert Lipovsky, ‘Industroyer: biggest threat to industrial control systems since Stuxnet’, welivesecurity, 12 June 2017, online.
- d) Nicole Perlroth, Clifford Krauss, ‘A cyberattack in Saudi Arabia had a deadly goal: experts fear another try’, New York Times, 15 March 2018, online; TRISIS malware: analysis of safety system targeted malware, Dragos, online.
- e) US CERT, Russian state-sponsored cyber actors targeting network infrastructure devices, Alert TA18-106A, 16 April 2018, online.
This narrow definition is consistent with the approach of both the Biological Weapons Convention and the Chemical Weapons Convention, which deal with dual-use goods by defining weapons narrowly, through exclusion.
The definition captures intent by exclusion: tools whose intent is ambiguous are excluded, and only tools that can be used solely for destruction are included.
This narrow definition is problematic for three reasons.
First, it does not map directly onto state definitions of offensive cyber activities—actions that manipulate, disrupt, deny and degrade would likely not be captured, so much offensive cyber activity would not involve cyber weapons. For example, the offensive cyber operation that US Cyber Command conducted against Islamic State’s propaganda operations did not require cyber weapons: Cyber Command obtained Islamic State administrator passwords, deleted content and changed passwords to lock out the original owners.33 The operation could have been conducted entirely using standard computer administration tools. No malware, no exploit, no software vulnerability and certainly no cyber weapon was needed.
Second, even the most destructive offensive cyber operations could be executed without ever using a cyber weapon. For example, a cyber operation that triggered the launch of conventional or nuclear weapons would not require a cyber weapon.
Third, this definition could easily be gamed by adding non-destructive functionality to otherwise malicious code.
A broader definition
A broader definition of cyber weapons could be software and IT systems that, through ICT networks, manipulate, deny, disrupt, degrade or destroy targeted information systems or networks.
This definition has the advantage that it would capture the full range of tools that could be used for offensive cyber operations.
Many cyber operations techniques, however, take advantage of computer administration tools, and the difference between espionage and offensive action is essentially a difference in intent; for example, the difference between issuing a command to copy files and issuing one to delete them. Indeed, it is possible to conduct cyber operations—both intelligence and offensive operations—using only legitimate tools such as the scripting language Windows PowerShell.34 Yet it makes no sense to define everything that could be used for destructive effects as a cyber weapon; labelling PowerShell a cyber weapon is nonsensical.
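As a trivial illustration of that point about intent, in the sketch below (the file names are placeholders) the same Python standard library copies a file and then deletes it; nothing in the tooling distinguishes the espionage-like action from the destructive one.

```python
from pathlib import Path
import shutil

# Placeholder file created purely for illustration.
target = Path('example_document.txt')
target.write_text('example contents')

# Espionage-like intent: copy the file so its contents can be read elsewhere.
shutil.copy(target, 'example_document_copy.txt')

# Destructive intent: the very same standard library deletes the file instead.
# The tooling is identical; only the command issued differs.
target.unlink()
```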
This definition would also include perfectly legitimate tools that state authorities and the cybersecurity community use for law enforcement, cyber defence, or both.
These two definitions highlight the dilemma involved in defining cyber weapons. A narrow definition can perhaps be more readily agreed to by states, but excludes so much potential offensive cyber activity that efforts to limit cyber weapons based on that definition seem pointless. The broader definition would capture tools used for so many legitimate purposes that agreement on their status as weapons is unlikely, and limitations could well harm network defenders more than attackers.
Options for control
This paper therefore agrees with Morgus et al.35 that limiting the development of cyber weapons by controlling the development of defined classes of weapons is unlikely to be effective. There are, however, options for more effective responses that focus on affecting the economics of offensive cyber operations and the norms surrounding their application.
Shaping the markets involved in offensive cyber capability development to raise the cost of developing capabilities would encourage states to conduct operations sparingly.
One market associated with cyber capabilities is that for software vulnerabilities and their associated exploits (code that takes advantage of a vulnerability). Software vulnerabilities are often exploited by malware to gain unauthorised access to computer systems and are often—although not always—required for offensive cyber capabilities. Ablon and Bogart have found that the market price for software exploits is sensitive to supply and that prices can rise dramatically for in-demand, low-supply products.36 A multifaceted approach to restricting supply could raise the cost of acquiring exploits and therefore the cost of building offensive cyber capabilities.
Shifting the balance of vulnerability discovery towards patching (rather than exploitation for malicious purposes) would raise the value of all vulnerabilities. One possibility, raised by Morgus et al. and suggested by Dan Geer in a 2014 Black Hat conference keynote, is that software vulnerabilities be bought for the express purpose of developing fixes and patches.37
A secondary response would be to enable more effective repair of vulnerabilities, closing the loopholes that make computer exploitation possible. NotPetya, assessed by the US Government to be the most destructive cyberattack thus far,38 used publicly known vulnerabilities for which patches had been available for months. Effective cyber hygiene would have prevented much of the damage that NotPetya caused.
From a policy point of view, this could be addressed at several levels: encouraging research into vulnerability mitigation and more effective patching processes; educating decision-makers to prioritise and resource vulnerability discovery and patching; developing government policy to encourage more effective patching regimes; and promoting VEP policies in other states (discussed below).
Whenever a vulnerability is exploited for any purpose—including cyber espionage, offensive operations and cybercrime—there is a risk of discovery, which could ultimately result in patching and loss of the ability to exploit the vulnerability. Raising the value of all vulnerabilities will encourage states to use offensive cyber capabilities sparingly to avoid discovery and hence loss of capability via patching.
A complementary approach would be to change incentives within software development to encourage secure application development. Again, this could be approached at many levels: altering computer science curricula; promulgating secure coding standards;39 and shifting the balance of liability for commercial code, for example.
Reducing the supply of exploits and raising their cost encourages states to conduct cyber operations in a way that avoids attracting attention to mitigate the risk of discovery and loss of capability. This effort to operate quietly would vastly reduce the risk of inadvertent large-scale damaging events.40
Recommendation: Encourage the establishment of national vulnerabilities equities processes
There is a common interest among all states that conduct cyber operations—defensive or offensive—in actively assessing the risks and benefits of keeping vulnerabilities secret for exploitation. The US VEP document states that in ‘the vast majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest’. Assuming this is true, the adoption of VEP policies in many states would tend to result in more responsible disclosure and patching, and therefore in a reduced supply of vulnerabilities and exploits.
This reduced supply of vulnerabilities would raise the cost of offensive capability development and therefore restrict proliferation and reduce the use of offensive operations.
Recommendation: Promote focused operations
Unlike a kinetic weapon, for which direct consequences such as blast radius may be well understood, offensive cyber operations can easily have unintended consequences. Since states are conducting offensive cyber operations below the threshold of armed conflict, another option is to promote operations that are tightly focused, so that they do not affect innocent bystanders.
We have assessed that both the Sony Pictures and Stuxnet attacks were specific, as both affected only their intended targets and did not cause direct effects elsewhere (Table 1). The NotPetya and WannaCry incidents were not specific: they affected many organisations worldwide.
It is possible, therefore, to conduct focused offensive cyber operations that are specific and limit collateral damage; it is not an inherent fact of cyberspace that operations cannot be targeted and specific. To reduce the risks of collateral damage, there would be merit in promoting a norm of ‘due diligence’ for offensive cyber operations, requiring that states invest in rigorous testing to ensure that effects are contained before engaging in offensive cyber operations.
Recommendation: Measure damage for more effective responses
In addition to altering the computer vulnerability lifecycle, governments should also respond directly to cyber operations. Effective responses should be both directed against perpetrators and proportionate. Currently, both the identification of perpetrators (attribution) and the assessment of damage (to determine a proportionate response) are suboptimal. Much has been said about attribution, and this paper will not cover it further.
When state-sponsored operations such as NotPetya and WannaCry occur, there is no independent assessment of damage. An accurate accounting of harm could be used to justify an appropriately proportionate response.
NotPetya has been called ‘the most destructive and costly cyber-attack in history’.41 It seems that total cost estimates of over US$1 billion are based on collating the financial reports of public companies such as Merck,42 Maersk,43 Mondelez International44 and FedEx,45 and then adding a ‘fudge factor’ to account for all other affected entities. Publicly listed companies have formal reporting obligations, but the vast majority of entities affected by NotPetya do not, and it seems likely that the cost of NotPetya has been significantly understated.
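As a rough illustration of that collation, summing only the publicly reported figures cited earlier in this paper already exceeds US$1 billion; the sketch below takes the Maersk figure at the midpoint of its reported range and adds no ‘fudge factor’ for unlisted or unreported entities.

```python
# Publicly reported NotPetya losses cited earlier in this paper (US$ million)
reported_losses = {
    'FedEx': 300,
    'Maersk': 275,          # midpoint of the reported US$250-300 million range
    'Merck': 310,
    'Saint-Gobain': 387,
    'Mondelez International': 150,
}

total = sum(reported_losses.values())
print(f'Reported losses from listed companies alone: US${total} million')
# Roughly US$1.4 billion, before any allowance for entities without reporting obligations
```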
An independent body that identifies common standards, rules and procedures for assessing the cost of cyberattacks could enable a more accurate measure of damage. The International Civil Aviation Organization’s system for air crash investigations may provide a framework.46 It assigns roles to various stakeholders, including the airline, the manufacturer, the registrar and so on. The investigation is assigned to an autonomous safety board with the task of assessing what happened, not who was at fault.47 For a cyber incident, an investigation board could include a national cybersecurity centre, the affected entity, the manufacturer of the affected IT system, relevant software developers and other stakeholders.
Using assessments of scope and seriousness to develop proportionate responses would encourage attackers to construct focused and proportionate offensive cyber operations.
Recommendation: Invest in transparency and confidence building
We have noted above that uncertainty about the effects caused by offensive cyber operations has the potential to be destabilising. State transparency in the use of offensive cyber operations could address this concern and help promote norms of responsible state behaviour.
Figure 1 shows the lifecycle of an offensive cyber capability, starting at the point that a state forms an intent to develop capability. Resources are committed; intelligence is gathered to support capability development; capability is developed; the environment is prepared (by deploying malware, for example); and finally the operation is launched and effects are observed. Crucially, there are distinct elements during this lifecycle that require operation on the public internet and are therefore potentially observable: intelligence gathering, operational preparation of the environment, and offensive cyber effects (in orange).48
Figure 1: Offensive cyber capability lifecycle
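A minimal sketch of that lifecycle as an ordered set of stages, with the stages the paper identifies as potentially observable flagged separately; the stage labels are our own shorthand for the steps described above.

```python
from enum import Enum

class Stage(Enum):
    """Stages of the offensive cyber capability lifecycle described in Figure 1."""
    FORM_INTENT = 'state forms intent to develop capability'
    COMMIT_RESOURCES = 'resources are committed'
    GATHER_INTELLIGENCE = 'intelligence gathering to support capability development'
    DEVELOP_CAPABILITY = 'capability is developed'
    PREPARE_ENVIRONMENT = 'operational preparation of the environment (e.g. deploying malware)'
    CAUSE_EFFECTS = 'operation is launched and effects are observed'

# Stages that require operating on the public internet and are therefore
# potentially observable from the outside (per the discussion above).
POTENTIALLY_OBSERVABLE = {
    Stage.GATHER_INTELLIGENCE,
    Stage.PREPARE_ENVIRONMENT,
    Stage.CAUSE_EFFECTS,
}
```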
Although it is not possible to see or measure cyber weapons, to quantify them or inspect ‘cyber weapon factories’, a level of confidence-building transparency can still be achieved. Public doctrine that defines a nation’s strategic intent and its assessment of acceptable and responsible uses of offensive cyber operations would be extremely helpful.
This visibility may be sufficient to enhance confidence building as predictability increases. Many responsible states will be reluctant to deviate from public statements regarding offensive cyber capability development, because effects may become visible at a later stage, prompting incident response, forensic analysis and possibly political attribution and embarrassment.
There is already some public documentation of offensive cyber capabilities. There are unclassified doctrines, official statements and unofficial reporting on the states that have—or are developing—offensive capability. There are also voluntary national reports in the context of the UN Group of Governmental Experts (UNGGE). Additionally, open-source verification publications from research institutes, such as the SIPRI Yearbook, the IISS Military Balance and reports similar to the Small Arms Survey, are authoritative and credible sources that inform policy actions by states. Finally, independent analysis and reporting from cybersecurity companies such as Symantec, Crowdstrike, BAE Systems and FireEye provides invaluable technical information. These firms also play a key role in early detection and response.
Summary and conclusion
Offensive cyber operations are defined as operations in cyberspace to manipulate, deny, disrupt, degrade or destroy targeted computers, information systems or networks.
This paper has examined narrow and broad definitions of cyber weapons and found them problematic for use in control discussions.
However, a range of other measures would help limit the use of offensive cyber capabilities and reduce the risk of collateral damage when they are used:
- Markets for the vulnerabilities that are used to create offensive cyber capabilities can be affected to make capability development more expensive. VEP processes would form one element of a broader effort to patch vulnerabilities and restrict supply.
- Promoting the principle that offensive cyber operations should be focused, including through the concept of ‘due diligence’, and taking active steps to limit unintended consequences would reduce the effects of operations on innocent bystanders.
- Responses to cyber incidents could also be improved by better accounting of the damage incurred. A robust assessment of damage using agreed standards would enable a more directly proportionate response and would help reinforce the expectation of specific and proportionate offensive cyber operations.
Finally, increased state transparency would promote acceptable norms of behaviour. Although monitoring and verification are difficult, this paper presents an offensive cyber operation lifecycle that indicates that various stages provide some visibility, which could build confidence.