Last month, cyber-defense analyst and geostrategist Pukhraj Singh penned a stinging epitaph, published by MWI, for global norms-formulation processes that are attempting to foster cyber stability and regulate cyber conflict—specifically, the Tallinn Manual. His words are important, and should be taken seriously by the legal and technical communities that are attempting to feed into the present global governance ecosystem. However, many of his arguments seem to suffer from an unjustified and dismissive skepticism of any form of global regulation in this space.
He believes that the unique features of cyberspace render governance through the application of international law close to impossible. Given the range of developments that are in the pipeline in the global cyber norms proliferation process, this is an excessively defeatist attitude toward modern international relations. It also unwittingly encourages the continued weaponization of cyberspace by fomenting a “no holds barred” battlespace, to the detriment of the trust that individuals can place in the security and stability of the ecosystem.
“The Fundamentals of Computer Science”
Singh argues that the “fundamentals of computer science” render the rules of international humanitarian law (IHL)—which serve as the governing framework during armed conflict in other domains—inapplicable, and that lawyers and policymakers have gotten cyber horribly wrong. Singh theorizes that, in a case where the United States had pre-positioned espionage malware in Russian military networks, that malware could have been “repurposed or even reinterpreted as an act of aggression.”
The possibility of a fabricated act of espionage being used as justification for an escalated response exists in the realm of conventional espionage, too. A reconnaissance operation that has been compromised can likewise be repurposed midway into a full-blown armed attack, or reinterpreted as justification for an escalatory response. However, international law holds that self-defense may only be exercised when the “necessity of self-defense is instant, overwhelming, leaving no choice of means, and no moment of deliberation.” To legitimize any action taken under the guise of self-defense, the threat would have to be imminent and the response both necessary and proportionate. There is nothing inherently unique in the nature of cyber conflict that would render the traditional law of self-defense moot.
Further, the presumption that cyber operations are ambiguous and often uncontrollable, as Singh suggests, is flawed. An exploit that is considered “deployment-ready” is the result of an attacker’s fine-tuning of variables—until it is determined that the particular vulnerability can be exploited in a reasonably reliable manner. An exploit may have to be refined for quite some time before it behaves exactly as the attacker intends. While there may still be unidentified factors that can alter the behavior of a well-developed exploit, a skilled operator or malware author would nonetheless have reasonable certainty that the exploit code’s execution will produce only one of a limited set of predefined outcomes.
It is true that a number of remote exploits that target systems and networks may make use of unreliable vulnerabilities, where outcomes may not be fully apparent prior to execution—and sometimes even afterward. However, for most deployment-ready exploits, this would simply not be the case. In fact, the example of the infamous Stuxnet malware, which Singh uses in his article, helps buttress our point.
Singh questions whether India should have interpreted the widespread infection of systems within the region—which also happened to affect certain critical infrastructure—as an armed attack. This question can be readily dismissed, since we now know that Stuxnet did not cause any deliberate damage to Indian computing infrastructure. A 2013 report by journalist Joseph Menn correctly states that “the only place deliberately affected [by Stuxnet] was an Iranian nuclear facility.” Therefore, for India to claim that the mere infection of systems located within its territory constituted an armed attack, it would have to concretely demonstrate that the operators of Stuxnet caused “grave harm”—as described in IHL—purely by way of having infected those machines, through execution of malicious instructions programmed into the malware’s payload.
At the same time, it should not be dismissed that the act of the Stuxnet malware infecting a machine could very well be interpreted by a state as constituting an armed attack. However, given the current state of advancement in malware decompilation and reverse engineering, the process of deducing the instructions that a particular malicious program seeks to execute can in most cases be performed reasonably reliably. Thus, for a state to make such a claim, it would have to prove that the malware did indeed cause grave harm—harm that meets the “scale and effects” threshold laid down in Nicaragua v. United States, whether caused by operator interaction or by preprogrammed instructions—along with sufficient reasoning and evidence for attributing it to a state.
An analysis of the Stuxnet code made it apparent that the operators were seeking out machines that had the Siemens STEP 7 or SIMATIC WinCC software installed. The authors of the malware quite clearly had prior knowledge that the nuclear centrifuges they intended to target used a particular type of programmable logic controller, with which the STEP 7 and WinCC software interacted. On the basis of this prior knowledge, the authors of Stuxnet designed the malware so that, upon infection, target machines would report to the Stuxnet command-and-control server—sending identifiers such as operating system version, IP address, workstation name, and domain name—along with whether or not the infected system had the STEP 7 or WinCC software installed. This allowed the operators of Stuxnet to easily identify and distinguish the machines they would ultimately attack to fulfill their objectives. In effect, it gave them a measure of control over the scale of the damage they would deliberately cause.
It has been theorized that the malware reached the nuclear facility in Iran through a flash drive. It may be true that the widespread and unnecessary propagation of the worm—which could be described as it “going out of control”—was not something the operators had intended, as it would attract unwanted attention and raise alarm bells across the board. Nonetheless, in the several years since Stuxnet was active, there have been no documented cases of it causing grave harm to Indian (or other) computers. For all practical purposes, it could be said that the risk of collateral damage was minimized because the operators were able to direct the execution of the damaging components of the malware, to a degree that could be interpreted as having complied with IHL—thereby making it a calculated cyberattack with controllable effects.
However, if the adverse effects of the operation were to be indiscriminate (i.e., machines were tangibly damaged immediately upon being infected), and could not be controlled by the operator within reasonable bounds, then the rules of IHL would render the operation illegal—a red line that, among other declarations, the recent French statement on the application of international law to cyberspace recognizes.
“Bizarre and Regressive”: The Westphalian Precept of Territoriality
Singh’s next grievance is with the precept of territoriality and sovereignty in cyberspace. However, the reasoning he provides decrying this concept is unclear at best. The International Group of Experts authoring the Tallinn Manual argued that “cyber activities occur on territory and involve objects, or are conducted by persons or entities, over which States may exercise their sovereign prerogatives.” They went on to note that even though cyber operations can transcend territorial domains, they are conducted by “individuals and entities subject to the jurisdiction of one or more state.”
Contrary to Singh’s assertions, this reasoning is entirely in line with the “defend forward” and “persistent engagement” strategies adopted by US defense experts. In fact, Gen. Paul Nakasone, commander of US Cyber Command—whose interview Singh cites to explain these strategies—explicitly states in that interview that “we must ‘defend forward’ in cyberspace as we do in the physical domains. . . . [Naval and air forces] patrol the seas and skies to ensure that they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace.” This is a recognition of the Westphalian precept of territoriality in cyberspace—which includes the right to take pre-emptive measures against adversaries before the people and objects within a nation’s sovereign borders are negatively impacted.
Below-the-Threshold Operations
Singh also argues that most cyber operations would not reach the threshold of an armed attack needed to invoke IHL. He concludes, therefore, that applying the rules of IHL “bestows another garb of impunity upon rogue cyber attacks.” However, as discussed above, the application of IHL does not require a certain threshold of intensity, but merely the application of armed force that is attributable to a state.
Therefore, laying down “red lines” by, for example, applying the principle of distinction, which seeks to minimize damage to civilian life and property, actually works toward setting legal rules that seek to prevent the negative civilian fallout of cyber conflict. There appears to be no reason why any cyberattack by a state should harm civilians without the state using all means possible to avoid this harm. If there is an ongoing armed conflict, this entails compliance with the IHL principles of necessity and proportionality, ensuring that any collateral damage ensuing as a result of an operation is proportionate to the military advantage being sought.
Moreover, we agree that certain information operations may not cause any damage in terms of injury to human life or property. But IHL is not the only framework for governing cyber conflict. Ongoing cyber norms proliferation efforts are attempting to move beyond the rigid application of international law to account for the unique challenges of cyberspace. Despite the flaws in the process thus far, individuals from a variety of backgrounds and disciplines must engage meaningfully to shape effective regulation in this space. Singh’s “garb of impunity” exists where there is a lack of restrictions on the collateral damage caused by cyber operations, to the detriment of civilian life and property alike.
Obstacles in Developing Customary International Law
Singh’s third argument concerns the fetters limiting the development of customary international law in the cyber domain. This is a valid concern. Until recently, most states involved in cyber operations have adopted a stance of silence and ambiguity with regard to their legal position on the applicability of international law in cyberspace, or their position on the Tallinn Manual.
There are multiple reasons for this. First, states are not certain that the rules of the Tallinn Manual protect their long-term interests in gaining covert operational advantages in the cyber domain, which acts as a disincentive to strongly endorsing the rules laid out therein. Second, even those states keen on applying and adhering to the manual may not be able to do so in the absence of technical and effective processes that censure other states that do not comply. Given this ambiguity, states have demonstrated a preference for engaging in cyber operations and counteroperations that remain below the threshold—in other words, those that do not bring IHL into play. However, as others have convincingly argued, it is incorrect to assume that the current trend of silence and ambiguity will continue.
Recent developments indicate that a variety of normative processes and actors may render the Tallinn Manual more relevant as a focal point in these discussions. The UK, France, Germany, Estonia, Cuba (backed by China and Russia), and the United States have all engaged in public posturing in advocacy of their respective positions on the applicability of international law in cyberspace, in varying degrees of detail—which is essentially customary international law in the making. The statements made by a number of delegations at the recently concluded first substantive session of the United Nations’ Open-Ended Working Group covered a broad range of issues, from capacity building to the application of international law—a first step toward fostering consensus among the variety of global actors.
Positive Conflict and the Future of Cyber Norms
The final argument—a theme that runs from the beginning of Singh’s article—is a stark criticism of Western-centric cyber policy processes. Despite attempts to foster inclusivity, efforts like those that produced the Tallinn Manual are still driven largely from and by the United States in an attempt to, as Singh describes it, keep “cyber offense fully potentiated.” This is an unfortunate reality, but one that is not limited solely to the cyber domain. For example, in an excellent paper written in 2001, retired US Air Force Maj. Gen. Charles Dunlap explained “that ‘lawfare,’ that is, the use of law as a weapon of war, is the newest feature of 21st century combat.”
We are therefore presented with two options: either sit back and witness the hegemonization of policy discourse by a limited number of powerful states, or actively seek to contest these assumptions by undertaking adversarial work across standards-setting bodies, multilateral and multi-stakeholder norms-setting forums, and academic and strategic settings. In a recent paper, international law scholar Monica Hakimi argues that international law can serve as a fulcrum for facilitating positive conflict in the short run between a variety of actors across industry, civil society, and military and civilian government entities, which can lead to the projection of shared governance endeavors in the long run. Despite its several flaws, the Tallinn Manual can serve as this type of fulcrum.
In writing a premature eulogy for efforts to bring to realization a set of norms in cyberspace, Singh dismisses the fact that, historically, global governance regimes have taken considerable time and effort to come into being, emerging only after an arduous process of continuous prodding and probing. This process necessitates that existing assumptions—and the bases on which they are constructed—are challenged regularly, so that we can ultimately arrive at an agreed understanding of what works and what does not. Rejecting these processes in their entirety foments a global theater of uncertainty, with no benchmarks for cooperation that stakeholders in this domain can reasonably rely on.
Arindrajit Basu is an international lawyer by training, working as a Research Manager at the Centre for Internet and Society, India. His work has previously appeared in a variety of academic journals and magazines, including The Diplomat, The Wire, and Astropolitics.
Karan Saini is a security technologist and researcher, currently working as a Programme Officer at the Centre for Internet and Society, India. He has worked closely with several security-critical projects, and has discovered and reported security vulnerabilities and weaknesses in the infrastructures of several major companies.
I am honoured by the authors' interest in my essay. Without distracting the reader, I would like to briefly point out a fundamental technical anomaly in Basu and Saini's central argument.
If the reader spends enough time mulling it over, she may realise that due to this very anomaly, the presupposition of applicability of IHL to cyber operations also crumbles.
Had I known that a specific discursive approach of my essay would become the opening salvo, I would have made it clearer that cyber operations, offensive toolchains and exploitation — while overlapping — have their own distinct emergent properties which feed into overarching improbability, uncertainty and ambiguity. My argument based on ambiguity began from the former and ended with the latter. I was trying to apprise the reader of its cumulative effect, but perhaps I should have been even more verbose.
I will take specific real-world examples:
CYBER OPERATIONS:
Despite the many global Active Defence, counterintelligence and counter-offense programs which the NSA fields, it had to rely on phone intercepts and HUMINT to attribute the 2012 DDoS attacks to Iran. Former NSA deputy director Rick Ledgett conceded that while the NSA had complete access to the Iranian bot-herding system, it just could not rely on it to take an executive decision. Mind you, a DDoS attack is generally considered low-grade.
It is clear from the DoJ indictments that the NSA was getting a livestream of intelligence on APT28 and APT29 from its Dutch partner AIVD. Obama still had to rely on highly placed double agents like Oleg Smolenkov, Sergei Mikhailov and Ruslan Stoyanov to complete the picture.
Despite substantial TECHINT on Bureau 121, it was HUMINT again that came to the USG’s rescue during the Sony Pictures escalation.
And these are trivial attacks that we are talking about!
If the topmost tier of your threat intelligence framework is plagued with so much uncertainty and warped cost-benefits (mole in the Kremlin just for cyber intel, seriously?), how can the chain of command prosper?
OFFENSIVE TOOLCHAINS:
We still don’t fully understand the functionality of all modules of the Equation Group (NSA’s TAO), Slingshot (SOCOM) and Lamberts (CIA’s Vault 7). We don’t understand their targeting criteria, their geopolitical imperatives, and their CONOPS just from the reverse-engineered code. We don’t understand how they all map to the larger national security portfolio or remit – which is a must for any norm-setting exercise. This, even though a major portion of these toolchains have been leaked.
Reverse engineering is not a panacea. The intent of an operation doesn’t reside in the code. One must read JA Guerrero-Saade’s paper on false flags to realise that, even though the whole industry was picking apart the technical evidence, we almost got carried away by the deception. We DO get carried away by false flags despite sure-shot technical evidence.
If you can’t even rely on technical evidence, then what?
Companies like Google and Microsoft have invested a LOT OF MONEY in building high-level ontological frameworks fusing various techniques like tactical cyber intelligence, code similarity engines, heuristics and telemetry, etc. And now it’s possible to undertake basic strategic intelligence case management like profiling of threat actors to understand their limits, boundaries of knowledge and incentives.
How can this be considered standard? How can cyber norms exercises expect nation states to have such cost-prohibitive capabilities? Forget about the costs, how would governments get the required telemetry? And would this ever be fully declassified for normative frameworks?
Guerrero-Saade’s blog post on GossipGirl, supra threat actors and Flame 2.0 is telling in terms of the amount of analytical horsepower that was required to derive such strategic conclusions. How can this be considered normal or expected? And I am not even talking about the imbalance of power due to the politics of access. Can cyber norms be built on such an inequitable foundation?
From an operator’s standpoint, generally only a small portion of the toolchain manifests over adversarial infrastructure. It is, essentially, like a rocket programme with many launch stages (e.g. Kill Chain, ATT&CK) and a massive mission control (90% of effort is spent on the targeting framework). I don’t even want to explain how many things can and DO go wrong.
EXPLOITATION:
Unfortunately, the authors mostly focused on the lowermost tier of my argument – exploitation. Sure, within a very narrow adversarial environment, it does look like a predictable exercise. But operators spend the majority of their time and resources keeping the adversarial infrastructure primed for a cyber operation. That’s like a bunch of river rafters trying to keep the raft STILL in a torrent of water. Chris Inglis calls the preparatory aspects of exploitation the cyber-ISR framework. He feels that battling this uncertainty and fluidity should be the main aim of a cyber intelligence agency – it’s THAT crucial and expensive. On a lighter note, despite the hype around the WhatsApp zero-day, the NSO Group giving multiple missed calls to its targets to activate the exploit is a case in point. We haven’t even talked about Aaron Adams’s paper on the layers of exploit mitigations that the operator may encounter.
SUMMARY
The legal argument is fine and stuff. My problem is that a part of this essay confuses operations, toolchains and exploitation. I am back to square one, as the emerging geo-strategic taxonomy relies on THIS VERY delineation.
Cyber operations, offensive toolchains and exploitation are three different things. Geo-strategy lives in the first, mechanised warfare in the second, and the political economy of proliferation in the third.