The future of US national security will be shaped profoundly by how machines transform and accelerate human activity. Machine learning has already produced cutting-edge developments such as GPT-3, a natural language processing model that can imitate human text; deepfakes enabled by generative adversarial networks; and AlphaGo, the first computer program to defeat a professional Go player. Human-machine teaming is the vanguard of military and defense innovation. However, effectively leveraging this phenomenon, and mitigating the dangers of an adversary's nefarious use of it, is unimaginable without public-private partnerships.
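To make the text-imitation point concrete, here is a minimal sketch of machine-generated text using the openly available GPT-2 model via the Hugging Face transformers library. GPT-2 stands in for GPT-3, whose weights are not public, and the prompt is invented for illustration.

```python
# Minimal sketch: machine-imitated human text with an open GPT-family model.
# GPT-2 is used as a stand-in for GPT-3, whose weights are not public.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The future of national security will be shaped by"  # illustrative prompt
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```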
Currently, much of the debate surrounding AI public-private partnerships centers on tech employees refusing to directly aid what they view as the business of war. The actual sentiments of tech professionals toward collaboration with DoD, however, are more nuanced. According to a survey conducted by Georgetown University's Center for Security and Emerging Technology, only a small minority of respondents (7 percent) expressed extremely negative feelings about working on DoD AI projects. Still, the survey found that many AI professionals associate such projects with "killer drones." Given DoD's technical readiness and modernization goals, however, government involvement in the commercial development of emerging technologies cannot be avoided on a broader scale. For this reason, increased awareness in the private sector regarding the defense applications of AI is crucial.
The US government and private sector companies must partner on artificial intelligence; without such collaboration, the United States is at risk of both trailing behind its adversaries on AI development and failing to establish ethical AI frameworks.
Adversarial Applications of Military AI
Artificial intelligence is at the forefront of US great power competition with China, and the Chinese Communist Party (CCP) has displayed a deep commitment to developing AI. The CCP's Military-Civil Fusion (MCF) strategy seeks to harness the private sector to develop core technologies such as AI, quantum computing, semiconductors, big data, and 5G. Though only in its early stages, the strategy aims to massively mobilize civilian economic sectors to serve the CCP's defense ambitions by offering incentives to Chinese enterprises. Total MCF investment over the span of several years is estimated at $68.5 billion. Owing to its unique system of authoritarian state capitalism, China is able to fund civilian innovation for military applications far more directly than the United States can.
The expanding presence of AI in commercial products also raises the risk of nonstate actors employing AI maliciously. Apart from state adversaries, dangerous nonstate actors such as the Islamic State have dabbled in rudimentary AI technology, specifically drones, for use in their operations. Most standard commercial drones use computer-vision technology, which detects, classifies, and tracks objects. Unfortunately, the dual-use nature of drones and AI-enabled tech makes their regulation and proliferation difficult to control. The Islamic State's drone of choice is the DJI Phantom, a Chinese-manufactured model with obstacle-sensing and avoidance capabilities. Furthermore, the group has refurbished commercial drones to fit its purposes, using them not just for reconnaissance and recording aerial videos but also for geographic mapping and delivering explosives. By using commercially available drones, the Islamic State has been able to conduct reconnaissance and intelligence operations swiftly while reducing threats to its fighters. As commercial drones become more advanced, with enhanced machine-learning capabilities, nonstate groups will capitalize on their increased sophistication. According to a 2018 report, there is a credible threat of terrorists repurposing autonomous vehicles and advanced commercial AI drones for explosives delivery and targeted assaults. Nonstate actors' interest in emerging technologies and AI is a dangerous trend with future implications for escalation.
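For readers unfamiliar with how this computer-vision layer works, the following is a minimal, hypothetical sketch of the detect-and-classify step using a pretrained detector from the open-source torchvision library. It is illustrative only; manufacturers' onboard software, including DJI's, is proprietary, and the dummy frame stands in for real camera input.

```python
# Minimal sketch of the detect-and-classify step in a drone-style
# computer-vision pipeline, using a pretrained torchvision detector.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# One dummy RGB frame (values in [0, 1]) standing in for a camera frame.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]

# Each detection carries a bounding box, a class label, and a confidence
# score; a tracker would then associate boxes across successive frames.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.5:
        print(label.item(), round(score.item(), 2), box.tolist())
```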
No nation or private organization has a monopoly on AI technologies; therefore, we cannot directly curb the development of AI by nonstate actors or state adversaries. In fact, most AI scientists and researchers openly publish their algorithms, code libraries, and training data sets, the integral pieces that individuals can assemble and put to use, including for malign purposes. The United States and its allies could fall behind the AI curve if public-private collaboration is not prioritized, as their adversaries continue to accelerate technological acquisition and use.
Collaboration for Ethical AI
Public-private collaboration is instrumental in creating ethical AI frameworks for US defense and national security, and it must expand to prevent the future detrimental use of AI. Several private-sector companies and academic consortiums have already developed their own guidelines. The Institute of Electrical and Electronics Engineers, for example, has published a global treatise on AI that emphasizes ethically designed AI systems promoting universal human values, and the Partnership on AI to Benefit People and Society unites civil society and research groups to create best practices, increase public awareness, and serve as a discussion forum for AI. Additionally, within DoD itself, DARPA is an excellent example of how private-sector expertise paired with government support produces ethical innovation: its Urban Reconnaissance through Supervised Autonomy program aims to prevent civilian casualties on the battlefield, falling well within ethical norms for AI use.
The value of public-private and cross-sector initiatives goes beyond defense applications. In light of the recent siege of the US Capitol by violent domestic actors, conversations surrounding online content moderation and Big Tech's obligation to protect democracy have intensified. Artificial intelligence is at the forefront of this challenge. Specifically, developing algorithmic accountability to mitigate AI's black-box dilemma is instrumental in reducing algorithmic bias and in sustaining repeatable content moderation policies. Effective content moderation is vital to the US government's national security interests and increasingly important to the brand reputations of technology companies such as Twitter, Facebook, and Apple, which means there are real incentives to work together.
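As one deliberately simplified illustration of algorithmic accountability, the sketch below trains an inherently interpretable moderation classifier whose per-word weights can be audited, in contrast to a black-box deep model. The toy posts and labels are invented for illustration; a production system would use far richer data and review processes.

```python
# Minimal sketch of one accountability technique: an interpretable
# moderation classifier whose learned weights can be directly inspected.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = ["join the rally and stay peaceful",
         "burn it all down tonight",
         "great discussion, thanks everyone",
         "bring weapons to the capitol"]
labels = [0, 1, 0, 1]  # 1 = flag for human review (toy labels)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts)
clf = LogisticRegression().fit(X, labels)

# The weights are auditable: which words most strongly drive a flag?
ranked = sorted(zip(vectorizer.get_feature_names_out(), clf.coef_[0]),
                key=lambda pair: -pair[1])
for word, weight in ranked[:5]:
    print(f"{word}: {weight:+.2f}")
```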
Recommendations
There is a wide array of avenues for promising cooperation between public and private entities. US policymakers can work with technology companies to strengthen operational security, incentivize socially beneficial AI research, and enforce intellectual property regimes, all key steps toward reducing the probability of malign AI use by both nonstate and state actors. Expanding public-private partnerships on AI will help ensure that the United States does not trail China in AI strategy. While the United States may not have an equivalent of the CCP's MCF strategy, it can close the gap in its ongoing AI race with China through mutually beneficial contracts and private-sector incentives. Lastly, the US government can bolster ethical frameworks for AI by working in tandem with private entities and research institutions: drawing on preexisting private-sector guidelines for AI use, refining content moderation policies and machine-learning algorithms with Big Tech to help prevent offline threats, and leveraging the private sector's technical expertise to design ethical AI for the battlefield.
Artificial intelligence is both a thrilling beacon of modernization for the government and an area of promising growth for private firms. Neither can afford to silo itself, as a lack of collaboration will hinder both US national security interests and opportunities for private-sector innovation. The labyrinthine threat environment of unyielding US adversarial interests and the need for ethical AI frameworks both require cooperation; without it, we are doomed to chaos.
Bilva Chandra is a data analyst at Zignal Labs, a media intelligence firm, and a master's student in the Georgetown Security Studies Program, focusing on technology and security. She previously coauthored a piece in The Strategy Bridge on biosecurity in a post-COVID-19 America, and presented original group research on deplatforming effects at two panels: Zignal's Disinformation and Social Movements Town Hall and the US Army Futures Command's Mad Scientist event. Her areas of expertise and interest include mis- and disinformation, domestic extremism, artificial intelligence, data analytics, data privacy and content moderation issues, and public-sector advisory.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Ars Electronica
My interest is in the nexus between AI and such things as influence operations today, in the age of global capitalism and its ravenous demands for "change." In this regard, consider my thoughts here:
Q: What is it that causes the U.S./the West (and others) to be so vulnerable to an opponent's influence operations today?
A: The fact that the "creative destruction" requirements of global capitalism, so heavily promoted by the U.S./the West for the last few decades, have created so many "losers" throughout the world now.
“Capitalism is the most successful wealth-creating economic system that the world has ever known; no other system, as the distinguished economist Joseph Schumpeter pointed out, has benefited ‘the common people’ as much. Capitalism, he observed, creates wealth through advancing continuously to ever higher levels of productivity and technological sophistication; this process requires that the ‘old’ be destroyed before the ‘new’ can take over. … This process of ‘creative destruction,’ to use Schumpeter’s term, produces many winners but also many losers, at least in the short term, and poses a serious threat to traditional social values, beliefs, and institutions.”
(See the book “The Challenge of Global Capitalism: The World Economy in the 21st Century” by Robert Gilpin; look to the Introduction.)
As can so easily be gleaned from Robert Gilpin's observations above:
a. Should you, as an enemy, wish to target and manipulate a particular group and/or groups in the U.S./the West today (or indeed, given global capitalism's reach now, a group and/or groups almost anywhere in the world today?),
b. And should you wish to use these folks, for example, as "a permanently operating front through the entire territory of the enemy state" (Gen. Valery Gerasimov, Chief of the Russian General Staff — discussing Russian New Generation Warfare), then:
c. You certainly could do no better than to look to the "losers" identified by Robert Gilpin above, to wit: those individuals and groups whose "traditional social values, beliefs, and institutions” are so constantly, and so dangerously, being threatened by such things as (a) global capitalism and (b) its now seemingly constant "creative destruction" requirements.
In this regard, let us now consider how AI might be used by our enemies in these endeavors:
"New technologies encourage people, groups, and states to conduct influence operations and manipulation at scale. Intelligent machines can identify susceptible groups of people and 'measure the response of individuals as well as crowds to influence efforts,' according to Rand Waltzman, deputy chief technology officer at RAND Corporation. Cognitive hacking, a form of attack that seeks to manipulate people’s perceptions and behavior, takes place on a diverse set of platforms, including social media and new forms of traditional news channels. The means are increasingly diversified, as distorted and false text, images, video, and audio are weaponized to achieve the desired effects. Cognitive security is a new multisectoral field in which actors engage in what Waltzman called 'a continual arms race to influence—and protect from influence—large groups of people online.' "
(See the Carnegie Europe article "Artificial Intelligence and the Future of Conflict" by Can Kasapoglu and Baris Kirdemir.)
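To illustrate the kind of at-scale measurement Waltzman describes, consider this minimal, hypothetical sketch: clustering users by simple engagement features so that a susceptible segment's response to a narrative can be tracked over time. The features and data here are invented for illustration; real influence (or counter-influence) operations would draw on far richer behavioral signals.

```python
# Minimal sketch: segmenting an audience by engagement features so that
# crowd-level responses to a narrative can be measured per segment.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy features per user: shares of a narrative, replies, hours active/day.
engagement = rng.random((200, 3))

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit(engagement)
print("Users per segment:", np.bincount(segments.labels_))
print("Segment centers:\n", segments.cluster_centers_)
```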
Bottom Line Thought (Based on the Above):

The events of September 11, 2001 (and the events of January 6, 2021 even more so?), and indeed many others, can all be seen from the perspective of a nexus between AI and influence operations, and, therein, the ease with which one's enemies can identify and manipulate the "losers" of the U.S./the West's "global capitalism" promotions and campaigns of the last few decades.
“All in all, the 1980s and 1990s were a Hayekian moment, when his once untimely liberalism came to be seen as timely. The intensification of market competition, internally and within each nation, created a more innovative and dynamic brand of capitalism. That in turn gave rise to a new chorus of laments that, as we have seen, have recurred since the eighteenth century: Community was breaking down; traditional ways of life were being destroyed; identities were thrown into question; solidarity was being undermined; egoism unleashed; wealth made conspicuous amid new inequality; philistinism was triumphant.”
(From the book “The Mind and the Market: Capitalism in Western Thought” by Jerry Z. Muller, from the section therein on Friedrich Hayek.)
The availability and use of AI in such efforts today dramatically enhances these capabilities.