During the Iraq and Afghanistan wars, American service members grappled with not merely an elusive adversary but also a terrifyingly rudimentary yet lethal innovation—the improvised explosive device (IED). Accounting for a staggering 48 percent (2,640) of US service-member deaths from known causes in those operations, these makeshift weapons underscored a bitter paradox of modern warfare: technological preeminence does not necessarily confer battlefield supremacy.
For decades, the US Department of Defense has committed prodigious resources to preserving a technological advantage in military engagements, an undertaking that has catalyzed groundbreaking innovation yet has also been mired in instances of profligate expenditure and, distressingly, the adoption of inferior technological solutions. This long-standing approach is now set to be applied to artificial intelligence. As the AI revolution introduces a new dimension of complexity to warfare, senior policymakers are captivated by the potential to seamlessly integrate this transformative technology into the DoD operational ecosystem.
Yet, the US military’s experience with the IED—a weapon as cost-effective as it was devastating—should serve as a cautionary tale. As the world ventures into AI integration, DoD must remain cognizant of the lessons gleaned from the post-9/11 wars and the inefficacious efforts of the Joint IED Defeat Organization (JIEDDO). It is insufficient to simply procure the most advanced technology; understanding its intricate workings, fostering transparency in its acquisition, and nurturing synergies between the government and defense contractors are essential to circumventing past pitfalls. As we stand on the brink of the AI epoch, the DoD approach must be one of both eager anticipation and judicious caution.
Fiscal Mismanagement and Strategic Missteps in the Counter-IED Effort
A Government Accountability Office report revealed that JIEDDO received $18 billion in funding from 2006 to 2011. Even with years of funding and a DoD directive, JIEDDO failed to develop a comprehensive strategic plan to account for total research and development costs or to measure the effectiveness of the technology purchased. The absence of such a plan made it impossible to assess the efficacy of counter-IED efforts and opened the door to subpar technologies from defense contractors. As a result, millions of dollars went to defense contracting firms that produced equipment that lacked end-user feedback, required extensive reliance on on-site contractor maintenance, or exhibited substandard performance.
Despite the disappointing outcomes and inefficiencies of JIEDDO, DoD surprisingly amplified its efforts, evolving JIEDDO into the even more expansive Joint Improvised-Threat Defeat Agency (JIDA). With the expansion came a 33 percent increase in the budget, accompanied by a substantial staff of three thousand. In 2015, the DoD inspector general discovered that JIDA neglected to evaluate the efficacy of over $100 million of counter-IED equipment. Out of the ninety-five counter-IED initiatives that were either approved for further funding or terminated from 2012 to 2015, eight had funding or termination decisions made without adequate assessments, costing $112.5 million. Furthermore, the systematic failure extended to the military services and combatant commands, which did not provide sufficient data to JIDA, thereby hindering the determination of the effectiveness of these counter-IED efforts.
As the lessons from the counter-IED program illustrate, DoD must employ careful judgment and thorough scrutiny when embracing new technologies such as AI. It’s incumbent upon senior national security and DoD officials to prevent wasteful spending and inferior performance by meticulously evaluating AI contracts before their procurement. Despite the transformative potential of advanced AI technologies, their ethical and efficacious deployment is paramount. Failure to manage this properly could expose us to severe national security threats and significant financial losses, especially if opportunistic vendors take advantage of DoD. To navigate the complexities inherent in adopting AI programs, DoD must devise comprehensive strategies involving all pertinent stakeholders, ensure judicious budget allocation, and implement measures to thwart contractor exploitation.
Ensuring Prudent AI Procurement
There are many ways to exploit the government, but a lack of transparency often allows companies to charge exorbitant prices for products and services. One example is TransDigm, a sole-source supplier of parts to DoD, which charged the government up to 4,451 percent more than the actual cost of the items purchased. After a congressional scolding and a DoD inspector general investigation confirmed the predatory practices, TransDigm agreed to refund the Pentagon $16 million. Shockingly, TransDigm did all of this without violating any regulations or DoD policies.
Instances of defense contractors exploiting legal loopholes (e.g., labeling goods and services as specialized military products to exempt them from providing cost or pricing data during contract negotiations) underscore the importance of accountability and transparency within DoD programs. The case of TransDigm’s price predation is a stark reminder of this fact. Senior officials pursuing cutting-edge AI technologies must guard against the risk of adopting these solutions hastily and without thorough scrutiny. Therefore, a stringent vetting and oversight process is not just desirable but essential to ensure the responsible utilization of AI in defense programs.
In May 2021, the Department of Defense took a significant step forward by issuing a memorandum outlining its strategic approach to implementing what it calls RAI—responsible artificial intelligence. The RAI pathway prioritizes ethical and lawful AI use, governance modernization, system operator proficiency standardization, and risk reduction measures throughout the AI product and acquisition life cycle. The RAI strategy could serve as a blueprint for a broader, government-wide plan to coordinate efforts around AI technology acquisition effectively. However, while this is a step in the right direction, more must be done.
In light of JIDA’s failures to effectively address issues with counter-IED equipment, the Department of Defense must prioritize accountability and transparency in AI-specific defense contracting. These changes include but are not limited to giving special attention to the POM (program objective memorandum) cycle, revising pricing regulations, and eliminating exemptions that allow for withholding cost or pricing data. By doing so, the department will promote ethical AI use and safeguard taxpayer dollars and the integrity of defense programs.
The POM cycle is an annual budgeting process that DoD uses to devise its budget requests for the forthcoming fiscal year. Aimed at aligning strategic objectives with the efficient use of taxpayer dollars, this complex process draws input from diverse stakeholders, including military leaders, civilian officials, and Congress. By integrating a broad spectrum of viewpoints, DoD can prioritize mission-critical programs and subject them to comprehensive evaluation and review.
However, the POM cycle has its shortcomings. Its time-consuming, bureaucratic, and resource-intensive nature may be at odds with the agility required to adapt to rapidly changing circumstances. Furthermore, the process can be skewed by biases, leading to discord and potential compromise, which may culminate in suboptimal decision-making. To address these limitations, DoD officials must initiate measures to streamline the POM process, ensuring it is an effective tool in selecting AI project partners.
Promoting transparency and stakeholder collaboration is vital for enhancing efficiency, effectiveness, and equity in the POM process. We can achieve these improvements by incorporating end-user feedback, leveraging data and analytics for informed decision-making, and periodically reassessing and adjusting the process to confront emerging challenges. Clear criteria and standards for evaluating AI proposals are essential, as is engaging with a diverse range of potential partners to ensure a broad perspective on available options. The ultimate goal is to streamline the POM process and enable DoD to identify and partner with the most innovative and capable AI vendors and suppliers.
Considering AI’s inherent uncertainty and complexity, expectation management and heightened risk awareness become indispensable. DoD must approach AI adoption cautiously, avoiding overly optimistic vendor projections or promises of guaranteed results. Contracts must align with national security interests above all else, with all parties clearly understanding their roles and responsibilities. Transparency between the vendor and the government is paramount. DoD should require vendors to offer comprehensive insights into their AI algorithms, machine learning procedures, neural network development, data sourcing, and overall methodologies in a way that senior officials can understand. Senior officials, in turn, must understand the intricacies of AI technologies and effectively communicate them to all involved parties—from their teams within DoD to vendors and, when necessary, the public. These officials must also exercise transparency about the processes and decisions made within AI adoption.
Moreover, the performance of AI models fundamentally depends on the quality of their training data. A biased or discriminatory dataset can trigger adverse effects and jeopardize the safety of DoD service members and broader national security interests. This reality underscores the necessity for AI model training to utilize unbiased and diverse datasets. Therefore, DoD must require vendors to implement stringent privacy measures, identify and mitigate biases proactively, and swiftly eliminate discriminatory data.
DoD, and the US government as a whole, must ensure that it does not procure substandard products, or products unwanted by the civilian market that vendors are attempting to offload onto the military. Commitment to interoperability and open standards from vendors is also key, as it prevents vendor lock-in and ensures adaptability to evolving needs and technologies. Lastly, DoD must navigate the adoption of AI with an understanding of its long-term financial and operational implications. Beyond just evaluating the costs, DoD needs to examine how AI might enhance productivity, streamline tasks, and bolster decision-making capabilities while remaining conscious of the potential security vulnerabilities it could introduce. Furthermore, this analysis must account for extensive training for uniformed personnel, the potential hiring of specialized civilian personnel, and the responsibility of managing emerging ethical and legal issues related to AI. Ultimately, DoD should look beyond immediate effects to evaluate how AI could mold its long-term operations, financial state, and overall capabilities.
By prioritizing these factors, DoD can make informed decisions about AI implementation and avoid the potential consequences of ineffective AI software and wasteful contracts. Given AI’s significant impact on national security, it’s essential to be vigilant in this process and ensure its use is ethical and effective.
Navigating the AI Revolution Responsibly
The undeniable power of AI stands poised to revolutionize defense capabilities. With AI’s unmatched data processing capabilities, DoD is on the brink of an era in which new tools will dramatically transform battlefield dynamics. However, harnessing this transformative technology will be no small feat; it requires careful navigation between the extraordinary potential of AI and the dangers of misstep and misuse.
Drawing from the costly lessons of counter-IED efforts, where thousands of American service members lost their lives and billions of taxpayer dollars were spent, the procurement and implementation of revolutionary AI technology demand a higher level of prudence, understanding, and transparency. For DoD, integrating AI into its operations involves rigorous scrutiny of potential partners and solutions. This includes a firm commitment to leveraging AI to enhance defense capabilities without compromising ethical standards or national security. Meticulous evaluation of AI technology and prioritizing data quality, unbiased training, and interoperability are essential. Furthermore, a realistic assessment of long-term costs and benefits is imperative. Undoubtedly, the stakes are high in navigating this advanced technology landscape. Still, by adopting these measures, DoD can prevent strategic missteps and strengthen its defense posture amid an ever-changing geopolitical environment.
Maj. Nicholas Dockery is a Downing scholar, a graduate of the Yale Jackson School of Global Affairs, and an active duty Special Forces officer.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: NORAD and USNORTHCOM Public Affairs