Artificial Intelligence & Aggressive Intentions: State Deterrence in a Disruptive Future

Dmytro Sochnyev

Dmytro Sochnyev is a writer for EPIS and is passionate about communicating and narrating issues at the cutting edge of our contemporary security to the wider public. With stops at the University of Toronto, SciencesPo and the Hertie School, his academic journey has cultivated an outside-of-the-box, interdisciplinary and detail-oriented approach to research.

Vitaliy Venislavskyy

Vitaliy Venislavskyy is a PhD candidate in Military Naval History, focusing on Byzantine expansion in the Black Sea. He earned a Masterʼs in Military History from the Military Naval Academy in Lisbon in 2024 and a Bachelorʼs in International Relations from the University of Coimbra in 2020. A researcher at EuroDefense Portugal since 2021, he became President of its Youth Program in 2023. Since 2022, Vitaliy has appeared on national TV as an expert in International Relations and Geopolitics, covering the wars in Ukraine and Gaza. He is passionate about the history of strategic thinking in War Studies.

Ferdinand Wegener

Ferdinand Wegener, a founding member and former board member of EPIS Think Tank e.V., studied law at the University of Cologne, focusing on international and humanitarian law. Now a legal researcher at Luther, he specialises in M&A and corporate law. At EPIS, his work centres on security policy, with past publications on military UAVs. He is also editor-in-chief of CTRL, a German law review on digitalisation and legal tech.



1. Introduction

In a 2017 televised speech, Russian President Vladimir Putin stated that the first country to develop ʻtrue AIʼ will rule the world and that monopolisation in this domain would be “strongly undesirable” for global security (Meyer, 2017). In the previous part of this article in the CTRL Magazine, we discussed the legal ramifications of artificial intelligence (AI) and weapon autonomy in state militaries. In this second part, we examine current and expected tactical applications of autonomy and AI and their consequences for state strategy and deterrence. We will discuss the claim that “killer robots,” a term popularised by disarmament campaigns, portend an evolutionary leap in technology that will critically alter the strategic balance between the haves and the have-nots. Indeed, AI appears to augur a fundamentally disruptive transformation of war in which humans no longer fight with machines merely as tools but also with machines as partners. It is no surprise, then, that dozens of state militaries around the world are procuring or developing weapons systems with AI or autonomous functionalities.

Narratives centred on technological dominance are straightforward and palatable, which is why history is replete with them, but such claims can also be myopic. Even in the era of colonialism, when technological disparities between foes were most pronounced, technology was not always the crucial strategic advantage. The lack of immunity to infectious diseases posed a graver existential problem to the indigenous populations of Central America than Spanish steel and gunpowder, and disease decimated the continental French troops sent to stifle the Haitian rebellion. American forces dropped more than 7.6 million tonnes of ordnance across Southeast Asia in their war against the Communists of North Vietnam, yet failed to achieve their strategic objectives (Clodfelter, 1995). More recently, the repeated inability of state militaries around the world to defeat underequipped but highly motivated insurgencies, despite procurement budgets that invariably dwarf those of their foes, reaffirms that technological superiority is only one of many factors critical to strategic and operational success.

As a result, a critical evaluation of the current and near-future state of battlefield autonomy is needed to separate fact from fiction. Grandiose claims of obsolescence or ʻgame changersʼ are common in popular discussions of military technology. The tank, for example, continues to be requested by procurement officers in militaries around the world, despite allegedly having been made too vulnerable by the proliferation of personal anti-tank weapons, attack helicopters and, most recently, FPV drones and loitering munitions.

This Article is the second Part of a collaboration with CTRL Magazine.
CTRL (Contemporary Technology Review & Law) is a German law review on the intersection of law and digitalisation. This free ePaper caters to digital law enthusiasts and aspiring professionals. It features articles by young professionals and students on AI regulation, blockchain, and data protection. Past editions have included interviews with leaders from top law firms, academia, and the German Federal Office for Information Security.
Find the first part here:
Artificial Intelligence and Aggressive Intentions – Laws for AI Warfare (German/English)

Making predictions can often be a foolʼs errand, but our goal is to evaluate—in as tangible a way as possible—the likelihood of significant changes to the global order. Has battlefield autonomy been truly revolutionary on a strategic level, and if so, what domains of state military strategy are most vulnerable to disruption?

In this article, we will first explore the spectrum of autonomy in weapons and battlefield logistics, from Manned-Unmanned Teaming to Lethal Autonomous Weapons Systems, and discuss current and near-future developments. We will explore how autonomy has shaped, and might yet shape, battlefield technologies and doctrine. Based on the publicly available practical evidence, we then analyse the contribution of these developments to conventional deterrence between states, including nuclear deterrence and unconventional or hybrid threats.

2. The Concept of Manned-Unmanned Teaming (MUMT)

Manned-Unmanned Teaming (MUMT) emerged in response to the evolving challenges of contemporary warfare, where increasingly complex and hostile environments demand enhanced operational effectiveness and survivability. MUMT aims to integrate manned platforms with unmanned systems to leverage the unique strengths of both, creating a more adaptive and resilient military force.

Several critical developments have driven the evolution of MUMT into a core component of modern military strategy.

One of the primary factors behind the development of MUMT has been the rapid advancement of Unmanned Aerial Systems (UAS). As these systems, including Unmanned Aerial Vehicles (UAVs), became more sophisticated, their ability to act as force multipliers became clear. UAS technologies allow for extended reconnaissance, surveillance, and combat operations without directly exposing human personnel to the dangers of the battlefield. The integration of UAS with manned systems has proven to be a powerful combination, extending the capabilities of traditional platforms while minimising risks to human life.

The rise of MUMT also coincides with significant progress in automation and artificial intelligence. Advances in AI, machine learning, and sensor technology have enabled unmanned systems to operate with a high degree of autonomy. This technological shift has paved the way for a new operational framework in which human and machine collaboration is central to mission success. Unmanned systems, supported by AI, can now execute complex tasks, making MUMT a valuable tool in modern combat scenarios where speed, precision, and adaptability are critical.

The adoption of MUMT has also been driven by the changing nature of warfare, particularly the increasing prevalence of asymmetric conflicts. In such environments, where state actors often engage non-state actors in irregular combat, traditional manned platforms are vulnerable. MUMT addresses this vulnerability by enabling unmanned systems to perform high-risk operations such as reconnaissance, targeting, and strike missions, allowing human operators to remain at a safer distance. This approach reduces casualties while maintaining operational effectiveness in unpredictable and dangerous environments.

Moreover, MUMT extends the operational reach of military forces by enabling the pairing of manned platforms with unmanned systems capable of covering greater distances and enduring harsher conditions. Unmanned assets, which typically offer higher endurance, manoeuvrability, and resilience in hostile zones, increase the tactical flexibility of military units. By deploying unmanned systems in conjunction with manned platforms, forces can project power over broader areas, achieving greater operational reach without compromising human safety.

In addition to enhancing national military capabilities, MUMT has proven instrumental in joint and coalition operations. As modern warfare increasingly involves coordination between different branches of the military and allied forces, the ability to integrate and share unmanned assets has become essential. MUMT facilitates seamless coordination between manned and unmanned systems, allowing for better situational awareness, information sharing, and overall operational cohesion on the battlefield.

Figure 1: States developing or integrating autonomous weapons systems in 2024 (own work)

3. The Evolution Towards Human-Machine Teaming (HMT)

The evolution of MUMT has progressed further with the advent of Human-Machine Teaming (HMT), particularly in response to lessons learnt from recent conflicts such as the war in Ukraine. In peer-to-peer confrontations, where adversaries possess relatively equal technological and military capabilities, traditional crewed air platforms have demonstrated vulnerabilities, especially in high-intensity, symmetrical warfare environments. These limitations have led to a shift toward a more integrated approach in which human operators work alongside increasingly autonomous unmanned systems, forming what is now called HMT.

In the HMT model, small Unmanned Aerial Systems (sUAS) and Autonomous Unmanned Systems (AUS) are embedded within a network of interconnected combat systems referred to as the Ubiquitous Combat Cloud (UCC). This cloud operates within a Mobile Ad-hoc Network (MANET), a self-organising wireless communication network that facilitates coordination in the field without requiring centralised infrastructure. This decentralised approach enhances operational flexibility, allowing for the rapid deployment of unmanned systems in response to dynamic battlefield conditions. At the core of this network, a Battlefield Management System (BMS) manages and coordinates the actions of these highly autonomous systems. Whether deployed from manned platforms or launched independently, unmanned systems have become integral to supporting military manoeuvres.

Their ability to operate autonomously while being directed through a robust network ensures that forces can conduct complex operations more efficiently, even in contested environments. The ongoing evolution of MUMT into HMT reflects the shifting demands of modern warfare, where the integration of manned and unmanned systems is no longer just a tactical advantage but a necessity for success in both current and future conflicts. In other words, the main difference between Manned-Unmanned Teaming (MUMT) and Human-Machine Teaming (HMT) lies in the level of integration and autonomy of the systems involved.

While MUMT focuses on the collaboration between human operators and unmanned systems, where the human primarily controls or supervises the unmanned assets, HMT represents a more advanced evolution.

In HMT, the emphasis shifts to true teaming, where autonomous systems act as equal partners alongside humans. These systems are capable of making independent decisions within the scope of their mission, supported by a networked combat environment, thereby reducing the burden on human operators and enhancing overall operational effectiveness.
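To make the MANET concept concrete, below is a minimal sketch, in Python, of infrastructure-free coordination: each node ʻgossipsʼ its local track picture to whichever peers happen to be within radio range, so information reaches a battlefield-management node hop by hop without any central relay. All names, ranges, and message formats are invented for illustration and describe no fielded system.

```python
import math
from dataclasses import dataclass, field

RADIO_RANGE_KM = 10.0  # assumed line-of-sight radio range

@dataclass
class Node:
    """A MANET participant: a drone, vehicle, or dismounted operator."""
    node_id: str
    x_km: float
    y_km: float
    known_tracks: dict = field(default_factory=dict)  # track_id -> last report

    def in_range(self, other: "Node") -> bool:
        return math.hypot(self.x_km - other.x_km, self.y_km - other.y_km) <= RADIO_RANGE_KM

    def gossip(self, peers: list["Node"]) -> None:
        """Push the local track picture to every reachable peer (no central hub)."""
        for peer in peers:
            if peer is not self and self.in_range(peer):
                peer.known_tracks.update(self.known_tracks)

# A recon drone spots a target; repeated gossip rounds propagate it hop by hop.
nodes = [Node("recon-1", 0, 0), Node("relay-1", 8, 0), Node("bms-1", 16, 0)]
nodes[0].known_tracks["T-001"] = {"type": "vehicle", "pos": (1.2, 0.4)}
for _ in range(2):            # two rounds suffice for a two-hop chain
    for n in nodes:
        n.gossip(nodes)
print(nodes[2].known_tracks)  # the BMS node sees the track without direct contact
```

The design point is the absence of a single hub: losing any one node degrades, but does not sever, the shared picture, which is exactly the resilience the UCC concept is after.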

4. State of the Art in Drone Warfare: Drones as an Integrated System

In the war in Ukraine, Russia has deployed a multi-layered drone warfare strategy in which different drones, such as the Orlan, the Lancet, and FPV drones, operate in tandem to enhance reconnaissance, target acquisition, and direct attack capabilities, exemplifying the growing role of Manned-Unmanned Teaming (MUMT) in modern warfare. (A schematic sketch of this layered division of labour follows the list below.)

Orlan Drones (Reconnaissance and Target Acquisition)

– Function: Primarily used for real-time battlefield surveillance and target identification.

– Equipment: Equipped with electro-optical and infrared cameras and signal interception tools.

– Role: Provide precise intelligence to artillery and missile systems, improving target accuracy.

Lancet Drones (Loitering Munitions/Kamikaze Drones)

– Function: Loitering munitions designed to autonomously locate and destroy specific targets.

– Equipment: Armed with modular warheads for targeting personnel, vehicles, and fortified positions.

– Role: Follow Orlan drones to neutralise high-value targets with precision.

FPV Drones (Close-Range Precision Strikes)

– Function: Commercial drones modified for military purposes, used for close-range attacks.

– Equipment: Piloted in real time by operators through video feeds, carrying small explosive payloads.

– Role: Effective in urban warfare and complex terrain, targeting infantry and light vehicles with precision.
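The division of labour in the list above can be read as a simple tasking pipeline: reconnaissance produces tracks, and a crude rule routes each track to the cheapest adequate striker. The following Python sketch is purely illustrative; the categories, ranges, and thresholds are our assumptions, not doctrine.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    """A target report produced by the reconnaissance layer (e.g. an Orlan)."""
    track_id: str
    category: str      # e.g. "artillery", "air-defence", "infantry"
    range_km: float    # assumed distance from the launch point

def assign_striker(track: Track) -> Optional[str]:
    """Crude tasking rule mirroring the layered roles above: loitering
    munitions take deep high-value targets, FPVs take close-in work."""
    if track.category in ("artillery", "air-defence") and track.range_km <= 40:
        return "Lancet"   # loitering munition against high-value targets
    if track.range_km <= 10:
        return "FPV"      # cheap close-range precision strike
    return None           # out of reach: hand off to an artillery fire mission

# Reconnaissance produces tracks; the pipeline picks the striker for each.
for t in [Track("T-01", "artillery", 25.0), Track("T-02", "infantry", 4.0)]:
    print(t.track_id, "->", assign_striker(t) or "artillery fire mission")
```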

        5. The ‘Killer Robot’: a Friend and a Foe

Further along the spectrum of autonomy are various weapons systems with the ability to independently complete tasks, in particular lethal autonomous weapons systems (LAWS). In contrast to the landmines and booby traps sometimes described as the first ʻautonomousʼ weapons, LAWS are capable of autonomous decision-making. Although there is no internationally agreed-upon definition, as the UN publicly admits, these weapons platforms are generally distinguished by their ability to “select and engage targets that have not been previously designated for attack by a human operator” (Work, 2021). This goes beyond the simple assistance provided by HMT or MUMT concepts: the system actively participates in combat activity. That such actions may or may not result in the death of human combatants, or even civilian bystanders, is a critical ethical quandary explored in the first part of this article.

Figure 2: Russian mobile command centre “Ranzhir” (Kuzmin, 2017)

A robust analysis of the state of autonomous weapons is complicated by both state secrecy and a lack of practical evidence. Certainly, some battlefield and even combat decision-making has already been delegated to machines. Autonomous pack mules can individually determine pathways to resupply locations; cruise missiles have for decades been able to independently correct course deviations by comparing pre-loaded maps with live visual data; and close-in weapon systems (CIWS) for naval air defence, like the Phalanx, engage incoming projectiles too fast for human target approval. The first officially reported instance of LAWS use appeared in a UN Security Council report on the Libyan Civil War, in which it was alleged that Turkish-made Kargu-2 drones were launched to independently locate and engage local rebels (Hernandez, 2021). And in Ukraine it is often claimed that some loitering munitions conduct terminal guidance—the last phase of a strike—independently.

Nonetheless, evidence of widespread use is not overwhelming; the vast majority of current combat and procurement decisions for the near future still imply a human-intensive battlefield. Consider the “tooth-to-tail ratio,” or the ratio of combat troops to non-combat support personnel in an army. Since the First World War, the US militaryʼs tooth-to-tail ratio has never favoured combat troops, with most figures for subsequent theatres hovering around one-third of the deployment. In Iraq in 2005, for example, the US tooth-to-tail ratio was only 25% when including contractors and allied Kuwaiti personnel. That the US military has not widely delegated the simpler and much more numerous support tasks, like logistical supply or administrative functions, even to MUMT or HMT platforms, suggests that highly automated armies are still far from reality.

Tactically and strategically, however, full autonomy has the potential to be an evolutionary leap in military technology. Consider, for example, the plight of short-range FPV drone platoons in Ukraine, as described by renowned military researchers Rob Lee and Michael Kofman on the Russia Contingency podcast (Kofman & Lee, 2024). Drone teams are transported a short distance to the frontline, where they then carry their equipment to a launch site. While some pilots guide FPV drones for strikes, other personnel operate drones for reconnaissance and target selection, or operate retransmitter drones to enable longer-range strikes. Still others prepare munitions depending on the kind of targets discovered, and another may be tasked with countermeasures.

While some of these specialised tasks, like coordination or strikes, are unlikely to be outsourced to machines for now, there is significant pressure to delegate, so that the number of human personnel at risk of counterfire is reduced.

Command and control (C&C) plays a pivotal role in coordinating these drone systems. Russiaʼs integrated command and control framework is centred around the MP32M1 command vehicle, which serves as the central hub for managing Orlan operations and ensuring a continuous flow of battlefield intelligence. The flow of information is not entirely automated: target prioritisation and mission execution still require human intervention, underscoring the human-machine collaboration at the heart of MUMT. While this system enhances Russiaʼs ability to control drone operations in real time, it remains reliant on skilled personnel and the security of its communication networks.

Command and Control (C&C):
C&C coordinates military operations by managing systems, relaying intelligence, and executing missions. It is vital but vulnerable to cyber and physical threats.

However, the point of contact between humans and AI in Human-Machine Teaming (HMT) introduces a critical vulnerability. This interaction point, where humans oversee and direct AI-driven unmanned systems, can be exploited as a target for both cyber attacks and conventional weapons. Cyber attackers may disrupt the communication links between human operators and drones, while physical attacks on command centres or key personnel can incapacitate the entire network, highlighting the fragility of HMT systems.

In addition to centralised command, Russia has experimented with drone swarming tactics, in which multiple Orlan, Lancet, and FPV drones operate simultaneously to overwhelm enemy defences. While Orlan drones provide real-time intelligence, Lancet and FPV drones execute coordinated attacks, making it difficult for Ukrainian forces to respond effectively to multiple, simultaneous threats.

To counter this integrated drone warfare system, Ukraine has had to adapt its strategy. One critical method involves employing electronic warfare systems to disrupt communications between Russian drones and their operators. Jamming these signals can effectively neutralise the dronesʼ ability to coordinate and execute attacks. Additionally, Ukraine has prioritised the procurement of high-precision artillery munitions to target Russian command and control vehicles, which are essential for sustaining the effectiveness of drone operations.

Without these vehicles, Russiaʼs ability to deploy drones is severely compromised. Another key aspect of Ukraineʼs defence strategy involves the establishment of small, mobile air defence units armed with anti-aircraft machine guns, aimed at intercepting and destroying drones before they can strike. Russiaʼs multi-layered use of drones in the war in Ukraine highlights the increasing importance of MUMT in modern conflicts. The combination of reconnaissance, loitering munitions, and precision strike capabilities offers a flexible and highly responsive combat system. However, as Ukraine continues to develop its countermeasures, the effectiveness of these systems will likely shape the future of MUMT in asymmetric and peer-to-peer warfare alike.

6. Balancing Deterrence in a Disruptive World

There is growing concern in academia and public opinion that the continued proliferation of AI and sophisticated unmanned platforms could radically destabilise or even rewrite the current geopolitical order. Given what we now know regarding such systems, are we on the cusp of a truly disruptive era of warfare, in which hostile actors emboldened by technology cannot be deterred?

The concept of deterrence is inherently complex, as it relies not only on the mere existence of a powerful system but also on how adversaries perceive its credibility, functionality, and the consequences of its use. Most academic literature on deterrence in international state and non-state actor relations, such as Schelling (1980), Mearsheimer (1983), or Filippidou (2020), views it as the process of preserving a particular status quo in the face of imminent action by an adversary to change it. John Mearsheimer, the prominent American international relations scholar, described deterrence in his famous dissertation as the persuasion of “an opponent not to initiate a specific action because the perceived benefits do not justify the estimated costs and risks” (Mearsheimer, 1983). Whether or not an actor is ʻdeterredʼ, however, depends on deeply subjective calculations of military and non-military factors with respect to the expected outcome of the action. Because the calculation rests on perceived costs, that perception can be distorted by factors like leadership psychology or simple ignorance of an adversaryʼs true capabilities. This complicates the ability of autonomous and/or intelligent systems to serve as a deterrent by themselves.

In fact, throughout the history of warfare, no weapon system—be it nuclear weapons, long-range missiles, or advanced stealth technologies—has been able to function as an effective deterrent in isolation. Each has required strategic frameworks, political resolve, and the credibility of its use to ensure its deterrent effect. AI is no different. Alone, it cannot guarantee deterrence because it lacks the intrinsic ability to affect human perception, which is at the core of any deterrence strategy. Deterrence is ultimately a psychological game, reliant on convincing potential adversaries that the cost of engaging in conflict far outweighs any potential gains. In the same way that nuclear weapons rely on the credibility of second-strike capabilities, or missile defence systems depend on their readiness and operational accuracy, AI must operate within a wider ecosystem of strategic and military structures to be effective. Still, all else being equal, even if technology cannot determine deterrence outcomes on its own, the threat of widespread destruction can still contribute to a compelling argument: superior technology lowers the expected costs of aggressive action for the attacker, or raises them when wielded by the defender.
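Mearsheimerʼs definition can be condensed into a stylised decision rule. The following is a common textbook-style formalisation rather than a formula from the cited works; the hatted symbols denote the challengerʼs perceptions, which is precisely where deterrence operates.

```latex
% Stylised deterrence condition: the challenger initiates action A only
% if its *perceived* expected benefit exceeds its *perceived* costs and
% risks. \hat{p} = perceived probability of success, B = value of the
% prize, \hat{C} = perceived costs, \hat{R} = perceived risks.
\[
  \text{initiate } A \iff
  \underbrace{\hat{p}\, B}_{\text{perceived expected benefit}}
  \;>\;
  \underbrace{\hat{C} + \hat{R}}_{\text{perceived costs and risks}}
\]
% Deterrence works on the hatted (subjective) terms -- via capability,
% credibility, and communication -- not on the objective quantities.
```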


Admittedly, measuring the contribution of technology to deterrence calculations is tricky. One technical method to describe this relationship is to compare the relative power of offensive and defensive technologies: the offense-defense balance (ODB). One common argument nowadays is that reconnaissance has made the battlefield so transparent for a wide range of precise munitions that only a slow war of attrition is possible (incidentally, Mearsheimer argued that an expectation of a war of attrition is the most effective deterrent). If the broad slate of offensive technology and tactics cannot overcome their defensive counterparts, states are less motivated to go to war, because the costs and risks of offensive action are high.

In other words, the ODB theory asserts that state aggression and conflict are more likely the more dominant offensive technologies and tactics are.
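For readers who want the ODB pinned down, Glaser and Kaufmann (1998), cited in the references, operationalise it as a cost ratio. The sketch below is a hedged rendering of that idea; the notation is ours, not theirs.

```latex
% One operationalisation of the offense-defense balance (after Glaser &
% Kaufmann, 1998): the cost of the forces an attacker would need to take
% territory, relative to the cost of the forces the defender has fielded
% to hold it.
\[
  \mathrm{ODB} \;=\;
  \frac{C_{\text{offense required}}}{C_{\text{defense deployed}}}
\]
% ODB < 1: taking territory is cheap relative to holding it
% (offense-dominant), so aggression looks affordable and conflict becomes
% more likely. ODB >> 1 favours the defender and, per the attrition logic
% above, strengthens deterrence.
```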

Critics of the ODB rightfully point to the duality of many weapons, such as Soviet-era S-300 air ʻdefenceʼ launchers being employed by Russian forces for ground strikes across the Ukrainian border.

At the same time, visions of autonomous drone swarms intelligently (although perhaps not indiscriminately) saturating a battlefield and picking off targets certainly describe a dominant technological imbalance that could alone sway strategic outcomes. Indeed, both the trajectory of technological development and the various pressures to retain battlefield superiority and reduce the exposure of personnel to danger point to militaries expanding the deployment of HMT, if not fully autonomous systems. Humans are certainly more flexible and creative, but algorithms have proven to be exponentially more efficient at processing large amounts of data. Fielding more unmanned and autonomous systems also mitigates recruitment shortages and reduces costs. While human operators, like fighter jet pilots, need hundreds of expensive flight hours and years of training to master contemporary aircraft, machines could acquire the necessary algorithms at the touch of a button. Likewise, the ongoing invasion of Ukraine has forced Russian recruitment programmes to triple and quadruple signing bonuses to entice a dwindling supply of volunteers from a national labour pool that is simultaneously drained by the domestic weapons industry (Perun, 2024). If humans could be replaced in a wide array of battlefield tasks, foreign interventions could be made not only financially cheaper but also more politically palatable, by losing ʻrobots-on-the-groundʼ instead of ʻboots-on-the-ground.ʼ Are these visions prescient, or does some contemporary speculation about the strategic consequences of autonomous weapons fail to adequately consider practical or economic obstacles?

7. The Enemy Gets a Vote, but So Does Reality

Take, for example, some arguments that autonomous drones will weaken the ability of nuclear submarine-launched ballistic missiles (SLBMs) to provide nuclear deterrence. As autonomous carriers of sensors, smart drones could effectively uncover the locations of previously hidden submarines. Nuclear deterrence begins with the survivability of nuclear arsenals, and many nuclear-armed states wield multiple methods of nuclear weapon delivery, combining ground-launched missile silos and air-launched delivery methods with SLBMs to create a ʻnuclear triad.ʼ The logic is simple: if a nuclear counterstrike is not possible because the delivery platforms have been incapacitated, then mutually assured destruction (MAD) in a given escalatory scenario is not credible. Because submarines on the high seas, where they remain deep under water for months at a time, are easier to conceal than missile silos or air-launched missiles, they are typically considered the most resilient counterstrike threat. However, if autonomous drones flood the ocean and create “ocean transparency,” then nuclear deterrence is weakened as the SLBMs become more vulnerable. If we consider nuclear SLBMs a defensive tool of deterrence, this would be a case of the ODB shifting away from the defence.

Thankfully, the submarine drone threat is greatly overstated upon critical review. For one, certain militaries already use networks of ʻunintelligentʼ hydroacoustic sensors to assist in submarine detection. Still, part of the reason that submarines are difficult to detect by current platforms, such as ship- and air-based acoustic detection (sonar) or satellite-based detection of water disturbances, is obvious. As Mauro Gilli, senior researcher at ETH Zurich, told EPIS:

“The ocean is humongous. Take a submarine and put a radius of 150, even 300, kilometres around it. With 300 kilometres in the ocean, you donʼt go very far. In the Atlantic or Pacific, thatʼs nothing. Then you add depth, where some submarines can go down 800 metres, some even one kilometre… The idea that ʻocean transparencyʼ is coming is something that many experts donʼt take seriously.”

Consider, for example, current passive detection of submarines in the upper layer of the Sea of Japan or the Bay of Biscay, which covers only around 8-10 kilometres of water. In addition, because of how sound waves move in open water, submarines sitting in certain blind spots at a depth of 200-300 metres are virtually impossible for vessels near the surface to detect. To cover the Sea of Japan alone, one would need hundreds of thousands of submarine drones just to partially uncover the area; for the Pacific Ocean, tens of millions of such drones. And, as one 2024 study showed, changes to oceanographic composition caused by climate change are reducing the effectiveness of acoustic submarine detection in some oceans by more than half (A. Gilli et al., 2024). Advances in propulsion noise reduction and hull cloaking will continue to augment the stealth of these vessels, requiring still more drones (Psallidas et al., 2010).

Likewise, Gilli explains, other practical challenges complicate the drone strategy. Even assuming the drones were able to detect and discover a submarine, an extremely platform- and personnel-intensive task, they would be too slow to track and follow conventionally powered submarines, let alone nuclear designs. Submarines would be the first to detect the drones if they actively used acoustic pings to hunt instead of passively listening, or if they relied on a larger vessel nearby to help coordinate the networked swarm. The adversary can also employ its own drones as acoustic decoys, further disrupting and complicating the hunt. What the anti-submarine drone case helps reveal is that grandiose claims of disruptive effects should be scrutinised in case important factors are missed or omitted. On the battlefield, the saying goes, the enemy always gets a vote—but so does reality.
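A rough back-of-envelope check of the drone numbers cited above, in Python. Each drone is assumed to sweep a disc of effective passive-detection radius r; the radii, and the neglect of depth, endurance, decoys, and submarine movement, are our simplifying assumptions.

```python
import math

# Approximate surface areas in km^2 (publicly available figures).
AREAS_KM2 = {"Sea of Japan": 978_000, "Pacific Ocean": 165_250_000}

def drones_needed(area_km2: float, detect_radius_km: float) -> int:
    """Drones required for one non-overlapping static sweep of the area,
    ignoring coverage gaps, currents, resupply, and target movement."""
    return math.ceil(area_km2 / (math.pi * detect_radius_km ** 2))

for name, area in AREAS_KM2.items():
    for r in (1.0, 5.0, 10.0):   # assumed effective passive-detection radii
        print(f"{name}, r={r} km: ~{drones_needed(area, r):,} drones")

# At r = 1 km the Sea of Japan alone needs ~311,000 drones (the article's
# 'hundreds of thousands') and the Pacific ~53 million ('tens of millions');
# even a generous r = 10 km leaves the Pacific at ~526,000 drones for a
# single static snapshot.
```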

Beyond the submarine case, two broader obstacles stand out. Firstly, machines still need to be able to repeatedly distinguish between objects on the battlefield. The battlefield is highly dynamic: measures are met with countermeasures, which are in turn met with counter-countermeasures. When the U.S. Marine Corps tested one AI target detector, for example, it initially succeeded in identifying Marines tasked with discreetly approaching it. But when they resorted to ad hoc tricks—dressing in bush leaves, skipping, or simply hiding in a cardboard box filled with the muffled laughter of a few entrepreneurial Marines—the machineʼs algorithm failed to notice them, because its training data did not anticipate such behaviour. Nor is it unusual for similar systems to restrict target recognition to libraries of pre-approved target characteristics, in order to prevent friendly fire or civilian harm. Certainly, these kinds of countermeasures can be spotted and resolved with later iterations of the software, and there might be an upper limit to human creativity on the battlefield. However, rigid target selection leaves systems helpless against unexpected threats and interactions.
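A toy sketch of the ʻpre-approved libraryʼ constraint described above: the system may engage only high-confidence matches against a whitelist of target signatures. The signatures and threshold are invented for illustration; the point is that the same rigidity that prevents fratricide also renders out-of-library objects, cardboard boxes included, invisible.

```python
from dataclasses import dataclass

# Pre-approved target library: classifier label -> engagement authorised.
APPROVED_SIGNATURES = {"tank_t72": True, "ifv_bmp2": True}
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Detection:
    signature: str     # label emitted by the recognition model
    confidence: float  # model confidence in [0, 1]

def may_engage(d: Detection) -> bool:
    """Engage only high-confidence matches against the approved library.
    Anything the training data never anticipated falls outside the
    library and is simply ignored, whatever threat it actually poses."""
    return APPROVED_SIGNATURES.get(d.signature, False) and d.confidence >= CONFIDENCE_THRESHOLD

print(may_engage(Detection("tank_t72", 0.93)))        # True: in library, confident
print(may_engage(Detection("cardboard_box", 0.99)))   # False: not in library
print(may_engage(Detection("tank_t72", 0.60)))        # False: below threshold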

So long as the battlefield remains dynamic, and absent breakthroughs in data processing algorithms (or perhaps artificial general intelligence), warfighting should continue to be a highly manpower-intensive affair—with all the political and strategic costs that entails.

Secondly, the economics of war are often omitted from such discussions, yet they fundamentally determine procurement decisions. Peacetime military expenditure is a highly unpopular policy best handled in private behind defence committee doors, but even autocratic leaders are beholden to the trade-offs behind any procurement decision. Both the Ukrainian and Russian militaries, for example, have consistently opted for more vulnerable but simple and cost-effective drones over sophisticated systems, because commanders remember that quantity has a quality all its own. Even assuming that advanced visual data processing algorithms existed, the hardware necessary to support such software might be too expensive or impractical to install anywhere except on larger or more survivable platforms, like mechanised armour or aircraft.

8. Conclusion

This is not to say that disruption is not possible, or even inevitable. The optimistic argument, from the perspective of deterrence, is that, for now, strategy-altering effects on a global level from AI and autonomy in battlefield weapons systems exist primarily in the realm of speculation. It is true that all experts, even military officials with intimate knowledge, will get predictions wrong—prognosis is notoriously hard, after all. Lt. General James H. Doolittle, who had himself relied extensively on carriers in the Pacific theatre, testified after WWII to a US Senate committee that the aircraft carrier had reached its highest usefulness and was going into obsolescence:

“The carrier has two attributes: one attribute is that it can move about; the other is that it can be sunk. As soon as airplanes are developed with sufficient range so that they can go any place that we want them to go, or when we have bases that will permit us to go any place we want to go, there will be no further use for aircraft carriers.” (Polmar, 2008, p. 2)

Decades after Doolittleʼs testimony, it is now often said that when forward deterrence is needed in crises abroad, American presidents first ask where the nearest carrier is (Cohen, 2010). What is important to remember is that obsolescence is historically not a product of vulnerability but of the development of better alternatives. Just as carriers survived because no other platform could replace their long-distance force projection, contrary to what Lt. Gen. Doolittle had assumed would happen, contemporary equipment will fall away only if autonomous weapons do its battlefield job better.

Under certain conditions, AI could play a decisive role in shaping deterrence strategies and have as significant an impact as existing systems like missile defence shields or stealth fighter fleets. Serious breakthroughs in AI and the mass production of computing are likely still required for this to happen, and these will almost certainly be preceded by key milestones, like the extensive deployment of HMT concepts for logistical and casualty support, or even the development of artificial general intelligence.

First, AI must be integrated into trustworthy, autonomous command and control systems that inspire confidence in their decision-making capabilities without removing human oversight entirely. These systems must be robust enough to execute complex decisions rapidly, ensuring that adversaries believe in their readiness and reliability. Second, AIʼs ability to process and analyse vast amounts of intelligence data must be leveraged to deliver precision strikes and operational superiority.

This technological edge could act as a significant deterrent, as adversaries would be faced with an opponent whose decision-making and battlefield operations are faster, more accurate, and less predictable than any human counterpart.

Third, the development of robust countermeasures against AI systems will be crucial. The existence of credible defences against potential AI-driven cyberattacks or autonomous weapon systems would create a balance, preventing adversaries from believing that they could exploit vulnerabilities in AI systems. This ensures that AI-based deterrence is not easily undermined, adding a layer of security that reinforces the overall deterrence strategy.

If these conditions are met, AI could have a profound and transformative effect on military deterrence. It could introduce new complexities into how adversaries calculate risk, offering capabilities that extend beyond traditional warfare models. The ability to integrate AI-driven technologies into strategic frameworks could redefine deterrence as we know it, enabling it to serve as a powerful tool in the evolving landscape of autonomous warfare. However, when such milestones might occur is (at least from publicly available information) entirely unclear. Until those prerequisite factors can be assessed empirically, it is unlikely that the current proliferation of AI and autonomy in weapons systems will make deterring threats to the status quo significantly harder. Without these foundational elements—credibility, reliability, and integrated human oversight—AI, like any other weapons system, will fall short of serving as an effective deterrent on its own.

References

Airbus. (2023). Manned-unmanned teaming – MUM-T technology of the future becoming a reality of today. Airbus. https://www.airbus.com/en/products-services/defence/uas/uas-solutions/manned-unmanned-teaming-mum-t

Center for Strategic and Budgetary Assessments. (2024). Human-machine teaming for future ground forces. CSBA. https://csbaonline.org/research/publications/human-machine-teaming-for-future-ground-forces

Clodfelter, M. (1995). Vietnam in Military Statistics: A History of the Indochina Wars, 1772–1991. McFarland.

Cohen, S. (2010, October 25). Where Are the Carriers? Forbes. https://www.forbes.com/sites/stevecohen/2010/10/25/where-are-the-carriers/

Filippidou, A. (2020). Deterrence: Concepts and approaches for current and emerging threats. In Deterrence: Concepts and Approaches for Current and Emerging Threats (pp. 1-18).

Freedberg Jr., S. J. (2023, June 13). Dumb and cheap: When facing electronic warfare in Ukraine, small dronesʼ quantity is quality. Breaking Defense. https://breakingdefense.com/2023/06/dumb-and-cheap-when-facing-electronic-warfare-in-ukraine-small-drones-quantity-is-quality/

Ghidotti, M. (2024, February 28). Manned-unmanned teaming (MUM-T) in military & civilian operations. Flysight. https://www.flysight.it/manned-unmanned-teaming-mum-t-in-military-civilian-operations/

Gilli, A., et al. (2024). Climate Change and Military Power: Hunting for Submarines in the Warming Ocean. Texas National Security Review, 7(2). https://doi.org/10.26153/tsw/52240

Glaser, C. L., & Kaufmann, C. (1998). What Is the Offense-Defense Balance and Can We Measure It? International Security, 22(4), 44–82.

Hernandez, J. (2021, June 1). A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says. NPR. https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d

Hunder, M. (2024, July 18). Ukraine rushes to create AI-enabled war drones. Reuters. https://www.reuters.com/technology/artificial-intelligence/ukraine-rushes-create-ai-enabled-war-drones-2024-07-18/

Joint Air Power Competence Centre. (2019). Manned-unmanned teaming: Enhancing tactical SA and pilot workload management. JAPCC. https://www.japcc.org/manned-unmanned-teaming

Kofman, M., & Lee, R. (2024, April 2). A Close Look at Drones in the Russo-Ukrainian War, Part 1 [Podcast]. The Russia Contingency. https://warontherocks.com/episode/therussiacontingency/30829/a-close-look-at-drones-in-the-russo-ukrainian-war-part-1/

Kunertova, D. (2024, August). Learning from the Ukrainian battlefield: Tomorrowʼs drone warfare, todayʼs innovation challenge (CSS Study). Center for Security Studies, ETH Zürich. https://doi.org/10.3929/ethz-b-000690448

Kuzmin, V. (2017). File:MAKS Airshow 2013 (Ramenskoye Airport, Russia) (521-41).jpg. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:MAKS_Airshow_2013_(Ramenskoye_Airport,_Russia)_(521-41).jpg

Mearsheimer, J. J. (1983). Conventional Deterrence. Cornell University Press.

Mendenhall, E. (2018). Fluid Foundations: Ocean Transparency, Submarine Opacity, and Strategic Nuclear Stability. Journal of Military and Strategic Studies, 19.

Meyer, D. (2017, September 4). Vladimir Putin Says Whoever Leads in Artificial Intelligence Will Rule the World. Fortune. https://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-world/

Moltz, J. C. (2012). Submarine and Autonomous Vessel Proliferation: Implications for Future Strategic Stability at Sea. Naval Postgraduate School. https://apps.dtic.mil/sti/citations/ADA578475

Montgomery, E., Sharp, T., & Hacker, T. (2024, June 19). Quality Has a Quality All Its Own: The Virtual Attrition Value of Superior-Performance Weapons. War on the Rocks. https://warontherocks.com/2024/06/quality-has-a-quality-all-its-own-the-virtual-attrition-value-of-superior-performance-weapons/

Perun. (2024, July 14). Russian Equipment Losses & Reserves – The Changing Russian Force in Ukraine [Video]. YouTube. https://www.youtube.com/watch?v=xF-S4ktINDU&t=5s

Polmar, N. (2008). Aircraft Carriers: A History of Carrier Aviation and Its Influence on World Events, Volume II: 1946-2006. Potomac Books.

Psallidas, K., Whitcomb, C. A., & Hootman, J. C. (2010). Design of Conventional Submarines with Advanced Air Independent Propulsion Systems and Determination of Corresponding Theater-Level Impacts. Naval Engineers Journal, 122(1), 111-123.

Schelling, T. C. (1980). The Strategy of Conflict: With a New Preface by the Author. Harvard University Press.

Trevithick, J. (2024, March 13). Phalanx CIWS Costs $3,500 Per Second In Ammo To Fire. The War Zone. https://www.twz.com/sea/phalanx-ciws-costs-3500-per-second-in-ammo-to-fire

Wirtz, J. J. (2018). How Does Nuclear Deterrence Differ from Conventional Deterrence? Strategic Studies Quarterly, 12(4), 58–75. https://www.jstor.org/stable/26533615

