Ethics and Regulation of Military AI: Europe’s Strategic Dilemma

Introduction

Artificial intelligence is transforming warfare, becoming a key military asset that accelerates decisions, targeting, logistics and cyber operations. Its geopolitical importance has received growing attention within the European Union, as it is seen as a tool for economic, political and military influence. This perspective has been especially prominent in discussions about strengthening Europe’s strategic autonomy and technological sovereignty, particularly given the challenges of integrating essential technologies like military AI into European security and defense frameworks.

In recent years, the EU Commission has broadened its involvement in defense through market-driven and industrial initiatives designed to boost the competitiveness and innovation of the European Defence Technological and Industrial Base (EDTIB). Furthermore, there has been a growing integration of civilian science, technology and innovation programs with the EU’s security and defense research and development policies, particularly in promoting crucial dual-use technologies. EU leaders have also been committed to increasing defence spending, investing in critical and emerging technologies and innovation for security and defence, and fostering synergies between space, civilian and defence innovation and research.

The March 2025 White Paper on European Defence Readiness 2030 highlights that geopolitical tensions have triggered an armament and technology race, with advancements in AI, quantum technology, biotechnology, robotics and hypersonics, and stresses the urgent need for Europe to boost its defense capabilities by adopting disruptive technologies. To improve defense readiness, EU Member States should empower the European defense sector to develop technologies more rapidly and at scale. This means that increased investment in defense research and development is needed, especially through collaborative European projects and innovative industrial methods like AI. These developments raise not only strategic and industrial questions, but also pressing ethical and regulatory challenges regarding the use of AI in military operations.

Ethical Challenges of Military AI

As European states such as France and Germany invest heavily in military AI, through initiatives like France’s Artemis big-data platform and Germany’s Bundeswehr Cyber Innovation Hub, its use in operations raises a number of pressing ethical challenges. A major concern is the question of accountability, since the autonomous operation of AI systems can make it difficult to determine who bears responsibility if these systems produce unintended damage or harm during armed conflict. Another critical issue is ensuring that AI systems operate in accordance with international law. This includes following fundamental principles such as distinction, which requires them to be able to differentiate between military forces and civilians, and proportionality, which requires that the harm caused by military actions not be excessive in relation to the anticipated military advantage.

It is crucial that AI technologies are capable of respecting these standards, making human supervision an essential element of their implementation. While AI can improve the speed and precision of decision-making, preserving human judgment in the process is necessary to maintain ethical and legal accountability. Since AI systems can act quickly and autonomously, there is also the risk of accelerating the escalation of conflicts, as military decisions may occur faster than human oversight allows, increasing the chances of unintended consequences. On top of this, by reducing the risk to soldiers’ lives, these systems may lower the political and strategic threshold for going to war, making it more likely that a state will engage in combat.

Moreover, ethical responsibility does not stop at the battlefield; it also extends to the people and companies creating AI tools. Human Rights Watch recently raised serious concerns after Alphabet, the parent company of Google, ended its prohibition on the use of AI for weapons and surveillance applications. By revising its AI principles and eliminating the clause that prohibited applications “likely to cause harm,” the company made it harder to hold those in charge accountable for military actions with life-or-death implications. When confronted, Google defended the decision, arguing that collaboration between private companies and democratic governments is necessary to develop AI that contributes to national security.

The debate surrounding the use of autonomous systems in warfare ultimately raises a fundamental ethical question: whether machines should be permitted to make life or death decisions. While removing soldiers from direct combat may reduce military casualties, delegating lethal decision-making to autonomous systems challenges moral and legal principles that guide armed conflicts. Critics argue that machines may struggle to interpret complex battlefield environments and human behaviour, potentially increasing the risk of civilian harm. Supporters, however, believe that such systems could enhance precision and reduce emotional or impulsive decisions in combat. Regardless of these competing perspectives, most scholars emphasise that the deployment of AI in warfare must remain consistent with international humanitarian law, particularly the principles of distinction, proportionality and accountability.

The Regulatory Dilemma in Defence AI

The rapid development of military AI has intensified calls for international regulation, as no coherent and binding system of governance is yet in place. Powerful states hold diverging positions on how AI in warfare should be regulated. The United States, while promoting ethical guidelines for the responsible use of military AI, opposes a pre-emptive ban on lethal autonomous weapons systems (LAWS), arguing that such technologies could potentially enhance compliance with humanitarian principles, and that existing international humanitarian law already provides an adequate legal framework. Russia similarly rejects the need for a ban, contending that LAWS could improve precision in targeting and consequently reduce civilian harm. It also emphasises the absence of precedent for prohibiting an entire category of weapons before their widespread use. China, by contrast, has expressed support for banning the use of fully autonomous lethal weapons but not their development, a position that many analysts describe as strategically ambiguous.

At the multilateral level, the United Nations has repeatedly warned about the risks posed by AI in warfare and called for stronger international cooperation to address the growing fragmentation of AI governance. NATO has also sought to address these challenges through its Artificial Intelligence Strategy, which promotes principles for the responsible use of AI and encourages cooperation among member states, although these guidelines remain non-binding.

Against this fragmented international landscape, the European Union has attempted to position itself as a leading actor in the governance of artificial intelligence. Unlike other major powers that often prioritise technological dominance or strategic flexibility, the EU traditionally approaches emerging technologies through regulation and normative standards, positioning itself as a norms-based actor in global technology governance. This regulatory culture emphasises the establishment of clear rules, ethical guidelines and governance mechanisms designed to manage technological risks while safeguarding fundamental rights.

Reflecting this regulatory vision, the EU reached political agreement on the Artificial Intelligence Act (AI Act) in December 2023, the first comprehensive and legally binding horizontal framework for AI regulation worldwide. Built on a risk-based classification system and a “human-centric” vision of technological development, the Act aims to promote innovation while ensuring that AI systems remain aligned with democratic values and human judgement. However, the regulation applies primarily to civilian and commercial uses of AI and explicitly excludes military applications and matters related to national security. During the negotiations, several Member States advocated for these exemptions in order to safeguard Europe’s strategic autonomy and avoid limiting defence AI development. Although military AI falls outside its direct scope, the regulation may still indirectly affect defence policy, particularly in areas involving dual-use technologies such as drones. In this sense, the EU’s regulatory framework could influence future debates on LAWS and the broader role of AI in European defence.

Although the Artificial Intelligence Act does not regulate military applications for reasons related to national security, many experts argue that this exemption creates an important regulatory gap. Because numerous AI technologies are dual-use and can be applied in both civilian and military contexts, completely separating the two areas may prove difficult in practice. As a result, some analysts suggest that the European Union should develop a dedicated framework for responsible AI in defence, potentially building on the AI Act’s risk-based classification system. Such a framework could strengthen the EU’s role as a global leader in AI governance while ensuring that military applications remain aligned with ethical and legal standards.

At the same time, others caution that Europe must avoid adopting overly restrictive rules. They argue that the AI Act already reflects the EU’s ambition to strengthen technological sovereignty and strategic autonomy, and that excessive regulation could slow innovation in a strategically important field. Critics also note that defence technologies often require rapid development and that strict regulatory constraints could weaken Europe’s competitiveness in the global race for military AI capabilities.

Conclusion

The tensions between technological development and regulation are also visible at the international level. At a recent Responsible AI in the Military Domain summit in A Coruña, European countries promoted a declaration calling for oversight, human responsibility and safeguards in military AI. However, the United States and China did not sign the declaration, highlighting a fundamental divide in global approaches to AI warfare. While the EU tends to focus on ethical frameworks and regulation, other powers prioritise rapid technological development in order to maintain strategic advantage. This competition makes the establishment of binding international rules difficult, as states may hesitate to accept restrictions that could slow their progress relative to rivals.

This dilemma raises a broader question about Europe’s role in the emerging landscape of military AI: will the EU primarily remain a regulatory power or also become a stronger strategic actor in the development of these technologies? Although European states have strong scientific and industrial capabilities, their defence innovation is slowed by complex political processes and fragmented decision-making. As AI accelerates the speed of military operations and decision processes, the ability to adapt technologically may become as important as traditional military strength. At the heart of this challenge lies the question of life-and-death decisions: ensuring that autonomous systems remain under meaningful human oversight is essential not only for ethical compliance, but also for maintaining legitimacy in global conflict.

In this context, the challenge for Europe is not only to promote ethical standards for AI in warfare, but also to ensure that its regulatory ambitions do not limit its capacity to shape the future of military AI. Europe’s choices in balancing regulation with innovation will determine whether it can shape the future of military AI proactively, rather than reacting to developments driven by other powers. Ultimately, the EU’s ability to harmonize ethical responsibility, technological sovereignty, and strategic competitiveness will define its role in a new era of AI warfare.

Lisbon, 8 March 2026

Leonor Biscaia Gonçalves

EuroDefense-Jovem Portugal

References

Borrell, J. (2024, October). Defence technologies – Time to think big again. European External Action Service.

Csernatoni, R. (2023, July). Weaponizing innovation? Mapping artificial intelligence-enabled security and defence in the EU (Non-Proliferation and Disarmament Paper No. 84). EU Non-Proliferation and Disarmament Consortium.

Csernatoni, R. (2024, July 17). Governing military AI amid a geopolitical minefield. Carnegie Europe.

Clapp, S. (2024, September). European defence industrial strategy (PE 762.402). European Parliamentary Research Service (EPRS), Members’ Research Service.

Clapp, S. (2024, November). Reinforcing Europe’s defence industry. European Parliamentary Research Service, Members’ Research Service.

Davison, N. (2018). A legal perspective: Autonomous weapon systems under international humanitarian law. International Committee of the Red Cross.

Defencematters.eu Correspondents. (2026, February 7). AI warfare and Europe’s strategic illusion. Defencematters.eu.

European Commission & High Representative of the Union for Foreign Affairs and Security Policy. (2025, March). Joint white paper for European defence readiness 2030.

European Defence Agency. (2024). Defence data 2023–2024. European Defence Agency.

Fayet, H. (2023, November). French thinking on AI integration and interaction with nuclear command and control, force structure, and decision-making. European Leadership Network.

Franke, U. (2021, June). Artificial intelligence diplomacy: Artificial intelligence governance as a new European Union external policy tool (PE 662.926). Policy Department for Economic, Scientific and Quality of Life Policies, Directorate-General for Internal Policies, European Parliament.

Hasselberger, W. (2024). Will algorithms win medals of honor? Artificial intelligence, human virtues, and the future of warfare. Journal of Military Ethics, 23(3–4), 289–305.

Hooker, L., & Vallance, C. (2025, February). Concern over Google ending ban on AI weapons. BBC News.

Joshi, N. (2022, July). Is it ethical to use robots in war? What are the risks associated with it? Forbes.

Kmentt, A. (2025, January/February). Geopolitics and the regulation of autonomous weapons systems. Arms Control Association.

Madiega, T. (2024, September). Artificial Intelligence Act. European Parliamentary Research Service (EPRS), Members’ Research Service.

Payne, K. (2018). Artificial intelligence: A revolution in strategic affairs? Georgetown University Press.

Ruitenberg, R. (2024, June). France preps Europe’s fastest classified supercomputer for defense AI. Defense News.

Shaughnessey, I. M. (2024). The ethics of robots in war. Sergeants Major Academy.

Szczepański, M. (2024, September). The geopolitics of technology: Charting the EU’s path in a competitive world. European Parliamentary Research Service (EPRS), Members’ Research Service.

von Schubert, H. (2023, March). Addressing ethical questions of modern AI warfare. IPS.

Wagner, A. R. (2017, February). Ask an ethicist: Is it ethical to use robots to kill in a war? Penn State.
