Civil-Military Roles in Defending Liberal Norms in the AI era: A dual approach to a dual threat

Introduction

While William Hasselberger’s Will Algorithms Win Medals of Honor? (2024) raises important philosophical concerns about how AI may erode military virtues, this paper shifts the focus to the institutional and political consequences of AI for democratic accountability and defense.

Drawing inspiration from Hasselberger’s third military virtue of situational discernment and wisdom, this study reinterprets and applies that concept to a broader context, examining how both civilians and military must navigate a context of dual threat derived from the development of AI, to defend liberal democracy.

The paper grounds its understanding of democracy in the constitutional-liberal tradition, particularly as defined by Karl Popper and Ralf Dahrendorf, emphasizing rule of law, accountability, and the peaceful removal of rulers.

Democratic Governance in the Liberal Tradition

Inspired by Popper’s definition of constitutional government, this paper defines liberal democracy through its essential components: the rule of law, equality before the law, limited government through checks and balances (separation of powers), private property, freedom of speech, and free elections.

Free, transparent, and inclusive elections ensure that citizens can choose their representatives and hold them accountable. This underpins the legitimacy of government authority and is the only way to realize the Popperian principle of removing bad rulers without bloodshed. It also forms the basis of Popper’s concept of constitutional government: a government accountable to Parliament, and a Parliament accountable to voters. By providing for the separation of powers, liberal democracy legally distributes authority among branches and secures equality before the law and the rule of law, preventing abuses of power. The same kinds of limitations must be applied to the development of AI, especially as it is the defining tool of our era.

Defending the democratic values in this context depends on the interconnected roles of civilians and the military, which are subject to the same dual threat. 

Civilians, as voters and participants in civic life, are the ultimate source of political authority in liberal democracies: they elect governments and hold them accountable for their actions. The military’s primary role is to defend society’s security and uphold the constitutional order. Military leaders advise on security, but final decisions lie with elected civilians, reflecting the democratic principle of civilian control of the military.

Civilians and the military will face the evolution of AI in distinct ways. The next section elaborates on this dual impact.

AI’s Impact on human capacities and democratic duties

The rapid advancement of AI presents a double threat. In a 2025 CBS interview, 2024 Nobel laureate Geoffrey Hinton warned not only that AI systems may become uncontrollable, but also that “bad actors” may misuse AI for harmful purposes. For civilians and the military alike, these threats are interconnected.

Threatening the Inside of the Ballot Box: Eroding Discernment and Wisdom

AI’s negative influence on civic democratic participation is most evident in the information ecosystem, which directly shapes voters’ capacity for decision-making. Generative AI can produce mass disinformation, deepfakes, and persuasive synthetic content, making it increasingly difficult for citizens to distinguish authentic from manipulated information. This directly undermines Hasselberger’s third virtue of critical thinking, discernment, and situational wisdom, conditioning citizens’ capacity to exercise phronesis inside the ballot box.

As AI-generated content becomes more sophisticated, whether produced by black-box algorithms (BBAs) or weaponized by malicious actors, it will cause public confusion, erode trust, and enable the so-called “liar’s dividend,” whereby genuine evidence can be dismissed as fake. Western liberal democracies, with their freedom of press and speech, may be especially vulnerable targets given the fast pace of AI’s evolution, as information proliferates more quickly in open societies.

According to Hinton, if AI evolves into AGI and turns to the domain of human governance, the targeted state’s regime type will not matter: the danger of AGI cuts across all regimes.

The result is a diminished capacity for civilians to exercise critical, considered decision-making when electing governments or removing bad rulers, precisely the human capacities at the heart of Popper’s concept of democratic governance.

Threatening the Outside of the Ballot Box: LAWS and Authoritarian Drift

The integration of Lethal Autonomous Weapons Systems (LAWS) into modern warfare poses a grave threat to democratic norms and human rights.

Many experts consider Operation Peace Storm in Libya (2020) among the first documented cases in which autonomous weapons systems were used in such a way that key decisions, normally governed by the principles of International Humanitarian Law (IHL), were taken without direct human involvement.

Since then, LAWS have spread to other countries’ arsenals. Of the current top three geopolitical actors (China, Russia, and the USA), only one is a democracy, and only one has been directly engaged in combat since 2022. The ongoing war in Ukraine has become one of the main hubs for the development of LAWS, with drones now responsible for most battlefield casualties (70% to 80%).

This paper highlights three IHL principles likely to be negatively affected by the inclusion of LAWS in warfare:

  • Distinction: the obligation to distinguish civilians from military targets;
  • Proportionality: connected with the previous point, the harm caused in war must not be greater than the military advantage gained;
  • Accountability: being held responsible for breaking the laws of war, which are directly connected with human rights.

Human Rights Watch has warned of the severity of this negative impact, emphasizing in particular the protection of the right to life.

These principles relate directly to the core values of liberal democracy, which rests on trust, human rights, and equality before the law. If a LAWS takes the wrong decision on the battlefield, it can cause disproportionate casualties among innocent civilians, leaving humankind with an accountability dilemma: a human, a posteriori solution will be required for a problem created by a non-human entity.

One of humanity’s primary instincts, when faced with injustice, is to find the culprit and judge accordingly. Here, too, the virtue of situational discernment and wisdom is conditioned. AI, as culprit, cannot be subjected to traditional judicial judgment, arrested, or held accountable. Humans will be unable to apply human principles of justice (proportionality assessment, accountability) when seeking justice against AI; it will always be an unfair “game.”

The concern deepens when BBAs and LAWS are combined. Even when a BBA’s results are accurate, the process remains opaque to researchers’ comprehension. Humans, when faced with uncertainty, tend to anticipate the worst scenario. Taken to an extreme, it is possible to imagine humankind reliving a Cold War Petrov situation (1983), only this time with a potentially different outcome, as the final call may not depend on human judgment at all.

A Transversal Regulatory Approach to Defending Democratic Principles

Much like Popper’s endorsement of James Madison’s checks and balances, in the face of this dual threat, limiting the power AI could gain is essential to prevent uncontrolled abuses of power.

Researchers and political leaders, aware of the main challenges of implementing AI regulation (lack of consensus, the pace of AI development, and large players advocating for less regulation), are turning preventive talks into concrete action.

The 2024 EU AI Act represents the first comprehensive regulatory framework for AI, adopting a risk-based, human-centered approach. It gives humankind a solid place to start addressing the AI-related dual threat, for both civilians and the military.

Regulation as civilian armament to defend democratic values

  1. Establishing an independent board to boost election resilience: Drawing on IVADO’s recommendations, establishing an independent board with AI and cybersecurity expertise (connected with electoral authorities, media, and platforms) would strengthen election integrity by coordinating proactive defenses, training stakeholders, and implementing response plans against AI-driven threats.
  2. Extending the EU’s Code of Conduct on Disinformation: As of mid-2025, the Code is binding only for platforms that voluntarily sign up, and the EU DSA’s strictest requirements apply mainly to Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). Extending binding obligations to all online platforms, regardless of size, under the DSA framework would ensure inclusion, fairness, and consistent standards, and close regulatory gaps.
  3. Leveraging TRAIGA 2.0 and the EU AI Act for broader electoral integrity: Combining the Texas Responsible AI Governance Act (TRAIGA), focused on government AI transparency, with the EU AI Act’s emphasis on risk and disclosure, this paper suggests a transversal, mandatory, real-time fact-checking mechanism. This mechanism would produce standardized authenticity labels for AI-generated political content on digital platforms, in both the public and private sectors.

Regulation as military armament to defend democratic values 

Although the EU AI Act is multi-layered, it excludes AI developed for military purposes from its regulatory scope. As of 2025, there is no binding worldwide treaty that effectively regulates the development and use of AI in military armament. Hence researchers are stressing the importance of a “Paris AI agreement”. The United Nations and the Convention on Certain Conventional Weapons (CCW) can be leveraged for this purpose.

To address the risks posed by the development of AI weapons (with emphasis on the use of LAWS) and uphold human rights and democratic values, the following actions could be adopted:

  1. Military “ethical black box” recorders, similar to black boxes on airplanes: As first proposed by Winfield and Jirotka in 2017, the mandatory incorporation of “black box” recorders into all military AI systems would register their actions and decisions. This would enable meticulous post-incident investigation, strengthen compliance with international humanitarian law, and clarify accountability, reinforcing trust.
  2. Threshold-based bans on fully autonomous systems: Multilateral discussions within the UN framework, aimed at exploring the feasibility of an AI Weapons Limitation Treaty, as defended by Anna Hehir. Such a treaty would prohibit the use of LAWS and black-box algorithms in lethal weaponry. Hybrid systems such as Human-On-The-Loop (HOTL) weapon systems would still be allowed: since these systems are only partially autonomous, the final decision before armed engagement resides with humans. Following the latest informal UN high-level sessions on May 12, 2025, more than half of global representatives favor advancing discussions on binding regulation.

  3. Democratic oversight and mandatory technical audits at a global level: Inspired by NATO’s Data and AI Review Board, this supervising body would operate within the UN framework. It would be responsible for detecting bias and for understanding and controlling the decision-making processes of algorithms used in weapons development. Its findings would be subject to both expert and public scrutiny, and it would have the authority to halt deployments and to mandate and advise on system adjustments, reinforcing the previous points.
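The “ethical black box” idea in point 1 can be made more tangible with a minimal Python sketch of a tamper-evident decision log. The class name, record fields, and hash-chaining scheme are illustrative assumptions, not Winfield and Jirotka’s actual design: each record is chained to the previous one via a SHA-256 hash, so any after-the-fact alteration is detectable during a post-incident audit.

```python
import hashlib
import json
import time

class EthicalBlackBox:
    """Illustrative tamper-evident recorder for an AI system's decisions."""

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def log_decision(self, system_id, inputs, decision, operator=None):
        """Append one decision record, chained to the previous record's hash."""
        record = {
            "timestamp": time.time(),
            "system_id": system_id,
            "inputs": inputs,        # summary of sensor data / context
            "decision": decision,    # action taken or recommended
            "operator": operator,    # human on the loop, if any
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)
        return record["hash"]

    def verify_chain(self):
        """Recompute every hash; True only if no record was altered."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

A real recorder would need secure hardware and signed timestamps, but even this sketch shows why such a log supports the accountability principle: auditors can prove whether the record of who (or what) took each decision was altered after the fact.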

Conclusion

The current democratic context would require Popper to update the question central to his thesis: “How do we get rid of bad rulers, without bloodshed, if human voters are no longer the only entity involved in electing governments?”

Combining military and civilian roles is essential to preserving the individual rights central to Western liberal democracy, preventing the proliferation of authoritarian ideals, and ensuring that the values and policies of the state reflect the free choices of its people.

It is essential to ensure that AI remains under human-imposed limitations and regulations, as a tool to strengthen democracy rather than threaten it. In line with Hasselberger’s emphasis on situational wisdom, defending democratic values in the AI era demands that both civilians and the military retain the ethical capacity to discern, judge and act. The principle of Human-On-The-Loop applies not only to machines, but to democracy.


September 3, 2025

Teresa Duarte Fernandes

EuroDefense-Jovem Portugal


Barcott, Bruce. “The Revised Guide to TRAIGA 2.0, the Texas Responsible AI Governance Act.” Transparency Coalition. March 18, 2025. Accessed June 18, 2025. https://www.transparencycoalition.ai/news/analysis-whats-in-traiga-the-texas-responsible-ai-governance-act.

Chesney, Robert, and Danielle Keats Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review 107, no. 6 (2019): 1753–1819. https://doi.org/10.2139/ssrn.3213954.

Espada, João Carlos. Vital Center – Instituto de Estudos Políticos da Universidade Católica Portuguesa. Lisbon, Session 8 – Karl Popper, April 22, 2025.

European Defence Agency. Trustworthiness for AI in Defence: White Paper. May 9, 2025. European Defence Agency. https://eda.europa.eu/docs/default-source/brochures/taid-white-paper-final-09052025.pdf.

European Commission. The Code of Conduct on Disinformation. Last modified February 13, 2025. Accessed June 18, 2025. https://digital-strategy.ec.europa.eu/en/library/code-conduct-disinformation.

European Commission. AI Act: Regulatory Framework on AI. Accessed June 18, 2025. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

Hasselberger, William. “Will Algorithms Win Medals of Honor? Artificial Intelligence, Human Virtues, and the Future of Warfare.” Journal of Military Ethics 23, no. 3–4 (2024): 289–305. https://doi.org/10.1080/15027570.2024.2437920.

Hehir, Anna. “Banning the Most Dangerous Autonomous Weapons.” Axios, April 4, 2024. Accessed June 17, 2025. https://www.axios.com/2024/04/04/ai-weapons-war-autonomous-regulation-ban.

Hinton, Geoffrey. “Transcript of Brook Silva-Braga Interviews Geoffrey Hinton on CBS Mornings.” Singju Post, April 28, 2025. Accessed June 17, 2025. https://singjupost.com/transcript-of-brook-silva-braga-interviews-geoffrey-hinton-on-cbs-mornings/

Human Rights Watch and International Human Rights Clinic. A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making. April 28, 2025. https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making

Human Rights Watch. “UN: Start Talks on Treaty to Ban ‘Killer Robots.’” Human Rights Watch, May 21, 2025. Accessed June 17, 2025. https://www.hrw.org/news/2025/05/21/un-start-talks-treaty-ban-killer-robots

IVADO and CEIMIA. AI and Democracy – Understanding the Effects of AI on Elections. Montréal: IVADO and CEIMIA, January 21, 2025. Accessed June 18, 2025. https://ivado.ca/wp-content/uploads/2025/01/IVADO_Brief_AI-and-elections-FINALE-EV.pdf.

International Committee of the Red Cross. What Is International Humanitarian Law? Geneva: International Committee of the Red Cross, March 2022. Accessed June 18, 2025. https://www.icrc.org/sites/default/files/document/file_list/what_is_ihl.pdf.

Kreps, Sarah, and Doug Kriner. “How AI Threatens Democracy.” Journal of Democracy 34, no. 4 (October 2023): 122–31.

Leonard, Jayne. “Anticipatory Anxiety: Definition, Symptoms, Coping, and More.” Medical News Today, December 18, 2023. https://www.medicalnewstoday.com/articles/anticipatory-anxiety.

Madison, James. Federalist No. 51. In The Federalist Papers, 1788. Accessed June 18, 2025. https://bri-wp-images.s3.amazonaws.com/wp-content/uploads/Federalist-Papers-No-51-1.pdf

Nakayama, Bryan. “Democracies and the Future of Offensive (Cyber-Enabled) Information Operations.” The Cyber Defense Review 7, no. 3 (Summer 2022): 49–61.


Popper, Karl. The Open Society and Its Enemies. Vol. 1, The Spell of Plato. London: Routledge, 1945.

Principles of Democracy. “The Rule of Law.” Principles of Democracy. Accessed June 17, 2025. https://www.principlesofdemocracy.org/law.

Practical Guide to Humanitarian Law. “Responsibility.” Accessed June 18, 2025. https://guide-humanitarian-law.org/content/article/3/responsibility/.

Thomas, Richard. “Drones Now Account for 80% of Casualties in Ukraine-Russia War.” Army-Technology, April 8, 2025. https://www.army-technology.com/news/drones-now-account-for-80-of-casualties-in-ukraine-russia-war/.

Yarlagadda, Shriya. “Envisioning an AI Paris Agreement.” Harvard International Review, February 6, 2025. Accessed June 17, 2025. https://hir.harvard.edu/envisioning-an-ai-paris-agreement/

Winfield, Alan F. T., and Marina Jirotka. “The Case for an Ethical Black Box.” In Towards Autonomous Robotic Systems 2017, Lecture Notes in Computer Science, vol. 10454, 262–73. Cham: Springer, 2017. https://doi.org/10.1007/978-3-319-64107-2_21.
