On February 13th, the IEAI co-hosted with Amerikahaus an official side event of the Munich Security Conference 2026 titled “Ctrl-Alt-Deterrence: Rethinking Stability in the Age of Cyber, AI and Autonomy”. The panel brought together leaders from defence, academia, policy and industry to explore how artificial intelligence and autonomous systems are reshaping deterrence theory and practice.
The discussion was moderated by Brigadier General David Barnes, PhD (US Army, Retired) (Empowering AI) and Lance Lindauer (Partnership to Advance Responsible Technology). The panelists included Prof. Dr. Christoph Lütge (Director, TUM IEAI), Lauren Kahn (Senior Research Analyst, Georgetown’s Center for Security and Emerging Technology), Maryna Tymchenko (Director of Business Development and Operations, Blue Arrow Defense) and Felix Rank (Co-Founder, German Forum for Ethical Machine Decision Making).
At the centre of the discussion was a pressing question: Does AI fundamentally transform deterrence, or does it intensify existing dynamics? As AI-enabled systems compress decision timelines and complicate attribution, long-standing assumptions about escalation, control and accountability face new strain. This evolving landscape raises important questions not only about technological capability, but also about responsibility, accountability, governance and the human role in decision-making.
Panelists emphasised that AI introduces opacity, automation bias and new escalation pathways into already fragile systems. Because AI systems are human-designed and human-deployed, governance must address specific vulnerabilities such as accidents, misinterpretation of automated outputs and uneven technological adoption across regions. The challenge lies in adapting institutional and regulatory frameworks to these evolving operational realities.
A recurring theme was the importance of maintaining meaningful human judgment within AI-supported environments. Military planning has long relied on deliberation, clear intent and defined accountability structures. Machine-speed systems change that rhythm. As autonomous tools assist with navigation, targeting and strategic analysis, questions arise about situational awareness, intervention capacity and the preservation of human agency in high-stakes decisions.
The conversation also explored responsibility within human-machine teaming. Moral and legal accountability remains grounded in human actors, yet AI integration across operational and organisational systems increases complexity. Education, interdisciplinary literacy and cross-sector collaboration were highlighted as essential conditions for informed oversight.
Ethics, as several speakers noted, must extend beyond abstract principles. Technical robustness, reliability and clearly defined intervention mechanisms form the foundation of meaningful human control. Transparency, system accuracy and accountability structures are central for sustaining trust in environments where decisions may have life-or-death implications.
The panel further discussed structural implications for global stability. AI differs from earlier strategic technologies in its ubiquity and commercial diffusion. Supply chains, computing infrastructure and data access are unevenly distributed, creating asymmetries between actors. Rather than focusing exclusively on competition, the panelists emphasised the importance of transparency, shared standards, cooperation among allies and ongoing dialogue among policymakers, industry and technical experts.
alignAI Doctoral Candidates Julia Li and Simay Toplu joined the event and followed the discussion closely, reflecting on how questions of value alignment and human oversight surface in security-related AI contexts. For the network, conversations of this kind highlight how alignment challenges extend beyond individual applications into broader socio-technical systems that shape collective decision-making.
The evening concluded with a Q&A session exploring speed versus control, disinformation detection and the future of international governance in AI-enabled security contexts.
The recording of the event can be found here: