On September 24, 2025, the TUM Institute for Ethics in Artificial Intelligence (IEAI) hosted a panel discussion at the TUM Think Tank titled “Thinking about Ethical Use of AI in the Military – Implications for Organizations and Global Security”. The session, moderated by IEAI Executive Director and alignAI Project Lead Dr. Caitlin Corrigan, featured Brigadier General (Ret.) Dr. David Barnes (Empowering AI) and Lance Lindauer (Partnership to Advance Responsible Technology).
The panel opened with a reference to Nobel Peace Prize laureate Maria Ressa’s call for “red lines” in AI at the 80th session of the United Nations General Assembly. Dr. Corrigan invited the speakers to reflect on how autonomous and agentic AI systems are reshaping security, governance and organisational responsibility.
Dr. Barnes noted that the current push for AI boundaries reflects a broader concern that critical security decisions must remain in human hands. He warned that AI may amplify existing inequalities between nations, as only a few countries possess the full infrastructure, data, algorithms and computing power to leverage AI effectively.
“The question is how much do we let artificial intelligence, with its ability to not only help inform people making those decisions, be able to be empowered to make decisions and replace the human decision maker? That becomes this really scary place for many people.” – David Barnes
Mr. Lindauer encouraged attendees to critically examine how language shapes perceptions of AI and power. He questioned the frequent use of terms like “AI arms race”, arguing for a narrative shift that accounts for the growing role of private industry in the security sector. In contrast to technologies like nuclear weapons, he emphasised AI’s ubiquity and accessibility:
“People don’t have access to nuclear weapons and uranium …. The fact that [AI is] extraordinarily ubiquitous, makes it a little bit more scary, interesting and worth watching.” – Lance Lindauer
Building on this, Dr. Barnes proposed viewing AI as a “three-legged stool”: a combination of algorithms, data and computing power. He emphasised that AI should not be seen as a single “thing”, but as a method that enhances decision-making speed and efficiency across systems.
When asked about the role of academia in AI development, both panelists reflected on the dynamics between the public, private and academic sectors. Dr. Barnes observed a long-term transition from government-led innovation to private-sector dominance and called for renewed collaboration between sectors that “seem to speak the same language, but are talking past each other”. Mr. Lindauer underscored the need for interdisciplinary engagement and pointed to academia’s growing role in setting ethical and societal priorities.
The conversation concluded with reflections on the dual-use nature of AI, with its potential for both beneficial and harmful applications, and the challenges of establishing equitable regulation. The speakers noted the widening technological divide between the Global North and South, as well as differences in social norms, readiness and governance capacities, which complicate global efforts toward ethical AI use.
Audience questions explored topics such as transparency, accountability and the inclusion of the public in AI governance debates. The informal conversations that followed the panel demonstrated the ongoing need for dialogue across sectors.
alignAI Doctoral Candidates Julia Li and Mohaned Bahr, as well as Project Coordinator Dr. Auxane Boch, were among the attendees. The event provided valuable insights into the intersection of ethics, security and AI, core issues that align closely with alignAI’s mission to foster trustworthy and value-aligned artificial intelligence across societal domains.
The recording of the event can be found here: