Let’s make sure AI development happens safely

AI Safety Aachen is a student-led initiative dedicated to reducing the risks posed by advanced AI systems. We aim to empower students and academics in Aachen to address this pressing problem. Together, we want to learn about the risks of AI and how to reduce them, and to develop the relevant skills to address the problem.

We do so by:

  • Educating ourselves with programs, hackathons, events and discussions.

  • Connecting ourselves with a community of people dedicated to this problem worldwide.

  • Supporting each other in making long-term plans to contribute to AI Safety through our career, engagement, or advocacy.

Every semester we run an introductory course on AI safety, based on the AI Safety Fundamentals curriculum developed by OpenAI researcher Richard Ngo. To participate in the next iteration, sign up below or express interest at one of our events.

What we do

AI Safety

AI is advancing rapidly and brings huge potential for positive change. However, according to a recent survey, 48% of AI experts think the risk of human extinction from AI is greater than 10% (Grace et al., 2022). AI could be used to develop bioweapons, deploy hazardous malware, or empower oppressive regimes. Companies are investing billions of dollars and racing to deploy frontier models, while the underlying technology remains largely a black box, with no rigorous theory of how to make such systems safe. We believe there are still fundamental questions to answer and technical challenges to address to ensure that advanced AI systems are beneficial, rather than harmful, to humanity.

We are committed to reducing catastrophic risks from advanced AI systems.


If this sounds interesting to you, join us at one of our weekly events.