AI could bring huge benefits
if we avoid its enormous risks
AI might cause the biggest technological leap in human history this century. Who is making sure it goes well?
We are a group of students and researchers working towards the long-term safety of artificial intelligence. Our goal is to help ensure that future powerful AI systems are safe.
What are we doing?
The Bern AI Safety (BAIS) Group meets regularly to discuss topics in AI alignment and AI safety, with a general focus on the technical aspects and implications of these topics. We also organize workshop meetings where people discuss project ideas, as well as anything else they are interested in.
A sample of papers we might discuss:
Concrete Problems in AI Safety
Dario Amodei, Chris Olah et al. (OpenAI, UC Berkeley, Anthropic, Alignment Research Center)
Clarifying AI X-risk
Zachary Kenton, Rohin Shah et al. (DeepMind)
Locating and Editing Factual Associations in GPT
Kevin Meng, David Bau et al. (MIT and Northeastern University)
Who is this for?
Anyone interested in promoting the safe development of AI, whether from a technical or policy perspective, is welcome to join us. Our community includes both students and researchers.
Next Events
Introduction Event Spring 2024
https://lu.ma/kmip223b
"You can't fetch the coffee if you're dead."
— Prof. Stuart Russell
Sufficiently capable goal-directed agents tend to acquire a self-preservation drive, since an agent that is shut down cannot achieve its goal; this can lead to the emergence of behaviours like deception and power-seeking. If such behaviours arise in a highly advanced artificial intelligence, they pose a grave existential threat to humanity.
Leander
MSc Data Science
Hannes
Independent AI Researcher
Programming Enthusiast