Resources
Want to understand more about AI? We have collected some introductory resources to help you understand AI's possible consequences for society, what experts are saying, and what needs to happen in the policy and governance sphere.
Let's start from the beginning
To understand whether our society needs AI policy, we should start by looking at what AI systems can actually do right now. Most of us have probably used ChatGPT or Claude before, but there are many more capabilities worth understanding.
AI Safety Atlas — Current capabilities
AI Safety Atlas
An entry-level resource for understanding current AI capabilities.
Frontier AI Trends Report
UK AI Security Institute
The UK AI Security Institute (a directorate of the UK government's Department for Science, Innovation and Technology) has published relevant research on the current capabilities of frontier AI systems.
What about the risks?
Capable systems can be extremely useful for society, bringing many positive improvements, but they can also create risks. There is wide debate about which AI risks people should be most concerned about.
We recommend two readings on this topic:
International AI Safety Report 2026 — Extended summary for policymakers
International AI Safety Report Organisation
This report was drafted by a large international panel of experts; you can look at the full panel here.
Why are experts sounding the alarm on AI risks?
Al Jazeera
A broad, general-audience overview of what concerns AI experts.
Why does it matter?
Is this really an issue? Is anyone worried about AI's effects on our society?
We believe the effects of AI on our society should be taken seriously, and that we should invest in preparing policymakers to create resilient, future-proof regulation that lets us get the most out of these emerging technologies. We are not alone in this view. Last year, heads of state, Nobel Prize winners, and top scientists signed this public declaration on the need to establish AI regulations:
Among the signatories is Yoshua Bengio, one of the pioneers of deep learning and a leading voice on AI governance.
Interview with Yoshua Bengio on AI consequences
Time
Mainstream media outlets have also covered the consequences of AI for our societies.
What drives AI progress?
What influences AI progress? To answer this, it helps to know the three main factors driving the increase in capabilities: data, compute (hardware infrastructure such as advanced chips), and algorithms (how efficient the software is). This short briefing from Georgetown CSET provides a clear perspective on all three.
Has AI really improved that much?
While some people remain sceptical of current AI models, there is empirical research that tries to measure the increase in capabilities. Take a look at this:
Real-world examples
OK, so AI capabilities are increasing, and there may be consequences for society. Can we see some concrete examples?
Opportunities board
BlueDot Impact
An intensive AI governance and safety course, and one of the most respected programmes for getting up to speed on the field.
SAIGE
The German node of the global AI safety ecosystem. Fellowships, events, and connections for students.
Fellowship Airtable
A curated collection of AI safety and governance fellowships, courses, and funding opportunities.
Publications and outputs
Student essays and policy memos will be published here as the club develops its research programme. Our goal is to have at least two pieces published by the end of 2026.
Join our community
Get access to events, shared resources, and collaboration opportunities.