
<aside>
💡 Welcome to the BASIS AI Safety Policy Fellowship program! Here you can find a draft of the syllabus we’ll use throughout the fellowship; it is subject to change. This curriculum is adapted from Harvard’s AI Safety Student Team and further developed by the BASIS leadership team. Note: please treat the first fellowship meeting as ‘Week 0’ (no advance reading is required).
</aside>
Week 0: A Brief Introduction to AI Safety
- AI Safety vs. AI Security: Demystifying the Distinction and Boundaries (Lin et al., 2025)
- What is AI Safety and Security? (Berger, 2025)
Week 1: Foundations of AI and the Rationale for Safety
- But what is a neural network? (3Blue1Brown, 2017). Watch the first 12.5 min.
- The AI Triad and what it means for national security strategy (Buchanan, 2020). 21 min.
- Visualizing the deep learning revolution (Ngo, 2023). 15 min.
- 4 charts that show why AI progress is unlikely to slow down (Henshall, 2023). 8 min.
Week 2: Overview of Risks from Advanced AI Systems
- An overview of catastrophic AI risks (Hendrycks et al., 2023). Read pg. 2, pgs. 6–7, and the bottom of pg. 8 through pg. 11 (Executive summary; § 2.0, 2.1, 2.3, 2.4). 12 min.
- Harms from increasingly agentic algorithmic systems (Chan et al., 2023). Read pgs. 2–6 (§ 1 & 2). 10 min.
- An overview of catastrophic AI risks (Hendrycks et al., 2023). Read pgs. 38–40 (§ 5.3 and the first half of § 5.4). 10 min.
Additional readings:
- The alignment problem from a deep learning perspective (Ngo et al., 2022).
- International Scientific Report on the Safety of Advanced AI (AI Seoul Summit, 2024).