How to Make Hard Choices in AI | Atay Kozlovski, Researcher at the University of Zurich
Across the AI industry, high-stakes tools are being deployed in places where errors can harm people: sepsis alerts in hospitals, identity checks, welfare fraud detection, immigration enforcement, and recommendation systems that shape life outcomes. The pattern is familiar: scale and speed go up, while human review becomes rushed, shallow, or punished for disagreeing. In daily work, that can look like a nurse forced to act on false alarms, or a team using an LLM summary in ways the designers never planned. When should you slow down deployment? How do you detect new "wild" use cases early? And what does responsible tracking and oversight look like under real pressure?
Atay Kozlovski is a Postdoctoral Researcher at the University of Zurich's Center for Ethics. He holds a PhD in Philosophy from the University of Zurich, an MA in PPE from the University of Bern, and a BA from Tel Aviv University. His current research focuses on normative ethics, hard choices, and the ethics of AI.
In the episode, Richie and Atay explore why AI failures keep happening, from automation bias to opaque targeting and hiring models. They unpack "meaningful human control," accountability, and design in healthcare, government, and warfare. You'll also hear about deepfakes, consent, digital twins, AI-driven civic engagement, and much more.
Find DataFramed on DataCamp (https://www.datacamp.com/podcast) and on your preferred podcast streaming platform:
Apple Podcasts:
https://podcasts.apple.com/us/podcast/dataframed/id1336150688
Spotify:
https://open.spotify.com/show/02yJXEJAJiQ0Vm2AO9Xj6X?si=d08431f59edc4ccd
Links Mentioned in the Show:
"Lavender" IDF recommendation system - https://www.972mag.com/lavender-ai-israeli-army-gaza/
Amnesty International reports on AI/automation in welfare systems - https://www.amnesty.org/en/latest/news/2025/07/uk-governments-unchecked-use-of-tech-and-ai-systems-leading-to-exclusion-of-people-with-disabilities-and-other-marginalized-groups/
"Meaningful Human