How to Make Hard Choices in AI | Atay Kozlovski, Researcher at the University of Zurich

DataCamp · March 20, 2026 · 1:31 · ai_ml_education

Summary

This episode examines the ethical dilemmas and practical challenges of deploying high-stakes AI tools, highlighting the potential for harm and the difficulty of maintaining meaningful human oversight as AI scales. It is especially useful for AI educators and students seeking to understand responsible AI development, ethical decision-making, and the societal impact of AI systems.

Description

Across the AI industry, high-stakes tools are being deployed in places where errors can harm people: sepsis alerts in hospitals, identity checks, welfare fraud detection, immigration enforcement, and recommendation systems that shape life outcomes. The pattern is familiar: scale and speed go up, while human review becomes rushed, shallow, or punished for disagreeing. In daily work, that can look like a nurse forced to act on false alarms, or a team using an LLM summary in ways the designers never planned. When should you slow down deployment? How do you detect new “wild” use cases early? And what does responsible tracking and oversight look like under real pressure?

Atay Kozlovski is a Postdoctoral Researcher at the University of Zurich’s Center for Ethics. He holds a PhD in Philosophy from the University of Zurich, an MA in PPE from the University of Bern, and a BA from Tel Aviv University. His current research focuses on normative ethics, hard choices, and the ethics of AI.

In the episode, Richie and Atay explore why AI failures keep happening, from automation bias to opaque targeting and hiring models. They unpack “meaningful human control,” accountability, and design in healthcare, government, and warfare. You’ll also hear about deepfakes, consent, digital twins, AI-driven civic engagement, and much more.

Find DataFramed on DataCamp (https://www.datacamp.com/podcast) and on your preferred podcast streaming platform:
Apple Podcasts: https://podcasts.apple.com/us/podcast/dataframed/id1336150688
Spotify: https://open.spotify.com/show/02yJXEJAJiQ0Vm2AO9Xj6X?si=d08431f59edc4ccd

Links Mentioned in the Show:
“Lavender” IDF recommendation system: https://www.972mag.com/lavender-ai-israeli-army-gaza/
Amnesty International reports on AI/automation in welfare systems: https://www.amnesty.org/en/latest/news/2025/07/uk-governments-unchecked-use-of-tech-and-ai-systems-leading-to-exclusion-of-people-with-disabilities-and-other-marginalized-groups/
“Meaningful Human