Robotics Engineering Colloquium Speaking Series: Dr. Andrew Hundt

Friday, February 28, 2025
3:00 pm to 4:00 pm
Location: Room 420 and virtually (see Event Details for Zoom link)

Identity-Safe AI and Robotics


Speaker: Andrew Hundt

Abstract: “Breaking my wheelchair is like breaking my legs”: For many, the risks of biased general-purpose AI in household and workplace robotics are not abstract, but deeply personal. Current robot learning methods prioritize efficiency, accuracy, and generalization, but often fail to account for diverse populations, such as disabled people. This can lead to harmful oversights, such as an AI approving robots to remove people’s essential mobility aids—posing serious risks. Analogously, self-driving vehicle companies whose systems failed to account for all pedestrians have caused bystander injuries and have even been shut down. Consequently, there is an urgent need for advancements towards Identity-Safe AI and Robotics, ensuring robots and other learning systems are safe, effective, and just across all populations and backgrounds.
In my talk, I will demonstrate how subtle biases in physical AI-driven robots can lead to significant physical safety risks and discriminatory actions on household and workplace robots. To address these issues, my research approach fuses AI, Robotics, Human-Computer Interaction (HCI), Human-Robot Interaction (HRI), and mixed methods to take steps towards true generalization—capabilities that work better for all people.
I will connect methods addressing the three goals of Identity-Safe AI and Robotics—safety, effectiveness, and justice—to my research vision. Regarding safety, I will demonstrate red-teaming methods to show how identity biases in robotics generate physical safety risks and discriminatory actions. To advance effective robot learning, I will present a Q-Learning-based Reinforcement Learning (RL) algorithm for safe and efficient multi-step robot manipulation and connect it to the potential for explosive growth in the capabilities of robots. For justice, I will demonstrate empirical methods to detect and quantify identity-based limitations in robot learning at scale, discuss redirecting and refusing harmful actions, and describe how gathering community input and integrating it into algorithms improves outcomes. I will conclude by outlining a long-term vision for community-led safety metrics and robust algorithms to mitigate bias and safety risks in real-world AI and robotics.

Bio: Andrew Hundt researches human-centered processes, metrics, and algorithms that respect human rights and human needs in AI and Robotics. He was awarded a competitive Computing Innovation Fellowship (CIFellow), sponsored by the Computing Research Association and the National Science Foundation, and is at Carnegie Mellon's Robotics Institute working with Prof. Jean Oh. He has quantitatively demonstrated race and gender bias in Robot Learning.
Andrew earned his PhD on “Effective Visual Robot Learning” in 2021 from The Johns Hopkins University with Profs. Gregory D. Hager and Peter Kazanzides. Andrew’s research has been published in top-tier venues such as RA-L, ICRA, IROS, FAccT, and CVPR. His work has been recognized by major media outlets, including a Scientific American cover story, The Washington Post, and the BBC World Service, and he is regularly quoted in the press on AI and Robotics.
 


Department(s): Robotics Engineering