Diyi Yang: Technology & Social Behavior Colloquium

Human-AI Interaction in the Age of Large Language Models

Monday, May 6th, 1:30 p.m. to 2:30 p.m.

Center for Human-Computer Interaction + Design (Francis Searle 1-122)

Large language models have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, we discuss several approaches to enhancing human-AI interaction using LLMs. The first part examines social skill training with LLMs, demonstrating how we use LLMs to teach conflict resolution skills through simulated practice. The second part develops efficient learning methods for adapting LLMs to low-resource languages and dialects to reduce disparity in language technologies. We conclude by discussing how human-AI interaction via LLMs can empower individuals and foster positive change.

Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research focuses on human-centered natural language processing and computational social science. She is a recipient of the IEEE "AI 10 to Watch" recognition (2020), a Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), an ONR Young Investigator Award (2023), and a Sloan Research Fellowship (2024). Her work has received multiple paper awards or nominations at top NLP and HCI conferences (e.g., Best Paper Honorable Mention at ICWSM 2016, Best Paper Honorable Mention at SIGCHI 2019, and Outstanding Paper at ACL 2022).