Our work on transforming non-speech captions with anchored generative models is headed to ASSETS 2025 in Denver!
Hello World!
I am a Human-Computer Interaction researcher and second-year Ph.D. student in Computer Science & Engineering at the University of Michigan, where I work with Prof. Dhruv Jain in the Soundability Lab.
Before starting my Ph.D., I completed an M.S. in Human-Computer Interaction at the University of Michigan School of Information and a B.S. in Data Science and Psychology at The Ohio State University.
Broadly, my research explores how human-AI systems can make accessibility a collaborative and adaptive process. I approach this from two perspectives:
Personal Auditory Intelligence
Building systems that enable people with disabilities to personalize how they perceive and navigate sound in their environments.
Collaborative Accessibility
Developing agents that support mixed-ability groups in co-creating and sharing accessible experiences.
Recent News
I will present SoundWeaver at CHI 2025 in Yokohama, sharing how we support real-time sensemaking of auditory environments.
Our Human-AI Collaborative Sound Awareness (HACS) paper was accepted to CHI 2024!
Thrilled to continue my Ph.D. journey at Michigan CSE and keep building accessible technologies with the Soundability Lab.
Our field study of mobile sound recognition systems received an Honorable Mention at ASSETS 2023.
Publications
Contact
I love connecting with researchers, designers, and community partners who care about creating a more accessible world. Feel free to reach out!
Find Me
Computer Science & Engineering
University of Michigan
Soundability Lab · Ann Arbor, MI