About me

I am an Assistant Professor in Computer Science and Engineering at the University of Michigan. My research lies at the intersection of human-computer interaction (HCI) and applied machine learning (ML), with a focus on sound accessibility. Specifically, I am interested in inventing novel sound sensing and visualization systems to support accessibility and healthcare applications. I completed my PhD at the University of Washington and my master's at the MIT Media Lab, and I have worked at Microsoft Research, Google, and Apple.

At the University of Michigan, I direct the Accessibility Lab, where we are actively looking for passionate undergraduate, master's, and PhD students, postdocs, and collaborators (see below). Our current focus is on sound, with the long-term vision of making sounds and sound information accessible to everyone, everywhere: in a noisy restaurant where you have trouble following a conversation, or at home where you miss a knock at the door because you are wearing headphones. We pursue this vision by designing user-centric systems that deliver sound information in alternative ways, such as visual or haptic feedback or enhanced audio. A key aspect of this research is its focus on the user: we conduct a thorough design process to understand our users' concerns, build technology systems that address those concerns, and deploy and study those systems with users in their natural environments.

Our lab publishes in the most selective HCI venues, such as CHI, UIST, and ASSETS, and seven of our papers have received best paper or honorable mention awards. Our systems have also been publicly launched (e.g., one system has over 100,000 users), have drawn interest from companies such as Microsoft, Google, and Apple, and are frequently covered by the press.

Recent news

Nov 15: Invited talk at BostonCHI, the oldest SIGCHI chapter!
Nov 5: Invited talk at Michigan AI Symposium!
Sep 12: Started as an Assistant Professor at the University of Michigan!
Jul 27: Graduated with a PhD from the University of Washington!
May 26: SoundWatch featured in CACM Research Highlights!

Open Positions

Prospective PhD students: I am looking for PhD students with one of two skill sets: (1) prior HCI and user study experience, to lead projects focused on Deaf and disabled populations, or (2) prior applied ML experience with sound and audio, combined with the ability to build HCI systems and run independent HCI-focused projects (conducting user studies, collecting data, and evaluating end-to-end systems). If you fit either profile, please send me an email at profdj [at] umich [dot] edu with a brief justification of your skill set (e.g., relevant research experience), a list of projects in my lab that interest you, and your CV. For more details, please read my research and teaching statements.

Undergraduate/Master's students: Please complete this form and we will reach out to you!

Publications

Sound Actions

Non-Verbal Sound Detection
Commercialized on iPhone and iPad (Try it out)
ICASSP 2022: PAPER | TALK

Talks

Sound Sensing for Deaf and Hard of Hearing Users

Navigating Graduate School with a Disability

Deep Learning for Sound Awareness on Smartwatches

Field Study of a Tactile Sound Awareness Device

Field Deployment of an In-Home Sound Awareness System

Autoethnography of a Hard of Hearing Traveler

Exploring Sound Awareness in the Home

Online Survey of Wearable Sound Awareness

Towards Accessible Conversations in a Mobile Context

Immersive Scuba Diving Simulator Using Virtual Reality

HMD Visualizations to Support Sound Awareness

Videos