Research  |  Publications  |  Bio

Erzhen Hu


I am a first-year Ph.D. student in the Department of Computer Science at the University of Virginia, where I work on Human-Computer Interaction. My Ph.D. advisor is Seongkook Heo.

My journey from sociology and psychology to HCI has taught me the value of socio-spatial theories and methods, and it led me to developing new interfaces for distributed and hybrid meetings.

I'm interested in Human-Computer Interaction, Collaborative and Social Computing, AR/VR for Remote Collaboration, and Assistive Technologies.

GitHub  /  Twitter  /  LinkedIn  /  Google Scholar





FluidMeet: Enabling Frictionless Transitions Between In-Group, Between-Group, and Private Conversations During Virtual Breakout Meetings

E. Hu, M. A. R. Azim, S. Heo
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI), 2022
doi / pdf / 30s preview / video figure / 8-min talk / 40-min talk

FluidMeet enables out-group members to overhear group conversations while allowing conversation groups to control their shared level of context. Users within conversation groups can also quickly switch between in-group and private conversations.


Enabling Remote Hand Guidance in Video Calls Using Directional Force Illusion

A. Narayanan, E. Hu, S. Heo
In Companion Publication of the 2022 Conference on Computer Supported Cooperative Work and Social Computing (CSCW'22 Adjunct), 2022
pdf

This remote meeting system guides a remote partner's hand using a handheld device that creates a directional force illusion through asymmetric vibration.


Enjoy the Ride Consciously with CAWA: Context-Aware Advisory Warnings for Automated Driving

E. Pakdamanian, E. Hu, S. Sheng, S. Kraus, S. Heo, L. Feng
Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '22), 2022
doi / pdf

This work proposes a context-aware advisory warning method (CAWA) for automated driving that detects the non-driving-related task (NDRT) the driver is engaged in and selects the warning modality based on the detected activity.

Design and source code from Jon Barron's website