
Erzhen Hu

eh2qs[at]virginia[dot]edu

I am a second-year CS Ph.D. student (Aug. 2021–present) at the University of Virginia, where I work on Human-Computer Interaction. My Ph.D. advisor is Seongkook Heo.

My research involves developing and evaluating new interactive systems for distributed and hybrid meetings, such as digitizing proxemic interactions to improve ad-hoc and focused conversations, and enhancing communication through AI-enabled augmentation.

I'm interested in Human-Computer Interaction, Video-Mediated Communication, Proxemics, and AR/VR for Remote Collaboration.

GitHub  /  Twitter  /  LinkedIn  /  Google Scholar


News

Research


ThingShare: Ad-Hoc Digital Copies of Physical Objects for Sharing Things in Video Meetings


Erzhen Hu, Jens Emil Grønbæk, Wen Ying, Ruofei Du, and Seongkook Heo
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI), 2023
pdf / 30s preview / video figure / 10-min talk / website /

ThingShare uses a real-time instance segmentation model with image processing to let users share physical objects during video meetings. Users can quickly create digital copies of physical objects in their video feeds, which can then be magnified on a separate panel for focused viewing, overlaid on the user’s video feed for sharing in context, or stored in an object drawer for later review.
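
To illustrate the object-copy idea, here is a minimal sketch of cutting a masked object out of a video frame as a transparent cutout that can be magnified or overlaid. It assumes an instance-segmentation step has already produced a boolean mask; it is an illustration, not ThingShare's actual pipeline.

```python
# Minimal sketch of creating a "digital copy" of a physical object from a video
# frame. The boolean mask is assumed to come from some real-time instance
# segmentation step; this is an illustration, not ThingShare's actual pipeline.
import numpy as np

def make_object_copy(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Cut the masked object out as an RGBA image with a transparent background."""
    ys, xs = np.where(mask)                          # pixels belonging to the object
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    crop = frame[y0:y1, x0:x1]                       # tight bounding box around the object
    alpha = (mask[y0:y1, x0:x1] * 255).astype(np.uint8)
    return np.dstack([crop, alpha])                  # RGBA cutout: magnify, overlay, or store

# Toy example: a synthetic 480x640 frame with a square "object" mask.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[100:200, 300:400] = True
digital_copy = make_object_copy(frame, mask)         # shape (100, 100, 4)
```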


OpenMic: Utilizing Proxemic Metaphors for Conversational Floor Transitions in Multiparty Video Meetings


Erzhen Hu, Jens Emil Grønbæk, Austin Houck, and Seongkook Heo
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI), 2023
pdf / 30s preview / video figure / 10-min talk / website /

OpenMic is a videoconferencing system that uses proxemic metaphors for turn-taking by providing 1) a Virtual Floor, a fixed-feature space that makes users aware of others’ intention to talk, and 2) Malleable Mirrors, video and screen feeds that can be continuously moved and resized to convey speaking turns.
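
As a rough illustration of the Virtual Floor metaphor, the sketch below infers speaking intent from how much of a video tile overlaps a fixed floor region. The data model, floor geometry, and threshold are assumptions for illustration, not OpenMic's implementation.

```python
# Minimal sketch of the Virtual Floor metaphor: a video tile moved mostly onto a
# fixed floor region signals intent to take a speaking turn. The Rect data model,
# floor geometry, and threshold are assumptions, not OpenMic's implementation.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def overlap_area(self, other: "Rect") -> float:
        dx = min(self.x + self.w, other.x + other.w) - max(self.x, other.x)
        dy = min(self.y + self.h, other.y + other.h) - max(self.y, other.y)
        return max(dx, 0.0) * max(dy, 0.0)

VIRTUAL_FLOOR = Rect(x=400, y=0, w=480, h=720)        # fixed-feature space on the shared canvas

def wants_floor(tile: Rect, threshold: float = 0.5) -> bool:
    """True when most of a malleable mirror (video tile) sits on the virtual floor."""
    return VIRTUAL_FLOOR.overlap_area(tile) / (tile.w * tile.h) >= threshold

print(wants_floor(Rect(x=500, y=100, w=200, h=150)))  # True: the tile was moved onto the floor
```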


FluidMeet: Enabling Frictionless Transitions Between In-Group, Between-Group, and Private Conversations During Virtual Breakout Meetings


Erzhen Hu, Md Aashikur Rahman Azim, and Seongkook Heo
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI), 2022
pdf / 30s preview / video figure / 10-min talk / 40-min talk /

Building on proxemics and F-formation theory, FluidMeet is a videoconferencing system that enables out-group members to overhear group conversations while allowing conversation groups to control how much context they share. Users within conversation groups can also quickly switch between in-group and private conversations.
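
A minimal sketch of the overhearing idea: out-group listeners hear a group's audio at a gain set by that group's chosen sharing level. The level names and gain values are illustrative assumptions, not FluidMeet's actual parameters.

```python
# Minimal sketch of controlled overhearing: members hear their own group at full
# volume, while out-group listeners hear it at a gain chosen by that group.
# The level names and gain values are illustrative assumptions, not FluidMeet's.
SHARE_LEVELS = {"private": 0.0, "muffled": 0.25, "open": 1.0}

def overheard_gain(listener_group: str, speaker_group: str, share_level: str) -> float:
    """Audio gain applied to a speaker's stream for a given listener."""
    if listener_group == speaker_group:
        return 1.0                                    # in-group: always full volume
    return SHARE_LEVELS[share_level]                  # out-group: whatever the group shares

print(overheard_gain("group_a", "group_b", "muffled"))  # 0.25: faintly overheard
```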


Enjoy the Ride Consciously with CAWA: Context-Aware Advisory Warnings for Automated Driving


Erfan Pakdamanian, Erzhen Hu, Shili Sheng, Sarit Kraus, Seongkook Heo, and Lu Feng
Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '22), 2022
doi /

This work proposes CAWA, a context-aware advisory warning method for automated driving that detects the non-driving-related task (NDRT) the driver is engaged in and selects the warning modality based on the detected activity.
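
As a toy illustration, the sketch below maps a detected NDRT to a warning modality. The activity labels and the mapping are illustrative assumptions, not the mapping evaluated in the paper.

```python
# Minimal sketch of context-aware modality selection: pick the warning channel that
# the detected NDRT leaves free. Labels and mapping are illustrative assumptions,
# not the mapping evaluated in the paper.
MODALITY_BY_NDRT = {
    "reading":         "auditory",   # eyes are busy, so use sound
    "listening_music": "visual",     # ears are busy, so use the display
    "phone_call":      "haptic",     # eyes and ears are busy, so use vibration
}

def select_warning_modality(detected_ndrt: str) -> str:
    """Fall back to a multimodal cue when the activity is unknown."""
    return MODALITY_BY_NDRT.get(detected_ndrt, "multimodal")

print(select_warning_modality("reading"))   # "auditory"
```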


Enabling Remote Hand Guidance in Video Calls Using Directional Force Illusion


Archana Narayanan, Erzhen Hu, and Seongkook Heo
In Companion Publication of the 2022 Conference on Computer Supported Cooperative Work and Social Computing (CSCW'22 Adjunct), 2022
doi /

This remote meeting system enables guiding a remote partner’s hand with a handheld device that creates a directional force illusion using asymmetric vibration.
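
For intuition, the sketch below generates an asymmetric vibration drive signal: a short, strong pulse in one direction followed by a longer, weaker return. The two phases cancel on average, so the device does not drift, yet the asymmetry is felt as a directional pull. Parameters are illustrative, not those used in the paper.

```python
# Minimal sketch of an asymmetric vibration drive signal: a short, strong pulse in
# one direction followed by a longer, weaker return. The phases cancel on average,
# so the handheld device does not drift, yet the asymmetry is felt as a directional
# pull. Parameters are illustrative, not those used in the paper.
import numpy as np

def asymmetric_cycle(sample_rate=8000, freq=40, strong_frac=0.25, amplitude=1.0):
    n = int(sample_rate / freq)                      # samples in one vibration cycle
    n_strong = int(n * strong_frac)
    strong = np.full(n_strong, amplitude)            # brief, intense push
    weak = np.full(n - n_strong, -amplitude * n_strong / (n - n_strong))  # slow return
    return np.concatenate([strong, weak])            # mean ~0: no net displacement

signal = np.tile(asymmetric_cycle(), 40)             # ~1 s of drive signal for an actuator
print(round(asymmetric_cycle().mean(), 6))           # ~0.0
```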





Design and source code from Jon Barron's website