Hey :)

I'm Hongni 虹霓

About Me

I am about to begin my PhD at HKUST(GZ), joining the team of Prof. Dr. Xin Tong. Previously, I was a research assistant at the DKU HCI Lab, also under Prof. Tong's supervision. I earned my master's degree in Digital and Interaction Design from Politecnico di Milano, where I completed my thesis under the supervision of Prof. Dr. Xin Tong, Prof. Dr. Ray LC, and Prof. Mario Covarrubias Rodriguez. My research interests encompass Extended Reality (XR), generative AI, and embodied interaction design; for example, I develop dynamic prototypes such as projection-based AR for language learning and AI-driven virtual pets. If you share a passion for these areas, I would be glad to connect, exchange ideas, and collaborate.

Publications

Year 2023

“I Keep Sweet Cats In Real Life, But What I Need In The Virtual World Is A Neurotic Dragon”: Virtual Pet Designs With Personality Patterns.

Hongni Ye, Ruoxin You, Kaiyuan Lou, Yili Wen, Xin Yi and Xin Tong

In Proceedings of Chinese CHI 2023

We designed the virtual pets' visual appearances in voxel format, following design guidelines derived from the Five-Factor Model (FFM). We then investigated people's perceptions of the virtual pets' personalities and appearances through two user studies.

DOI PDF

RedCapes: Design and Evaluation of a Game Towards Improving Autistic Children's Privacy Awareness.

Xiaowen Yuan, Hongni Ye, Ziheng Tang, Xiangrong Zhu, Yaxing Yao and Xin Tong

In Proceedings of Chinese CHI 2023

Research on privacy learning for typically developing (TD) children often neglects autistic children's unique needs. Our study therefore aims to understand the challenges autistic children face in learning about privacy and to design an effective privacy education game for them. We designed a serious game, RedCapes, and recruited 9 autistic children and 6 TD children to evaluate it.

DOI PDF

WooGu: Exploring an Embodied Tangible User Interface for supporting children to learn farm-to-table food knowledge.

Hongni Ye, Tong Wu, Lawrence H Kim, Min Fan, Xin Tong

Proceedings of the 22nd Annual ACM Interaction Design and Children Conference

We explored the design of a prototype named WooGu, which combines tangible user interfaces (TUIs) and embodied interactions to improve young children's food literacy. WooGu offers three design features: a cube displaying user interfaces, step-by-step tasks guiding children to learn about food from farm to table, and hands-on cardboard props that enable embodied interactions.

DOI PDF

PetGen: Design and Generation of Virtual Pets

Hongni Ye, Ruoxin You, Kaiyuan Lou, Yili Wen, Xin Yi, Xin Tong

2023 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)

In this work, we propose PetGen, an application that uses AI algorithms to generate varied virtual pet appearances following the Five-Factor Model (FFM). The 3D voxel pets' appearances are initialized from four archetypes and then undergo recombination, dyeing, and texturing stages, in which AI algorithms are applied as filters.

DOI PDF

Year 2022

Twilight Rohingya: The Design and Evaluation of Different Navigation Controls in a Refugee VR Environment

Hongni Ye, Chaoyu Zhang, Hongsheng Xu, Ray LC, Xin Tong

2022 International Conference on Cyberworlds (CW)

In this research, we designed a 360° VR video-based prototype environment depicting real-life scenarios of a refugee camp in Southern Bangladesh. Our study included two conditions: active (freely selecting thematic video clips) and passive (watching clips in a predetermined sequence).

DOI PDF