About me
My name is Jackie Tang, and I am a fourth-year PhD candidate at the University of Illinois, conducting research in Professor Bosch’s Human Learning Lab and the Bashir Privacy Lab.
My focus areas include:
Human-Computer Interaction, Human-AI Interaction, Human Factors, and Human-Agent Teaming
Trust plays a pivotal role in how individuals engage with automated systems, whether they are robots, virtual assistants, or AI-driven tools. As a human factors researcher, my focus is on understanding the intricate dynamics of trust in agents, particularly in the context of human-agent interaction. By investigating the psychological and environmental factors that influence this trust, I aim to develop models and guidelines that enhance collaboration between humans and agents.
Research Project
-
Exploring the Influence of Self-Avatar Similarity on Human-Robot Trust
Avatars, as digital portrayals of humans, play a pivotal role in fostering embodiment and immersion in virtual reality (VR) environments by providing users with a visual representation of their virtual presence.
This study systematically explores how the similarity between human users and their avatars influences trust dynamics in human-robot interaction (HRI) within VR environments.
-
Can Students Understand AI Decisions Based on Variables Extracted via AutoML?
This study assessed students’ perceptions of predictive variables (i.e., “features”) used in machine learning models for predicting student outcomes; in particular, we explored features crafted by experts versus those extracted by methods for automatic machine learning (i.e., AutoML).
-
The Impact of Perceived Risk on Trust in Human-Robot Interaction
As robots increasingly assist humans in high-risk scenarios, understanding how perceived risk influences human-robot trust becomes crucial. This study investigates the effect of risk perception on trust dynamics in human-robot interaction (HRI) using a virtual reality (VR) fire evacuation scenario.
-
Examining Trust’s Influence on Autonomous Vehicle Perceptions
This research encompasses various elements, such as perceived reliability, safety features, user experience, and the impact of public opinion on personal trust levels.
-
When Robots Say Sorry in High-Stakes Environments
Investigating the conditions that shape an individual's decision to trust or distrust a robot in high-risk, time-critical situations is a crucial step toward developing reliable and acceptable robotic assistants for emergency response. In this study, we explored the role of different types of apologies (explanatory, emotional, and no apology) in trust repair within high-risk environments.
-
PhD Thesis: The Role of Proximity in Human-Agent Teaming
This work aims to derive novel insights into the conceptual linkages between distinct proximity dimensions and human trust toward virtual agents.