Human-Automation (AI) Interaction

Challenges and Solutions in Teleoperation: Enhancing Human-Robot Interaction in Virtual Reality

Teleoperating robots is demanding even for experienced operators. This study examines the obstacles associated with teleoperation and proposes solutions to improve human-robot interaction, surveying input techniques and technological advances with an emphasis on situational awareness and robot control interfaces. Teleoperation systems should be designed around common human behaviors and cognitive processes. The case study explores the use of a virtual reality headset and handheld controllers for the remote operation of a humanoid torso robot. The results underscore the need for further research and development to address the challenges of teleoperation.

Safety and Trust in Human-Robot Collaboration. DOI: 10.1080/24725838.2023.2287015

Collaborative robots, also known as cobots, have been deployed in recent years to assist human workers in manufacturing and production workplaces. The primary purpose of introducing cobots to the workplace is to maximize work performance and reduce industrial accidents. However, occupational safety and trust issues must still be resolved to achieve efficient human-robot collaboration.

Effects of an intelligent virtual assistant on office task performance and workload in a noisy environment. Applied Ergonomics. DOI: 10.1016/j.apergo.2023.103969

This study examines the effects of noise and the use of an Intelligent Virtual Assistant (IVA) on the task performance and workload of office workers. Data were collected from forty-eight adults across varied office task scenarios (i.e., sending an email, setting up a timer/reminder, and searching for a phone number/address) and noise types (i.e., silence, non-verbal noise, and verbal noise). The baseline condition was measured without the use of an IVA. Significant differences in performance and workload were found on both objective and subjective measures. In particular, verbal noise emerged as the primary factor affecting performance when using an IVA, and task performance depended on both the task scenario and the noise type. Subjective ratings indicated that participants preferred to use the IVA for less complex tasks. Future work should examine the effects of task type, demographics, and learning curves. This work can also guide IVA system designers by highlighting the factors that affect performance.

Perceived trust in artificial intelligence technologies: A preliminary study. Human Factors and Ergonomics in Manufacturing & Service Industries. DOI: 10.1002/hfm.20839

Artificial intelligence (AI) is becoming increasingly prevalent in all spheres of society, yet users' and customers' perception of AI remains the main barrier to its widespread adoption. Previous studies have shown that societal acceptance of new technologies depends on their perceived characteristics. This study examined users' perceptions of trust, task difficulty, and application performance when using an AI-based technology; these factors help elucidate the mechanisms by which users build trust in AI. A total of 18 participants took part in an experiment using the Google AutoDraw software as the AI tool. Task difficulty, perceived performance, and task success or failure emerged as influential factors in perceived trust. Users' trust in new AI products can therefore be increased by improving product performance and supporting successful task completion. These results and insights can help AI product developers increase users' trust in, and attraction to, their technologies and applications.

Relationships between Physiological Signals and Stress Levels in the Case of Automated Technology Failure. Human-Intelligent Systems Integration. DOI: 10.1007/s42454-019-00003-w

Although successful automation can enrich people's lives, prolonged use of unreliable automation has negative impacts on users. This study examines how prolonged use of an unreliable auto-proofreading system affects users' trust levels and physiological responses. Nineteen native English speakers performed tasks correcting grammatical errors in each of 20 sentences under reliable and unreliable proofreading conditions. During the tasks, participants' electrodermal activity (EDA) was recorded and their perceived trust in the proofreading system was evaluated. As the unreliable auto-proofreading system malfunctioned, perceived trust decreased gradually and a noticeably increasing pattern in the EDA signals was observed; in contrast, with the reliable auto-proofreading system, perceived trust increased gradually and a stable or decreasing EDA pattern was observed. Prolonged use of an unreliable system thus aggravates anxiety, increasing both distrust and EDA levels. These findings provide empirical data for designing fail-safe automation features that minimize users' anxiety.