As self-driving car technology matures, autonomous vehicle research is moving toward building more human-centric interfaces and accountable experiences. Driving simulators avoid many of the ethical and regulatory concerns surrounding self-driving cars and play a key role in testing new interfaces or autonomous driving scenarios. However, apart from validity studies for manual driving simulation, the capabilities of driving simulators in replicating the experience of self-driving cars have not been widely investigated. In this paper, we build six self-driving simulation platforms with varying levels of visual and motion fidelity, ranging from a screen-based in-lab simulator to the mixed-reality on-road simulator we propose. In a user study, we compare the sense of presence and simulator sickness for each simulator configuration, as well as its visual and motion fidelity. Our novel mixed-reality automotive driving simulator, named MAXIM, showed the highest fidelity and presence. Our findings suggest how visual and motion configurations affect the experience in autonomous driving simulators.
LEDs on a strip, when turned on and off in a specific order, create the perception of apparent motion (i.e., beta movement). In the automotive domain, such chase lights have been used to alter drivers' perception of driving speed by manipulating the pixel speed of the LEDs. We argue that the perceived velocity of beta movement in the peripheral view is not based on the actual pixel speed alone but can be influenced by other factors such as the frequency, width, and brightness of the lit LED segments. We conducted a velocity-matching experiment (N=25), systematically varying these three properties, to determine their influence on participants' perceived velocity in a vehicle mock-up. Results show that a higher frequency and stronger brightness increased perceived velocity, whereas segment width had no influence. We discuss how these findings may be applied when designing systems that use beta movement to influence the perception of ambient light velocity.
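The chase-light manipulation described above can be illustrated with a small simulation. The sketch below is a minimal, hypothetical example, not the authors' apparatus: it generates frames for an LED strip in which a lit segment of configurable width and brightness travels at a given pixel speed and update frequency, i.e., the kinds of parameters varied in the experiment.

```python
# Minimal sketch of a chase-light (beta movement) pattern generator.
# Illustrative only: strip length, parameter names, and defaults are
# assumptions, not values from the paper's apparatus.

def chase_frames(num_leds=60, segment_width=4, brightness=0.8,
                 pixel_speed=30.0, frequency=20.0, duration=2.0):
    """Yield per-frame brightness lists for an LED strip.

    pixel_speed: LEDs per second the lit segment travels.
    frequency:   strip update rate in Hz (frames per second).
    brightness:  intensity of lit LEDs in [0, 1].
    """
    num_frames = int(duration * frequency)
    for frame_idx in range(num_frames):
        t = frame_idx / frequency
        head = int(t * pixel_speed) % num_leds      # leading lit LED
        frame = [0.0] * num_leds
        for offset in range(segment_width):         # light the trailing segment
            frame[(head - offset) % num_leds] = brightness
        yield frame

if __name__ == "__main__":
    # Coarse ASCII preview of the first few frames.
    for frame in chase_frames(duration=0.25):
        print("".join("#" if v > 0 else "." for v in frame))
```

Note that the segment's travel speed (pixel speed) and the strip's update frequency are kept as independent parameters here, mirroring the paper's distinction between actual pixel speed and the other properties that may bias perceived velocity.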
Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians. With advances in computer vision, wearable cameras can provide equitable access to such information. However, the always-on nature of these assistive technologies raises privacy concerns for parties who may be recorded. We explore this tension from both perspectives, those of sighted passersby and blind users, taking into account camera visibility, in-person versus remote experience, and the extracted visual information. We conduct two studies: an online survey with MTurkers (N=206) and an in-person experience study between pairs of blind (N=10) and sighted (N=40) participants, where blind participants wear a working prototype for pedestrian detection and pass by sighted participants. Our results suggest that the perspectives of both users and bystanders, as well as the factors above, need to be carefully considered to mitigate potential social tensions.
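The paper does not spell out the prototype's internals here; as a rough illustration of the kind of always-on pedestrian detection such a wearable could run, the following sketch applies OpenCV's stock HOG person detector to a live camera feed. This is a simplification under assumed hardware (the wearable camera appearing as video device 0), not the authors' system.

```python
# Rough sketch of always-on pedestrian detection from a camera feed.
# Illustrative only: uses OpenCV's built-in HOG person detector, which is a
# simplification and not the prototype described in the paper.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # assumption: wearable camera exposed as device 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        # A real assistive device would convey this via audio or haptics.
        print(f"pedestrian at x={x}, y={y}, size={w}x{h}")
cap.release()
```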
Driving is a task that is often affected by emotions. The effect of emotions on driving has been extensively studied, and anger is the emotion that dominates such investigations. Despite the well-documented links between scents and emotions, few studies have explored the effect of olfactory stimulation in the context of driving. As a result, HCI practitioners have very little knowledge of how to design for emotions using olfactory stimulation in the car. We carried out three studies to select scents of different valence and arousal levels (i.e., rose, peppermint, and civet) and anger-eliciting stimuli (i.e., affective pictures and on-road events). We used this knowledge to conduct a fourth user study investigating how the selected scents change the emotional state, well-being, and driving behaviour of drivers in an induced angry state. Our findings enable better decisions on which scents to choose when designing interactions for angry drivers.
Using a phone while driving is distracting and dangerous: it increases the chance of an accident by 400%. Several techniques have been proposed in the past to detect driver distraction due to phone usage. However, such techniques usually require instrumenting the user or the car with custom hardware. While phone usage in a moving car can be detected with the phone's GPS, it is harder to identify whether the phone is being used by the driver or by one of the passengers. In this paper, we present a lightweight, software-only solution that uses the phone's camera to observe the car's interior geometry and determine the phone's position and orientation. We then use this information to distinguish between driver and passenger phone use. We collected data in 16 different cars with 33 different users and achieved an overall accuracy of 94% when the phone is held in hand and 92.2% when the phone is docked (≤1 sec. delay). With just a software upgrade, this work can enable smartphones to proactively adapt to the user's context in the car and substantially reduce distracted driving incidents.
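To make the final decision step concrete, here is a minimal sketch under assumed inputs: suppose an upstream vision step has already estimated the phone's lateral offset from the car's centerline and its yaw relative to the dashboard (both hypothetical feature names, not taken from the paper). Driver/passenger classification then reduces to a simple geometric rule; the actual system derives this decision from the observed interior geometry rather than hand-set thresholds.

```python
# Hypothetical sketch of the driver/passenger decision, assuming an upstream
# vision pipeline has already estimated where the phone sits in the cabin.
# Feature names, units, and thresholds are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class CabinEstimate:
    lateral_offset_m: float   # offset from the car centerline; positive = left
    yaw_deg: float            # phone yaw relative to the dashboard; positive = facing left
    docked: bool              # True if the phone sits in a dock/mount

def classify_user(est: CabinEstimate, left_hand_drive: bool = True) -> str:
    """Return 'driver' or 'passenger' from a coarse cabin-geometry estimate."""
    driver_side_left = left_hand_drive
    if est.docked:
        # Docked phones sit near the centerline, so position alone is ambiguous;
        # fall back to yaw, assuming the dock is angled toward its user.
        toward_left = est.yaw_deg > 0
        return "driver" if toward_left == driver_side_left else "passenger"
    on_left = est.lateral_offset_m > 0
    return "driver" if on_left == driver_side_left else "passenger"

if __name__ == "__main__":
    handheld = CabinEstimate(lateral_offset_m=0.35, yaw_deg=5.0, docked=False)
    print(classify_user(handheld))  # -> "driver" in a left-hand-drive car
```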