This study session has ended. Thank you for your participation.
Preventing users from walking through virtual boundaries (e.g., walls) is an important issue in room-scale virtual environments (VEs), given safety concerns and design limitations. Sensory feedback from wall collisions has been shown to be effective; however, it can disrupt immersion. We hypothesized that a greater sense of presence would discourage users from walking through walls and conducted a two-factor between-subjects experiment (N = 92) that controlled the anthropomorphism (realistic or abstract) and visibility (full-body or hand-only) of self-avatars. We analyzed the participants' behavior and the moment they first penetrated a wall in game-like VEs that gradually prompted participants to walk through the walls. The results showed that the realistic full-body self-avatar was the most effective at discouraging participants from penetrating the walls. Furthermore, participants with lower presence tended to walk through the walls sooner. This study can contribute to applications that require realistic user responses in VEs.
Field studies on public displays can be difficult, expensive, and time-consuming. We investigate the feasibility of using virtual reality (VR) as a test-bed to evaluate deployments of public displays. Specifically, we investigate whether results from virtual field studies, conducted in a virtual public space, would match the results from a corresponding real-world setting. We report on two empirical user studies where we compared audience behavior around a virtual public display in the virtual world to audience behavior around a real public display. We found that virtual field studies can be a powerful research tool, as in both studies we observed largely similar behavior between the settings. We discuss the opportunities, challenges, and limitations of using virtual reality to conduct field studies, and provide lessons learned from our work that can help researchers decide whether to employ VR in their research and what factors to account for if doing so.
Virtual reality experiences and games present believable virtual environments based on graphical quality, spatial audio, and interactivity. Interaction with in-game characters, controlled by computers (agents) or humans (avatars), is an important part of VR experiences. Pre-captured motion sequences increase the visual resemblance to humans; however, this still precludes realistic social interactions (eye contact, imitation of body language), particularly for agents. We aim to make social interaction more realistic via social touch. Social touch is non-verbal and conveys feelings and signals such as coexistence, closeness, and intimacy. In our research, we created an artificial hand to apply social touch in a repeatable and controlled fashion and to investigate its effect on the perceived human-likeness of avatars and agents. Our results show that social touch is effective in further blurring the boundary between computer- and human-controlled virtual characters and contributes to experiences that closely resemble human-to-human interaction.
Spatial recordings allow viewers to move within them and freely choose their viewpoint. However, such recordings make it easy to miss events and difficult to follow moving objects when skipping through the recording. To alleviate these problems, we present Who Put That There, a system that allows users to navigate through time by directly manipulating objects in the scene. By selecting an object, the user can navigate to the moments at which that object changed. Users can also view the trajectories of objects that changed location and directly manipulate them to navigate. We evaluated the system with a set of sensemaking questions in a think-aloud study. Participants understood the system and found it useful for finding events of interest, while remaining present and engaged in the recording.
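As an illustration of the object-centric time navigation described in this abstract, the following is a minimal sketch (not the authors' implementation; all names and data structures are hypothetical) of how a per-object timeline of change events could support "jump to the next moment this object changed" and expose the object's trajectory for scrubbing.

```python
# Hypothetical sketch of object-centric time navigation in a spatial recording:
# each tracked object keeps a time-sorted list of change events, so selecting an
# object lets the viewer jump between the moments it changed or inspect its path.
from dataclasses import dataclass, field
from bisect import bisect_right
from typing import Optional

@dataclass
class ChangeEvent:
    time: float                       # seconds into the recording
    position: tuple                   # (x, y, z) of the object at that moment

@dataclass
class ObjectTimeline:
    object_id: str
    events: list = field(default_factory=list)   # ChangeEvents sorted by time

    def next_change(self, current_time: float) -> Optional[ChangeEvent]:
        """Return the first change of this object strictly after current_time."""
        times = [e.time for e in self.events]
        i = bisect_right(times, current_time)
        return self.events[i] if i < len(self.events) else None

    def trajectory(self) -> list:
        """Timestamped positions, e.g. to render the object's path for direct manipulation."""
        return [(e.time, e.position) for e in self.events]

# Usage: selecting the cup at t = 12.0 s seeks playback to its next change (t = 20.0 s).
cup = ObjectTimeline("cup", [ChangeEvent(5.0, (0, 0, 0)), ChangeEvent(20.0, (1, 0, 0))])
target = cup.next_change(12.0)
if target is not None:
    playback_time = target.time      # seek the spatial recording to this moment
```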
Sketching in virtual reality (VR) enhances the perception and understanding of 3D volumes, but it is currently a challenging task, as spatial input devices (e.g., tracked controllers) do not provide any scaffolding or constraints for mid-air interaction. We present VRSketchIn, a VR sketching application that uses a 6DoF-tracked pen and a 6DoF-tracked tablet as input devices, combining unconstrained 3D mid-air sketching with constrained 2D surface-based sketching. To explore the possibilities that arise from this combination of 2D (pen on tablet) and 3D (6DoF pen) input, we present a set of design dimensions and define the design space of 2D and 3D sketching interaction metaphors in VR. We categorize prior work within our design space and implement a subset of pen-and-tablet sketching metaphors in our prototype. To gain a deeper understanding of which sketching operations users perform with 2D metaphors and which with 3D metaphors, we present findings from usability walkthroughs with six participants.
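To make the distinction between constrained 2D and unconstrained 3D input concrete, here is a minimal sketch (not VRSketchIn's actual code; the function and variable names are assumptions) of one way to derive a surface-constrained stroke point by projecting the tracked pen tip onto the tablet plane, while the raw pen-tip position is kept for mid-air 3D strokes.

```python
# Hypothetical sketch: constrain a 6DoF pen sample to the tracked tablet surface
# by projecting it onto the tablet plane; the unprojected sample stays 3D mid-air.
import numpy as np

def project_onto_tablet(pen_tip: np.ndarray,
                        tablet_origin: np.ndarray,
                        tablet_normal: np.ndarray) -> np.ndarray:
    """Project a 3D pen-tip position onto the tablet plane (2D surface-constrained input)."""
    n = tablet_normal / np.linalg.norm(tablet_normal)
    offset = np.dot(pen_tip - tablet_origin, n)   # signed distance from the plane
    return pen_tip - offset * n                   # closest point on the tablet surface

pen_tip = np.array([0.10, 0.25, 0.40])            # tracked pen tip in world space
tablet_origin = np.array([0.00, 0.20, 0.40])      # a point on the tablet surface
tablet_normal = np.array([0.0, 1.0, 0.0])         # tablet facing up

surface_point = project_onto_tablet(pen_tip, tablet_origin, tablet_normal)  # constrained 2D stroke sample
mid_air_point = pen_tip                                                      # unconstrained 3D stroke sample
```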