Repetitive indoor layouts frequently cause spatial disorientation. Current navigation systems typically intervene reactively with generic instructions, lacking insight into either the environmental root causes or the user's cognitive state. To address this problem, we conducted a VR experiment (N=40) that systematically manipulated geometric symmetry and feature similarity while capturing multimodal behaviors. Results reveal a functional separation: geometric symmetry primarily drives exploratory body rotation, whereas feature similarity determines navigation outcomes. Critically, simultaneous cue failure triggers a performance collapse (increasing mean hesitation duration by 370%) and forces users to switch from active reorientation (scanning via body rotation) to locomotor compensation (e.g., wall-following) based on a dynamic cost-benefit trade-off. Leveraging these patterns, our CNN-BiLSTM model detects the behaviorally defined getting-lost state with >90% agreement with heuristic labels. We contribute design principles for dual context-aware systems. By integrating environment context (geometric or featural ambiguity) and user context (cognitive state), systems can deploy content-adaptive aid, specifically orienting or discriminating aids, to dynamically balance navigation efficiency with active spatial cognition.
ACM CHI Conference on Human Factors in Computing Systems