Visual Search in VR
Visual search involves a wide spectrum of cognitive abilities ranging from basic perceptual and attentional processing to object recognition, long- and short-term memory functions, navigation, planning, problem-solving, and decision-making. Much research in this area has centered on the relationship between bottom-up, sensory-driven and top-down, goal-directed guidance cues in the allocation of attention. Although existing work has extensively studied psychophysical variables and the plausibility of neuro-computational models, no research group has yet documented how humans solve challenging search problems in complex "real-world" environments.
It is precisely this question, however, that has become increasingly urgent to address, given the growing feasibility of smart technologies that can reduce cognitive load and increase human-operator efficiency in the many contexts that require visual search. Here, we aim to model ecologically valid visual search processes that involve navigation of a 3D virtual environment, uncertainty, and psychological stress.
Publication: ONGOING
This project is supported by the National Science Foundation and the Army Research Laboratory.
3D Visual Search Demo
Ying Wu February 1, 2021
For a recent ARL Capstone event, we created a video demo of our multi-modal visual search paradigm. Watch it here.
Adding the Last Variables
Christian Lay-Geng July 23, 2020
Initial Piloting
Christian Lay-Geng June 22, 2020
After completing the code for the game's basic structure, I felt that the distractor bullets were too different from the targets in shape and tip color, making it easy to spot the targets from a good distance away. I opted to remove the green-tipped bullet and replace it with an uncolored bullet shaped like the target bullet.
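To illustrate the design change described above, here is a minimal sketch of how a trial's bullet set might be generated so that distractors share the target's shape and differ only in tip color. All names here (Bullet, spawn_bullets, the "rifle" and "red" values) are hypothetical placeholders, not the project's actual implementation, which presumably lives inside the game engine itself.

```python
import random
from dataclasses import dataclass


@dataclass
class Bullet:
    shape: str       # hypothetical shape label, e.g. "rifle"
    tip_color: str   # "red" for targets, "none" for uncolored distractors
    is_target: bool


def spawn_bullets(n_targets, n_distractors, seed=None):
    """Build one trial's bullet set. Distractors now match the target's
    shape and differ only in tip color, raising target-distractor
    similarity so targets cannot be spotted from far away."""
    rng = random.Random(seed)
    targets = [Bullet("rifle", "red", True) for _ in range(n_targets)]
    # Same shape as the target, uncolored tip: the change described above.
    distractors = [Bullet("rifle", "none", False) for _ in range(n_distractors)]
    bullets = targets + distractors
    rng.shuffle(bullets)
    return bullets


if __name__ == "__main__":
    for b in spawn_bullets(n_targets=2, n_distractors=6, seed=42):
        print(b)
```

The key design point is that similarity is controlled along a single feature dimension (tip color) while shape is held constant, which is what makes the search harder than the earlier green-tipped distractor set.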
Designing the Bullet Room
Christian Lay-Geng March 13, 2020