Introducing Christian Lay-Geng

 

Hi! I am Christian Lay-Geng and I am a 4th year Cognitive Science major at UCSD specializing in Human Design. I have been involved with VR since my 2nd year, building projects in Unity for the VR Club and hackathons on campus. I love how VR can rewrite reality and send you to different worlds. I am helping set up the assets in Dr. Ying Wu’s VR experiments, which investigate how people search or change their search patterns when under pressure.

I also participated in the Swartz Center's NeuroLab for Experimental and Architectural Design (NLEAD) workshop. The workshop brought architecture and neuroscience students together to investigate how architectural design affected students, by running small experiments with biometric devices every week. Working with biometrics was a very different experience from typical classroom learning, and running experiments made me sympathize with researchers about the many issues that can arise when collecting data. It was fun to run our own experiments, and the guest lecturers came from an extremely diverse range of backgrounds.

Steps Toward Integrating Electrocardiographic Data with EEG

Electrocardiography (ECG) is an important biometric tool for estimating aspects of mental state related to stress, relaxation, and more. The HeartyPatch is a low-cost, wireless ECG recording device that is easy to set up. With the proper firmware configuration, the HeartyPatch can stream raw ECG signals in real time over a pre-designated Wi-Fi network. The system developed at SCCN combines Windows PowerShell and Python scripts to turn on the Mobile Hotspot function of a Windows PC, establishing a local Wi-Fi network for the HeartyPatch to connect to. Once the device is connected to the network, the system starts receiving its signals and broadcasts the data via Lab Streaming Layer (https://github.com/sccn/labstreaminglayer). Together, the device and scripts yield a plug-and-play, one-click ECG acquisition system.
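As a rough illustration of the relay step, the Python sketch below receives ECG samples from the device over TCP and re-publishes them as an LSL stream using pylsl. The device address, port, sampling rate, and packet layout shown here are assumptions for illustration; the actual values depend on the HeartyPatch firmware configuration.

import socket
import struct

from pylsl import StreamInfo, StreamOutlet

DEVICE_IP = "192.168.137.2"   # assumed address on the PC's Mobile Hotspot subnet
DEVICE_PORT = 4567            # assumed TCP streaming port of the HeartyPatch firmware
SAMPLING_RATE = 128           # assumed nominal ECG sampling rate (Hz)

# Describe the outgoing LSL stream: one ECG channel, 32-bit float samples.
info = StreamInfo("HeartyPatch", "ECG", 1, SAMPLING_RATE, "float32", "heartypatch-ecg")
outlet = StreamOutlet(info)

def read_exact(sock, n):
    """Read exactly n bytes from the socket (empty bytes means the device disconnected)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

with socket.create_connection((DEVICE_IP, DEVICE_PORT)) as sock:
    while True:
        raw = read_exact(sock, 4)            # assume one little-endian int32 sample per packet
        if not raw:
            break
        (sample,) = struct.unpack("<i", raw)
        outlet.push_sample([float(sample)])  # LSL timestamps the sample on arrival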

Eye Tracking in VR

 

During Escape Room problem solving, eye gaze coordinates (x, y, z) detected by the Vive Pro Eye headset are converted into a stream of gaze direction vectors. These vectors are combined with information about the user's head direction and eye position inside the game world to create a ray. In the top image, the player's head is represented by the avatar, and her line of gaze is represented as the pink ray. In the middle image, the spotlight is centered on a fixated object, in this case a book. When the spotlight (or ray) collides with an object (e.g., the book), the unique name of the object and the amount of time spent in contact with the object are recorded (bottom image). This information is synchronized with time-series EEG samples and other physiological data streams via Lab Streaming Layer (https://github.com/sccn/labstreaminglayer).

 

(Note: Spotlights and rays are not visible to the user in the VR environment.  They are only seen by the experimenter.)
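For readers curious about the underlying computation, here is a minimal Python/numpy sketch of the same idea. The actual implementation runs inside Unity, where the physics engine handles the ray-object collisions; the object names, the simple sphere "collider" test, and the marker stream name below are illustrative assumptions.

import numpy as np
from pylsl import StreamInfo, StreamOutlet

# Marker stream for gaze-object hits, to be synchronized with EEG and other LSL streams.
outlet = StreamOutlet(StreamInfo("GazeHits", "Markers", 1, 0, "string", "gaze-hits"))

def gaze_ray(head_pos, head_rot, gaze_dir_local):
    """Combine head position/rotation with the eye tracker's head-local gaze vector
    to obtain a world-space ray (origin, unit direction)."""
    direction = head_rot @ gaze_dir_local               # rotate gaze into world coordinates
    return head_pos, direction / np.linalg.norm(direction)

def hits_sphere(origin, direction, center, radius):
    """Return True if the ray passes within `radius` of an object's center."""
    to_center = center - origin
    t = np.dot(to_center, direction)                    # closest approach along the ray
    closest = origin + t * direction
    return t > 0 and np.linalg.norm(closest - center) <= radius

# Hypothetical scene objects: unique name -> (center, collider radius).
objects = {"book": (np.array([0.4, 1.0, 2.0]), 0.15)}

head_pos = np.array([0.0, 1.6, 0.0])
head_rot = np.eye(3)                                    # identity rotation: facing +z
origin, direction = gaze_ray(head_pos, head_rot, np.array([0.2, -0.3, 1.0]))

for name, (center, radius) in objects.items():
    if hits_sphere(origin, direction, center, radius):
        # In the real system the dwell time on each object is also accumulated.
        outlet.push_sample(["fixation:" + name])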

Escape Room Playback

 

 

 

 

It is useful to retain a complete record of all actions and events in each Escape Room problem solving session for downstream visualization and validation purposes. We created an interface that records the unique names and positions of all active game objects at the start of each session, and then records position changes frame by frame at a rate of 30 frames per second. During playback in Unity, objects that did not move during the session remain in their original positions, whereas objects that did move undergo a linear interpolation of rotation and translation on each frame until they settle into their new positions.
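The sketch below illustrates the playback interpolation in Python (the actual interface is implemented in Unity). The 30 fps rate comes from the recording described above; the keyframe layout and object pose values are made up for the example, and rotations are interpolated spherically via SciPy.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

FPS = 30  # recording and playback rate (frames per second)

# Recorded keyframes for one moving object: frame index -> (position, rotation).
keyframes = {
    0:  (np.array([0.0, 1.0, 0.0]), Rotation.from_euler("y", 0, degrees=True)),
    30: (np.array([0.5, 1.0, 0.3]), Rotation.from_euler("y", 90, degrees=True)),
}

frames = sorted(keyframes)
positions = np.array([keyframes[f][0] for f in frames])
rotations = Rotation.from_quat([keyframes[f][1].as_quat() for f in frames])
slerp = Slerp(frames, rotations)

def pose_at(frame):
    """Linearly interpolate position and spherically interpolate rotation for a frame."""
    frame = float(np.clip(frame, frames[0], frames[-1]))
    pos = np.array([np.interp(frame, frames, positions[:, axis]) for axis in range(3)])
    rot = slerp([frame])[0]
    return pos, rot

# Example: the object's pose halfway between the two keyframes (frame 15 = 0.5 s).
pos, rot = pose_at(15)
print("position:", pos, "rotation (quaternion):", rot.as_quat())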

Tracking Non-Stationary Brain Network Dynamics

Segmenting high-dimensional physiological or behavioral data during Escape Room problem solving can be a challenge. Here, we propose to track changes in non-stationary brain network dynamics. While working to solve Escape Room puzzles, participants log, at five-minute intervals, their motivational state (happy, lost, ambivalent, angry, sad, thinking, curious), how close they feel to solving the puzzle, and how much they would like to continue searching, by means of an in-game survey that appears in the virtual environment (see image above). Their responses are time-stamped and synchronized with the concurrently recorded EEG.
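One lightweight way to do this time-stamping, sketched below in Python, is to push each survey response as a string marker on an LSL stream so that it shares a clock with the EEG. The stream name and response encoding here are illustrative assumptions.

from pylsl import StreamInfo, StreamOutlet, local_clock

# Irregular-rate marker stream for survey responses.
outlet = StreamOutlet(StreamInfo("SurveyResponses", "Markers", 1, 0, "string", "escape-room-survey"))

def log_survey(state, closeness, wish_to_continue):
    """Push one survey response; LSL records it on the same clock as the EEG stream."""
    marker = "state=%s;closeness=%d;continue=%d" % (state, closeness, wish_to_continue)
    outlet.push_sample([marker], local_clock())

# Example response entered through the in-game survey.
log_survey("curious", closeness=4, wish_to_continue=5)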

After preprocessing and artifact correction or removal, the complete EEG data set undergoes Adaptive Mixture Independent Component Analysis (AMICA), developed by Jason Palmer at the Swartz Center for Computational Neuroscience and described in Hsu et al. (2018). AMICA is a general unsupervised-learning approach that uses a mixture of distinct ICA models, each representing a different set of statistically independent sources, to characterize underlying EEG source activities associated with distinct patterns of whole-brain engagement. This approach allows us to decompose the moment-to-moment fluctuations in the activation of different brain systems from continuous, unlabeled EEG data recorded during exploration of the Escape Rooms. We will explore how different brain systems may come to be engaged at different intensities as a function of changes in motivational state recorded through the in-game surveys.

Hsu, S.-H., Pion-Tonachini, L., Palmer, J., Miyakoshi, M., Makeig, S., & Jung, T.-P. (2018). Modeling brain dynamic state changes with adaptive mixture independent component analysis. Neuroimage, 183, 47–61. doi:10.1016/j.neuroimage.2018.08.001
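As a rough illustration of how the AMICA output can be turned into a time course of brain-state engagement, the sketch below assumes that the per-model log-likelihoods at each EEG sample have been exported from the decomposition; it normalizes them into model probabilities and smooths them over time. The array shapes, variable names, and smoothing window are assumptions for illustration, not part of the AMICA software itself.

import numpy as np

def model_probabilities(log_likelihoods, smooth_samples=256):
    """Softmax across AMICA models at each sample, then a moving-average smooth.
    log_likelihoods: array of shape (n_models, n_samples)."""
    ll = log_likelihoods - log_likelihoods.max(axis=0, keepdims=True)  # numerical stability
    probs = np.exp(ll)
    probs /= probs.sum(axis=0, keepdims=True)
    kernel = np.ones(smooth_samples) / smooth_samples
    return np.vstack([np.convolve(p, kernel, mode="same") for p in probs])

# Example with random numbers standing in for real AMICA output (3 models, 10,000 samples).
rng = np.random.default_rng(0)
fake_log_likelihoods = rng.normal(size=(3, 10_000))
probs = model_probabilities(fake_log_likelihoods)
dominant_model = probs.argmax(axis=0)   # which model (brain state) dominates each sample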

Creating Escape Room Scenarios

  

 

Throughout this process, our main goal has been to design a series of puzzles of equal difficulty in a virtual reality space that can be used to test problem solving and insight. With a desired test length of roughly 25 minutes, we aimed for approximately three puzzles per environment.

In order to design these puzzle sequences, I researched a multitude of puzzle types, ranging from more physically based puzzles to pattern recognition. As an escape room veteran myself, I had seen firsthand a variety of problem-solving paradigms that could serve as the source of puzzles, such as ciphers, object placement, and more. Even so, one of the major challenges of this project was to make the puzzles varied enough while keeping them at the same difficulty. By tying the environments together with similar architecture, decor, and props like books, vases, and chairs, I was able to maintain quite similar test settings, with the puzzles themselves as the only main variable.

From there, I designed each series around the same objective: find the blue key. Each series also has clues, and while I took a different approach to each puzzle, such as the map comparison puzzle in room 1 or the object orientation puzzle in room 2, we made sure to test each one and compare its completion time and relative difficulty against the others, so that nothing was too difficult or too easy.

Thus far, our preliminary tests with rooms 1-3 suggest that the difficulties of these puzzles are similar, with an average of about 25 minutes each, but norming tests will allow us to more fully tune and adjust the rooms for equal difficulty.

-Josh Pallag

Biovotion Integration with Lab Streaming Layer (LSL)

Lab Streaming Layer is a system for the unified collection of time series data in research experiments. It handles the networking, time synchronization, (near) real-time access, centralized collection, visualization, and disk recording of the data. In short, it collects data streams from all pieces of compatible control software and aligns them in time. LSL gives us, as experimenters, an excellent opportunity to analyze biosignals in more complex ways.

The Biovotion monitor detects multiple body signals, including electrodermal activity, temperature, and heart rate. I started the software-level integration of the Biovotion for the simple reason that I needed it for my own (still in progress) experiment on fear of heights, as well as for our work on problem solving. It took me two weeks of communicating with the original developer to understand the overall structure of the software. I was advised to do the integration at the upper level of the code, but it seemed to me that LSL operates at a lower level, so I instead inserted the code into the software's DLL, and it worked well (which took me another two weeks).

So how does the code I added work? To integrate a program with LSL, one first copies the LSL library into the software and then adds a few lines of code that construct an LSL outlet. After that (specifically for the Biovotion software), I found the line of code in the DLL that sends the data out and added one line below it that forwards the same data into the LSL pipeline, and it worked perfectly. The Biovotion sends its data as strings: at each time stamp (every 1/51.2 seconds), it sends all the information, including signal channel, time, and data, in one string. So I essentially "hijacked" that information and re-sent it through the LSL pipeline.
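The snippet below is a Python analogue of that "one added line", just to make the idea concrete; the real hook lives inside the Biovotion software's DLL, and the stream name and example string format here are illustrative assumptions.

from pylsl import StreamInfo, StreamOutlet

SRATE = 51.2  # one data string every 1/51.2 seconds, as described above

outlet = StreamOutlet(StreamInfo("Biovotion", "Physio", 1, SRATE, "string", "biovotion-strings"))

def on_device_message(message):
    """Called at the point where the software already sends a data string out;
    the added line simply re-sends the same string into the LSL pipeline."""
    outlet.push_sample([message])

# Hypothetical example of one such string (channel; device time; value).
on_device_message("HR;12345.678;72")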

-Qiwei Dong

Subathra Raj

 

My name is Subathra Raj and I am an incoming fourth year Cognitive Science/Neuroscience Specialization undergraduate student at UCSD. I am currently in the process of applying to medical schools, and I aspire to become a surgeon. I have been fascinated by the brain since an AP Biology class about five years ago. I have been working with Dr. Ying Wu at the Swartz Center for the past year and a half, studying resting state brain activity associated with creativity and problem solving. My role has primarily involved analyzing EEG and behavioral data, recruiting subjects, and running experiments. I look forward to the incorporation of virtual reality tasks into this experiment and to observing how this modality will impact brain activity and behavioral responses.

Multimodal Data during Escape Room Problem Solving

By simultaneously studying behavior (including the allocation of visual attention), brain activity, and autonomic arousal, we hope to obtain a more robust understanding of problem solving in realistic environments than would be possible by studying each modality individually. The video below shows four types of data that we can record as individuals solve “Escape Room” puzzles in a 3D virtual environment. Sensors include a Cognionics EEG headband, a Biovotion armband with a photoplethysmogram (PPG) monitor, an aGlass eye tracker, and a Notch IMU-based motion tracker.

Qiwei Dong

 

 

I’m a 3rd year Cognitive Science student (specializing in Machine Learning & Neural Computation) at UCSD. I’m working at the Swartz Center for Computational Neuroscience on a project exploring EEG and physiological responses in acrophobia patients experiencing extreme heights (for example, cliffs and building rooftops) in a VR environment. I’m also working with Professor Ying Wu on the integration of heart rate data from a wearable biomonitor (Biovotion) with a signal streaming library called Lab Streaming Layer (https://github.com/sccn/labstreaminglayer). Our goal is to synchronize time-series EEG and PPG (photoplethysmogram) data with cognitively relevant events during a set of VR-based problem solving tasks modeled after escape room games.