research

invisible maze task

The neuroscientific study of human navigation has been constrained by traditional brain imaging methods that require participants to remain stationary. Such approaches neglect a central component of navigation: the multisensory experience of self-movement. Navigating by actively moving through space combines multisensory perception with internally generated self-motion cues. We investigated the microgenesis of spatial representations during free ambulatory exploration of interactive, sparse virtual environments, using motion capture synchronized to high-resolution electroencephalographic (EEG) data as well as psychometric and self-report measures. In such environments, map-like allocentric representations must be constructed from transient, egocentric, first-person 3-D spatial information.

Considering individual differences in spatial learning ability, we asked whether changes in exploration behavior coincide with spatial learning of an environment. To this end, we analyzed the quality of sketch maps (a measure of spatial learning) that participants produced after repeated learning trials in maze environments of varying complexity.

We observed significant changes in active exploration behavior from the first to the last exploration of a maze: a decrease in time spent in the maze predicted an increase in subsequent sketch map quality. Furthermore, individual differences in spatial abilities, as well as differences in the level of experienced immersion, affected the quality of spatial learning.
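
For illustration, a relationship of this kind can be tested with a random-intercept regression across participants. Below is a minimal sketch in Python using statsmodels; the data file and all column names (subject, exploration_time, map_quality) are hypothetical placeholders, not the study's actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per participant and maze exploration,
# with time spent in the maze and the rated quality of the subsequent sketch map.
df = pd.read_csv("exploration_behavior.csv")  # columns: subject, exploration_time, map_quality

# Random-intercept model: does time spent exploring predict sketch map quality,
# accounting for repeated measures per participant?
model = smf.mixedlm("map_quality ~ exploration_time", df, groups=df["subject"])
result = model.fit()
print(result.summary())  # a negative slope would mirror the effect reported above
```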

Our results provide converging evidence of observable behavioral changes associated with spatial learning, within a framework that allows studying the cortical dynamics of navigation.

Project on ResearchGate

Publications:

Gehrke L., Iversen J.R., Makeig S., Gramann K. (2018) The Invisible Maze Task (IMT): Interactive Exploration of Sparse Virtual Environments to Investigate Action-Driven Formation of Spatial Representations. In: Creem-Regehr S., Schöning J., Klippel A. (eds) Spatial Cognition XI. Spatial Cognition 2018. Lecture Notes in Computer Science, vol 11034. Springer, Cham

Paper | Slides | VR Simulation

Gehrke, L., Gramann, K. (2017). “Mobile Brain/Body Imaging (MoBI) of Spatial Knowledge Acquisition during Unconstrained Exploration in VR”. In Proceedings of the First Biannual Neuroadaptive Technology Conference (pp. 22–23). Berlin, Germany. 

Talk Abstract

In the media

The invisible maze task was featured on RBB Television, starring DAAD RISE guest Luke Guerdan.
Together with 6sept13, I produced a promotional film about the Berlin Mobile Brain/Body Imaging Lab.

detecting prediction errors during haptic immersion

Designing immersion is the key challenge in virtual reality; this challenge has driven advancements in displays, rendering, and, more recently, haptics. To increase our sense of physical immersion, for instance, vibrotactile gloves render the sense of touch, while electrical muscle stimulation (EMS) renders forces. Unfortunately, the established metric for assessing the effectiveness of haptic devices relies on the user's subjective interpretation of unspecific, yet standardized, questions.

We explore a new approach to detecting conflicts in visuo-haptic integration (e.g., inadequate haptic feedback caused by poorly configured collision detection) using electroencephalography (EEG). We propose analyzing event-related potentials (ERPs) during interaction with virtual objects. In our study, participants touched virtual objects in three conditions, receiving either no haptic feedback, vibration, or vibration plus EMS feedback. To provoke a brain response to unrealistic VR interaction, we also presented the feedback prematurely in 25% of the trials.
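
As an illustration of this analysis idea, here is a minimal sketch of the ERP contrast in MNE-Python. It assumes a preprocessed recording annotated with 'match'/'mismatch' markers at feedback onset; the file name, marker names, and channel pick are placeholders, not the study's actual pipeline.

```python
import mne

# Placeholder file; assumes annotations mark feedback onset per condition.
raw = mne.io.read_raw_fif("vr_touch_raw.fif", preload=True)
raw.filter(0.1, 15.0)  # band-pass typical for ERP analyses

events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(-0.2, 0.0), preload=True)

# Contrast the feedback conditions: a more negative early deflection in
# mismatch trials would reflect the prediction error negativity.
evokeds = {"match": epochs["match"].average(),
           "mismatch": epochs["mismatch"].average()}
mne.viz.plot_compare_evokeds(evokeds, picks="FCz")  # frontocentral site, assumed present
```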

We found that the early negativity component of the ERP (the so-called prediction error negativity) was more pronounced in the mismatch trials, indicating that we successfully detected haptic conflicts with our technique. Our results are a first step towards using ERPs to automatically detect visuo-haptic mismatches in VR, such as those that can cause a loss of the user's immersion.

Publications:

Lukas Gehrke, Sezen Akman, Pedro Lopes, Albert Chen, Avinash Kumar Singh, Hsiang-Ting Chen, Chin-Teng Lin, and Klaus Gramann. 2019. Detecting Visuo-Haptic Mismatches in Virtual Reality using the Prediction Error Negativity of Event-Related Brain Potentials. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, Paper 427, 11 pages. DOI: https://doi.org/10.1145/3290605.3300657

Paper | Slides | Data/Code

Together with Sezen Akman, who wrote her excellent master's thesis within this project, we won the Best Project Award at "VDI Mensch und Technik 2019", endowed with €3,000.

Poster

In the media

mobile brain/body imaging methods

Mobile brain/body imaging (MoBI) is an integrative multimethod approach used to investigate human brain activity, motor behavior, and other physiological data associated with cognitive processes that involve active behavior.
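
In practice, such multimodal recordings are commonly synchronized via Lab Streaming Layer (LSL), which timestamps all streams against a shared clock. The following is a minimal sketch in Python, assuming an EEG and a motion capture stream are available on the network; the stream type names are assumptions about the recording setup.

```python
from pylsl import StreamInlet, resolve_byprop

# Resolve one EEG and one motion capture stream on the local network.
eeg_inlet = StreamInlet(resolve_byprop("type", "EEG")[0])
mocap_inlet = StreamInlet(resolve_byprop("type", "MoCap")[0])

# LSL timestamps all streams against a shared clock, so samples from the
# two modalities can be aligned offline with millisecond precision.
eeg_sample, eeg_ts = eeg_inlet.pull_sample()
mocap_sample, mocap_ts = mocap_inlet.pull_sample()
print(f"EEG at {eeg_ts:.4f} s, motion capture at {mocap_ts:.4f} s")
```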

The focus is on electroencephalography as the only portable method to image the human brain with sufficient temporal resolution to investigate fine-grained subsecond-scale cognitive processes.

Controlled and modifiable experimental environments can be used to investigate natural cognition and active behavior in a wide range of applications in neuroergonomics and beyond.

Project on ResearchGate

Publications:

L. Gehrke*, L. Guerdan*, K. Gramann, "Extracting Motion-Related Subspaces from EEG in Mobile Brain/Body Imaging Studies using Source Power Comodulation", 9th International IEEE/EMBS Conference on Neural Engineering (NER), San Francisco, CA, USA, 2019, pp. 344-347.

Paper
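
MNE-Python ships an implementation of SPoC; the sketch below illustrates the general idea behind the paper, learning spatial filters whose band power tracks a continuous motion target. All inputs are random placeholders, not the study's data or code.

```python
import numpy as np
from mne.decoding import SPoC

# Placeholder inputs: band-pass filtered EEG epochs and a continuous per-epoch
# motion target, e.g. mean movement velocity derived from motion capture.
X = np.random.randn(200, 64, 500)  # (n_epochs, n_channels, n_times)
y = np.random.randn(200)           # continuous motion target

# SPoC learns spatial filters whose band power comodulates with the target y.
spoc = SPoC(n_components=4, reg="oas")
spoc.fit(X, y)
power = spoc.transform(X)          # (n_epochs, n_components) average power
```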

E. Jungnickel, L. Gehrke, M. Klug, K. Gramann, "MoBI – Mobile Brain/Body Imaging". In: Neuroergonomics, Academic Press, pp. 59–63, 2019

Book Chapter

M. Klug, L. Gehrke, F. U. Hohlefeld, K. Gramann, "The BeMoBIL Pipeline – Facilitating Mobile Brain/Body Imaging (MoBI) Data Analysis in MATLAB", in Proceedings of the 3rd International Conference on Mobile Brain/Body Imaging, 2018. https://doi.org/10.14279/depositonce-7236

Poster | BeMoBIL Pipeline Code

R. Wenzel, L. Gehrke, K. Gramann, "Prototypical Design of a Solution to Combine Head-Mounted Virtual Reality and Electroencephalography", in Proceedings of the 3rd International Conference on Mobile Brain/Body Imaging, 2018. https://doi.org/10.14279/depositonce-7236

Poster | 3D CAD model

neurofeedback while drawing (hackathon)

Keeping users engaged in therapeutic and rehabilitative training is a challenge of sparking and maintaining motivation. We explored how electroencephalographic (EEG) signals may be used to engage patients and promote creative rehabilitation and therapeutic interventions. We introduce a proof of concept that measures EEG during therapeutic drawing to adapt an interactive canvas online.
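
As a rough sketch of how such a closed loop could look, the snippet below reads EEG from a hypothetical LSL stream, computes relative alpha power once per second, and maps it to a canvas parameter. Sampling rate, frequency band, and the brush mapping are all illustrative assumptions, not the hackathon's actual implementation.

```python
import numpy as np
from pylsl import StreamInlet, resolve_byprop
from scipy.signal import welch

FS = 250  # EEG sampling rate in Hz (assumed)
inlet = StreamInlet(resolve_byprop("type", "EEG")[0])

def relative_alpha(window):
    """Relative 8-12 Hz power of a (n_samples, n_channels) window."""
    freqs, psd = welch(window, fs=FS, nperseg=len(window), axis=0)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean() / psd.mean()

buffer = []
while True:
    sample, _ = inlet.pull_sample()
    buffer.append(sample)
    if len(buffer) >= FS:  # update the canvas once per second
        strength = relative_alpha(np.array(buffer))
        buffer.clear()
        # Hypothetical mapping: more relaxed (higher alpha) -> softer strokes.
        print(f"brush softness <- {strength:.2f}")
```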

Publication:

Scott, S., Gehrke, L., "Neurofeedback during Creative Expression as a Therapeutic Tool". In: Jose L. Contreras-Vidal et al. (Eds.), Mobile Brain-Body Imaging and the Neuroscience of Art, Innovation and Creativity, Springer Series on Bio- and Neurosystems, Vol. 10. ISBN 978-3-030-24325-8

Paper | PDF

spot rotation (M.Sc.)

The retrosplenial complex (RSC) plays a crucial role in spatial orientation by computing heading direction and translating between distinct spatial reference frames. While invasive studies allow investigating heading computation in moving animals, established non-invasive analyses of human brain dynamics are restricted to stationary setups. To investigate the role of the RSC in heading computation of actively moving humans, we used a Mobile Brain/Body Imaging approach synchronizing electroencephalography with motion capture and virtual reality.

Data from physically rotating participants were contrasted with data from rotations based only on visual flow. Physical rotation at varying velocities was accompanied by pronounced beta synchronization. In addition, heading computation based only on visual flow replicated the alpha desynchronization known from stationary setups, which was absent during physical rotation.
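
Spectral contrasts of this kind are typically computed as event-related spectral perturbations. Here is a minimal sketch using Morlet wavelets in MNE-Python; the epoch files, frequency range, and baseline window are hypothetical placeholders, not the study's exact analysis.

```python
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Hypothetical epoch files, time-locked to rotation onset for each condition.
conditions = {"physical": mne.read_epochs("physical_rotation-epo.fif"),
              "visual flow": mne.read_epochs("visual_flow-epo.fif")}

freqs = np.arange(4, 31)  # theta through beta, in Hz
n_cycles = freqs / 2.0    # wavelet width scales with frequency

for name, epochs in conditions.items():
    power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False)
    # Express power change against a pre-rotation baseline: positive values
    # indicate synchronization, negative values desynchronization.
    power.apply_baseline(baseline=(-0.5, 0.0), mode="percent")
    print(name, power.data.shape)  # (n_channels, n_freqs, n_times)
```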

These results suggest an involvement of the human RSC in heading computation based on vestibular input, and they motivate revisiting traditional findings of alpha desynchronization during spatial orientation obtained from movement-restricted participants.

Publications:

K. Gramann, F. U. Hohlefeld, L. Gehrke, M. Klug. 2018. “Heading computation in the human retrosplenial complex during full-body rotation”. bioRxiv (2018). https://doi.org/10.1101/417972

Preprint

L. Gehrke, K. Gramann, “Human Retrosplenial Activity during Physical and Virtual Heading Changes revealed by Mobile Brain/Body Imaging (MoBI)”, presented at yourbrainonart 2016, Cancun, Mexico.