Yevhen Palazhchenko 

Sumy State University, UKRAINE
Sep 2024 - Sep 2024
Twin Fellow

Projects & Publications

Abstract

Virtual reality (VR) is increasingly used to create safe, cost-effective, and engaging learning environments.

It is commonly assumed that learning outcomes improve as simulation fidelity increases. Some aspects of real environments, however, are very hard to simulate: the feel (haptics) of objects, smell, or the perception of self-motion are prime examples.

Recent work in my lab has shown that these signals can be replaced by information-bearing signals in other (easier-to-simulate) modalities—an object may change color or sound to represent weight or motion.

We have shown that introducing these additional signals enhances learning outcomes, user experience, and, perhaps counterintuitively, learning transfer.

In the proposed project, I will build on these findings to develop a learning-theoretical explanation for the observed effect, drawing on existing theories and literature.

While any signal can be used as a substitute for missing signals or to further augment the simulation, existing research suggests that some signals are more suitable than others. I will exploit existing research on cross-modal correspondences and natural scene statistics to develop cues that optimally enhance learning outcomes for specific tasks, signal types, and modalities.

Cross-modal correspondences are culture-independent and intuitive links between signals in two different modalities: we expect large objects to make lower frequency sounds than small ones.
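As an illustrative sketch only (the function name, parameter ranges, and mapping shape are hypothetical, not taken from the project), such a weight-to-pitch correspondence could be expressed as a monotonic mapping in which heavier objects produce lower-frequency cues:

```python
import math

def weight_to_pitch_hz(mass_kg, mass_range=(0.1, 50.0), pitch_range=(110.0, 880.0)):
    """Map an object's mass to a substitute auditory cue frequency.

    Hypothetical example of a cross-modal cue: heavier objects are
    rendered with lower pitches, mirroring the intuitive size/weight
    vs. frequency correspondence. Mass is interpolated on a log scale.
    """
    m_lo, m_hi = mass_range
    f_lo, f_hi = pitch_range
    # Clamp mass to the supported range.
    m = min(max(mass_kg, m_lo), m_hi)
    # Normalized position of the mass on a logarithmic scale (0..1).
    t = (math.log(m) - math.log(m_lo)) / (math.log(m_hi) - math.log(m_lo))
    # Heavier -> lower frequency: interpolate from f_hi down to f_lo.
    return f_hi * (f_lo / f_hi) ** t
```

A VR simulation could call such a function whenever the user grasps an object, playing the returned frequency as a tone in place of the missing haptic weight signal.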

Cooperation partner
Dr. Christoph Zetzsche, Universität Bremen