Cognitive Implications of Widespread VR (2/3)
Second of a three-part series based on the talk Cognitive Implications of VR, given at Vision Summit in February 2016. The first article can be found here.
Previous + current interesting studies
These are apps and studies we came across during our own research and work. We want to share them with the community to expose you to some interesting aspects of perception and cognition in VR.
Cybersickness, also called simulator sickness or virtual reality sickness, is something most of you are familiar with. It is “distinct from motion sickness in that the subject is stationary, but has a compelling sense of motion induced through exposure to changing visual imagery. Symptoms of simulator sickness are similar to those commonly experienced by subjects reporting motion sickness” (Arns and Cerney, 2005). Interestingly, while motion sickness is worse in young people, some studies have found the opposite for cybersickness: the younger you are, the less prone you are to simulator sickness (Arns and Cerney, 2005; Sheridan, 1992).
As you probably know, cybersickness is very common. For example, a 2001 NASA technical report compared the symptom profiles and total sickness scores of eight virtual environments against a large database from military flight simulators. On a scale where scores above 20 indicate severe sickness symptoms (“a problem simulator”), the report found the average score for VR experiences was about 30 (with a range of 19-55). Another study, from University of Queensland researchers in 2015, found a 28% incidence of motion sickness during immersion in a VR environment for neck rehabilitation.
Ways to reduce simulator sickness have been researched for decades and are still being studied. Hardware manufacturers are going with OLED (Organic Light Emitting Diode) displays with low frame persistence, which means less motion blur and ghosting. On the software side, there are a couple of approaches developers take. One is implementing smooth locomotion that avoids unrealistic movement like camera rolling, which humans don’t naturally do, and avoids aggressive changes in velocity or acceleration. The other, especially in applications that require aggressive movement like a roller coaster, is to provide fixed objects as visual references, like a big cockpit for flight simulators. The image below shows an example of the latter.
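The smooth-locomotion idea can be sketched in an engine-agnostic way: each frame, move the player’s velocity toward the input’s target velocity, but cap how fast the velocity is allowed to change. This is a minimal illustrative sketch, not code from any particular engine; the function name and the comfort limit are our own.

```python
def step_velocity(current: float, target: float,
                  max_accel: float, dt: float) -> float:
    """Move `current` velocity toward `target`, never changing it
    faster than `max_accel` units/s^2, to avoid the sudden jolts
    that tend to trigger simulator sickness."""
    delta = target - current
    max_delta = max_accel * dt          # largest change allowed this frame
    if delta > max_delta:
        delta = max_delta
    elif delta < -max_delta:
        delta = -max_delta
    return current + delta

# Player slams the stick from rest to full speed (5 m/s);
# with max_accel = 2 m/s^2 the velocity ramps up gradually
# instead of jumping, reaching only 2 m/s after one second.
v = 0.0
for _ in range(10):
    v = step_velocity(v, 5.0, max_accel=2.0, dt=0.1)
```

The same clamp applies symmetrically when the player releases the stick, so stops are gentle too.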
Dr. David Whittinghill, a professor at Purdue University and Dio’s former classmate, found interesting results when rendering a nose in the virtual world. The team found that people immersed in a simulation of a villa in Tuscany lasted an average of 94.2 seconds longer before feeling sick, while those in a roller coaster game played an average of 2.2 seconds longer.
Very interestingly, Oculus has researchers on staff who have re-run some of the experiments from decades ago, and found that simulator sickness today is not as big an issue as in the past, thanks to considerable improvements in aspects like tracking, fidelity and latency.
Perhaps one of the most overused words in VR, “presence” has been widely accepted as a key concept since it entered the scientific debate in 1992, when the term appeared in the title of a newly created journal: Presence: Teleoperators and Virtual Environments (Coelho et al, 2006).
A 2012 study on smoking craving during virtual reality exposure showcases the importance of presence. Forty-six smokers were exposed to seven complex virtual environments that reproduce typical situations in which people smoke: during breakfast, in a restaurant, at a bus station and in a pub. Researchers tracked three types of variables for predicting smoking craving: those related to nicotine dependence, those related to anxiety and impulsivity, and those related to the sense of presence in the virtual environments. Sense of presence was the only (!) predictor of self-reported craving across all the experimental virtual environments. This finding is significant for cue exposure treatment (CET), which consists of controlled and repeated exposure to drug-related stimuli in order to reduce cue-reactivity.
Realistic rendering and stereoscopy are not the only factors that increase the sense of presence. VR researchers and developers leverage the principles we mentioned in our previous article about how the brain perceives time, space and the sense of self. Experiments similar to the classic rubber hand illusion (RHI) have been done in VR, showing that we can feel the same sensations toward a virtual body as toward our biological body: Petkova & Ehrsson, 2008, had users swapping bodies; Normand et al, 2011, created the illusion of users’ bellies growing; Slater et al, 2010, used first-person perspective to generate a full body transfer illusion. These and more studies are discussed by researchers from Universitat de Barcelona in a paper on the Sense of Embodiment in Virtual Reality. As discussed in a 2015 article in Frontiers in Psychology, studies indicate that presence is correlated with emotion: experiences in virtual environments that evoke strong emotions result in a higher sense of presence. Another interesting finding is that high immersion (i.e. high technical quality) only results in high presence when no emotions are involved. These results are already being applied in using VR to treat mental disorders such as phobias and anxiety.
The Vision Summit 2016 in Los Angeles, California included an interesting talk about emotion and presence in VR: the making of Sisters and Café Ame, two apps for Gear VR by Otherworld Interactive.
Also at Vision Summit 2016, a talk from scientists at the Institute for Creative Technologies discussed Immersive Clinical Care with VR. They have done amazing work on treating PTSD and other disorders. Another place doing virtual reality exposure therapy is The Virtual Reality Medical Center, where treatment and training programs have been running for over 10 years.
For a study at the University of Texas at San Antonio, researchers implemented a virtual reality soccer game for training young adults with autism on hand-eye coordination. They found participants actually did better when they were able to customize the virtual humans in the game, which again supports the point that sense of presence and immersion are affected by many factors other than realistic graphics. This interesting project was featured on the Voices of VR podcast.
Alzheimer’s disease is another condition where virtual reality shows great potential for assessment, training and treatment. A 2015 article in Frontiers in Aging Neuroscience summarizes significant examples of virtual environments for diagnostic assessment and cognitive training in mild cognitive impairment and Alzheimer’s disease, concluding that high levels of immersion and interaction are yet to be seen in these apps.
“Tell me and I forget; teach me and I may remember; involve me and I will learn.”
Curiously, this quote and its variations have been attributed to Benjamin Franklin, Confucius and Xunzi. It apparently evolved around the ‘60s and is related to learning by doing. Whoever the author is, the idea resurfaces with virtual reality because of its power to put students into an environment and have them learn actively, as opposed to typical lecture scenarios.
Vision Summit had a great talk on VR in the Classroom. Another example is Immersive VR Education, a company that creates educational experiences in VR. Their Apollo 11 experience is quite beautiful and has won several awards. They are also working on medical training, where medical students could go into a virtual emergency room and practice their skills.
One thing that is exciting about companies and projects like these is how VR technology is becoming more widespread and affordable. Over 10 years ago, when I (Dio) was a graduate student, I saw many research centers and universities across the country implementing VR training apps: for hospitals, for factories where workers learn to operate machinery, and even an interactive reproduction of Radharaman, a Hindu temple dedicated to the worship of the god Krishna, built by VR pioneer Dr. Carolina Cruz-Neira when she was at Iowa State University. But until recent years, VR technology was quite expensive, the hardware was cumbersome, and it was limited to academic centers with big budgets. Now the hardware is increasingly affordable, so we see the emergence of companies such as Immersive VR Education. It will be great when even high school teachers can assign VR homework to kids.
Another quite interesting recent example is joint work by scientists from Yale University and the University of Texas at Dallas that uses virtual reality technologies for teaching cognitive skills to young adults with autism. The objective is to enhance social skills, social cognition and social functioning. “One or more virtual characters join in as the therapist presents the day’s situation. It may be a job interview, a new neighbor or a blind date. The counselor also describes the social skills they’ll be practicing. The task may involve recognizing the unspoken intentions behind a behavior or sharing an opinion in a socially acceptable way.” Their results are fascinating. Brain scans from before and after treatment show that after the VR experience, young adults with autism had increased activation in brain regions associated with social understanding. A second set of brain scans comparing pre- and post-treatment shows increased connectivity between brain regions that exchange information during effective social interactions. Parts of the brain became more active after virtual reality sessions! These young adults are actually learning new cognitive skills.
We have to mention Cerevrum, a game for Samsung Gear VR aimed at improving the whole spectrum of cognitive abilities: “memory, perceptual speed, multitasking, executive function, and attention.” They use machine learning to track the user’s specific strengths and weaknesses and adapt to them. It will be interesting to see how they do, and to see them expand to VR systems with positional tracking. Cerevrum looks gorgeous, and it is exciting to see VR technology in the hands of artists making beautiful experiences.
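Cerevrum’s actual adaptation algorithm isn’t public, but the general idea of tracking performance and hovering near the edge of a player’s ability has a classic, much simpler ancestor in psychophysics: the staircase procedure. A toy sketch of one such rule (not Cerevrum’s system; the class and level range are our own invention):

```python
class Staircase:
    """Simple adaptive difficulty: raise the level after two
    consecutive correct answers, lower it after any miss, so the
    task stays near the player's ability threshold."""
    def __init__(self, level: int = 1, lo: int = 1, hi: int = 10):
        self.level, self.lo, self.hi = level, lo, hi
        self.streak = 0                 # consecutive correct answers

    def record(self, correct: bool) -> int:
        if correct:
            self.streak += 1
            if self.streak == 2:        # two in a row: make it harder
                self.level = min(self.hi, self.level + 1)
                self.streak = 0
        else:                           # any miss: make it easier
            self.level = max(self.lo, self.level - 1)
            self.streak = 0
        return self.level

s = Staircase()
s.record(True)          # level stays at 1
level = s.record(True)  # second correct in a row: level rises to 2
```

A machine-learning system can refine this per skill and per user, but the feedback loop is the same: observe performance, nudge difficulty toward the threshold.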
Back in 2007, I (Dio) was a games programming professor. Second Life and 3D worlds in general were at their peak popularity. I used Second Life in a class whose main purpose was to study the relationship between digital entertainment systems and cultural aspects of societies in which they exist.
This image shows my class attending a machinima festival that was going on in New York, with sessions where filmmakers spoke about their creations, plus screenings of movies. We were all in Singapore, and it was great that we could actually attend the event and interact with other attendees without getting on a plane.
This is from a lecture I gave inside the virtual world, focused on ethical issues in games development. Part of the lecture involved traveling to different locations: from a first-person shooter game, to play and discuss violence in games, to a location labeled “safe” for children, where there were games and weapons were disabled by script. It provided more of a practical experience.
And this is one of my students’ projects. One of their assignments was to design and implement a game, covering not only the mechanics but also the actual business model, so they could learn brainstorming and economic analysis tools like SCAMPER and SWOT. They implemented games using Second Life’s scripting tools and even ran trials using the in-world currency.
Overall, using virtual worlds proved useful and very promising for engaging students and making learning experiences more fun. But of course, this was not VR. For those of you who used Second Life and other 3D worlds in those days, the interaction was all through mouse and keyboard input. You were in front of your flat screen, pushing buttons and watching your avatar do crazy dance moves. Now, imagine how powerful this can be with virtual reality technology.
In April 2016, the makers of Second Life announced that applications are open for a Creator Preview of their new VR creation platform, Project Sansar. It is being developed with Oculus Rift integration and is slated to launch this year.
Meanwhile, High Fidelity was created by Philip Rosedale, the founder of Second Life. High Fidelity integrates with HTC Vive, Oculus Rift and other VR systems. It looks gorgeous, too. Interesting fact: I visited their San Francisco office and learned that the company holds its weekly all-hands meeting inside the world. Very cool.
These and all the other examples of world recreations are the community’s realization of the dream of a Metaverse. There are many other virtual worlds, some specific to musicians, and there is even a recreation of Hogwarts for you Harry Potter fans. We’re seeing the intersection of immersive technologies with high-speed broadband for collaborative, multi-user virtual reality.
Imagine using virtual worlds like Sansar and High Fidelity in classroom settings and taking advantage of location based memory: reading a book in a place that looks like the book, meeting your instructor in a virtual world, or doing any other work activity in a different setting. Or the world can be one you build yourself using Unity *wink, wink*. This is why we are focused on creating tools for everyone to build VR experiences.
There are many interesting aspects of perception and cognition in VR that relate to health, learning, and experiences in virtual worlds. In this article we presented some studies and experiences we came to know as part of our research. In the next one we will present some of our own early findings and anecdotes. Thanks for reading!
Timoni West & Dio Gonzalez work at Unity Labs; Dio is a VR principal engineer, and Timoni is a principal designer. As part of their work in VR, they’ve done some research into how the brain processes types of information, and how this can affect how the brain processes environments in VR.