C.A.R.VR – Contextual Activity Recognition in VR

Activity recognition in VR is comparatively easier than in the real world because of the limited/restricted locomotion space, the explicit action commands the user gives to perform a task, and the available contextual information. From a physiological point of view, however, the picture changes: in VR a user makes their avatar jump with just a trigger press and run by swaying the controllers or using touchpad interactions, whereas in the real world they would actually have to walk or run to perform those actions. I feel that HRV, GSR, and to some extent EEG would differ noticeably depending on the activity, and consequently so would the emotional states. The importance of contextual information in emotion recognition has been explored for text, speech, facial expressions, images, and videos using machine/deep learning algorithms (https://link.springer.com/content/pdf/10.1007/s12652-019-01485-x.pdf), but no one has explored this (along with activity correlation) in VR, even though the information is readily available, especially from physiological signals.
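As a rough illustration of this idea (not the project's code), one could fuse windowed physiological features with the recognized VR activity as a contextual signal and train a standard classifier on the combined vector. Everything in the sketch below is an assumption: the feature names, activity labels, emotion labels, numbers, and the choice of logistic regression are placeholders.

```python
# Hedged sketch: combine physiological features (HRV/GSR statistics) with the
# recognized VR activity (contextual information) for emotion classification.
# All names and values are made-up placeholders, not project data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_hot(labels, vocab):
    """Encode activity labels as one-hot context vectors."""
    index = {a: i for i, a in enumerate(vocab)}
    out = np.zeros((len(labels), len(vocab)))
    for row, lab in enumerate(labels):
        out[row, index[lab]] = 1.0
    return out

# Windowed physiological features: [mean_HRV_ms, mean_GSR_uS, std_GSR_uS]
X_phys = np.array([[55.0, 0.8, 0.1],
                   [40.0, 2.3, 0.6]])
activities = ["idle", "run"]              # activity recognized in VR (context)
X = np.hstack([X_phys, one_hot(activities, ["idle", "jump", "run"])])
y = ["calm", "excited"]                   # placeholder emotion labels
clf = LogisticRegression().fit(X, y)      # context-aware emotion classifier
```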

I collected human movement data with a Kinect device and trained an AI model for activity recognition on the Unity platform. Through the experiment, I found that certain movements were hard to distinguish from similar ones. Therefore, I designed a two-stage classifier that significantly improved the recognition rate: the movements are first classified into a group of similar movements and then refined to a specific prediction (see the sketch below).
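A minimal sketch of the two-stage idea, assuming scikit-learn and a hand-made mapping from fine-grained movement labels to coarse groups; the class name, the mapping, and the choice of random forests are my own assumptions, not the project's implementation.

```python
# Two-stage activity classifier sketch: stage 1 assigns a sample to a coarse
# group of similar movements, stage 2 refines it to a specific movement.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class TwoStageClassifier:
    def __init__(self, group_of):
        self.group_of = group_of          # fine-grained label -> coarse group
        self.stage1 = RandomForestClassifier(n_estimators=100)
        self.stage2 = {}                  # one refining classifier per group

    def fit(self, X, y):
        y = np.asarray(y)
        groups = np.array([self.group_of[label] for label in y])
        self.stage1.fit(X, groups)        # stage 1: group of similar movements
        for g in np.unique(groups):
            mask = groups == g
            clf = RandomForestClassifier(n_estimators=100)
            clf.fit(X[mask], y[mask])     # stage 2: refine within the group
            self.stage2[g] = clf
        return self

    def predict(self, X):
        X = np.asarray(X)
        coarse = self.stage1.predict(X)
        preds = np.empty(len(X), dtype=object)
        for g in np.unique(coarse):
            mask = coarse == g
            preds[mask] = self.stage2[g].predict(X[mask])
        return preds
```

With such a wrapper, `TwoStageClassifier(group_of).fit(X_train, y_train).predict(X_test)` would first place a skeleton-feature window into a coarse group and then refine it to the specific movement.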

[Figure: detection examples]
The main details will be shared after the paper review.