[Sep 7, 2016] Notifications have been sent out. Thank you to everyone who contributed and submitted their work! And congratulations to those whose submissions were accepted!
Virtual reality (VR) and augmented reality (AR) are currently two of the "hottest" topics in the IT industry. Many consider them to be the next wave in computing, with an impact comparable to the shift from desktop systems to mobile and wearable devices. This hype is rooted in technological improvements (such as mass-produced high-resolution mobile displays), which have resulted in affordable, high-performance devices for the consumer market. Examples include Facebook's Oculus Rift and Microsoft's HoloLens for VR and AR, respectively.
Despite this progress, we are still far from the ultimate goal of creating new virtual environments, or augmentations of existing ones, that feel and react like their real counterparts. Many challenges and open research questions remain, mostly in the areas of multimodality and interaction. For example, current setups predominantly address the visual and auditory senses, neglecting other modalities such as touch and smell that are an integral part of how we experience the real world around us. Likewise, it is still an open question how best to interact and communicate with a virtual world or with virtual objects in AR.
Multimodal interaction offers great potential not only to make AR and VR experiences more realistic, but also to provide more powerful and efficient means of interacting with virtual and augmented worlds. This workshop aims to explore these opportunities and thus invites contributions on all work related to interaction or multimodality in the context of VR and AR computing; for details, see the Call for Submissions.
July 29, 2016: Submission deadline (passed)
September 7, 2016: Acceptance notifications
October 5, 2016: Camera ready versions due
(Note: this date is pending final confirmation by the publication chairs)