Background
This is an ongoing project that has been in the works since the summer of 2018. It started as part of my internship under the guidance of Prof. Jayesh Pillai, in collaboration with my colleague and filmmaker, Amal Dev. We were initially exploring the idea of dynamically altering the visuals of a VR film in areas beyond the viewer's field of view. This exploration led to the current framework, Cinévoqué, a portmanteau of Cinema and Evoke that reflects the framework's ability to evoke a narrative corresponding to the viewer's gaze behavior over the course of the movie.
Introduction
Virtual Reality as a medium of storytelling is less mature than storytelling in traditional films. In VR, the viewer controls the framing and may not follow all the points of interest the storyteller intends. As a result, the filmmaking and storytelling practices of traditional films do not translate directly. Researchers and filmmakers have studied how people watch VR films and have proposed guidelines for nudging the viewer to follow along with the story. However, the viewer remains in control and can consciously choose to resist such nudges. Accounting for this, and taking advantage of the affordances of VR, Cinévoqué alters the narrative shown to viewers based on the events of the movie they have followed or missed. It also estimates their interest in particular events by measuring the time they spend gazing at them, and shows them an appropriate storyline. Consequently, unlike existing interactive VR films, the experience never has to be interrupted for the viewer to make conscious choices about how the storyline branches.
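To make the idea concrete, here is a minimal sketch, not the actual Cinévoqué implementation, of how gaze dwell time might be tracked and used to pick a branch passively. It assumes points of interest are described by directions on the 360° sphere; the class, the field names, and the 30° gaze cone are illustrative placeholders.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch only: accumulate how long the viewer's gaze stays within an angular
// window around each point of interest, then pick the storyline whose event
// received the most attention when a branch point is reached.
public class GazeDwellTracker : MonoBehaviour
{
    [System.Serializable]
    public class PointOfInterest
    {
        public string storylineId;              // hypothetical branch identifier
        public Vector3 direction;               // where the event appears on the 360° sphere
        [HideInInspector] public float dwellTime;
    }

    public List<PointOfInterest> pointsOfInterest = new List<PointOfInterest>();
    public Transform viewerCamera;              // HMD camera transform
    public float gazeConeAngle = 30f;           // assumed half-angle for "looking at it"

    void Update()
    {
        // Add this frame's duration to every point of interest inside the gaze cone.
        foreach (var poi in pointsOfInterest)
        {
            if (Vector3.Angle(viewerCamera.forward, poi.direction) <= gazeConeAngle)
                poi.dwellTime += Time.deltaTime;
        }
    }

    // Called at a branch point in the film to choose the next storyline passively.
    public string SelectStoryline()
    {
        PointOfInterest best = null;
        foreach (var poi in pointsOfInterest)
            if (best == null || poi.dwellTime > best.dwellTime)
                best = poi;
        return best != null ? best.storylineId : null;
    }
}
```

Because this data is gathered passively while the film plays, the branch can be resolved without ever pausing the experience.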
This project is being built as a Unity 3D plugin that filmmakers can use to create responsive live-action immersive films. We chose to focus on live-action film over real-time 3D movies because passively responsive narratives have already been explored in the context of games and interactive experiences. The technical implementation is also more novel for live-action films, since pre-recorded video cannot be changed dynamically the way real-time rendered scenes can.
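As an illustration of what that constraint means in practice, the sketch below shows one common way to hand playback over between pre-rendered clips at a branch point using Unity's VideoPlayer: the upcoming clip is prepared on a standby player ahead of time so the switch happens without a visible pause. The two-player setup, the class name, and the assumption that both players render to the same 360° target are illustrative choices, not the framework's actual implementation.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch only: two VideoPlayers assumed to render to the same 360° render
// target. The standby player pre-buffers the branch chosen from gaze data so
// playback can continue seamlessly when the current clip ends.
public class BranchingVideoPlayer : MonoBehaviour
{
    public VideoPlayer activePlayer;
    public VideoPlayer standbyPlayer;

    // Called once the upcoming branch is known, well before the current clip ends.
    public void PreloadBranch(VideoClip nextClip)
    {
        standbyPlayer.clip = nextClip;
        standbyPlayer.Prepare();        // pre-buffer so the switch has no loading pause
    }

    void OnEnable()
    {
        activePlayer.loopPointReached += OnClipFinished;
    }

    void OnDisable()
    {
        activePlayer.loopPointReached -= OnClipFinished;
    }

    void OnClipFinished(VideoPlayer finished)
    {
        if (!standbyPlayer.isPrepared) return;   // not ready; a real system would wait for prepareCompleted

        // Hand playback over to the prepared player.
        finished.loopPointReached -= OnClipFinished;
        standbyPlayer.loopPointReached += OnClipFinished;
        standbyPlayer.Play();
        finished.Stop();

        // Swap roles so the next branch can be preloaded on the now-idle player.
        var previouslyActive = activePlayer;
        activePlayer = standbyPlayer;
        standbyPlayer = previouslyActive;
    }
}
```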
Using a game engine such as Unity to power live-action cinematic VR also enables features that were not implementable before. For example, we can add a virtual body that orients itself to the viewer's physical body using rotational data from 6DOF controllers, which allows the viewer to be a more integrated character in the narrative than before.
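A minimal sketch of that idea, assuming the head and hand transforms are driven by a standard Unity XR rig, could estimate the torso's facing direction from the head and controller poses and smoothly rotate a body mesh to match; the blending scheme and smoothing value below are arbitrary placeholders rather than the plugin's actual behaviour.

```csharp
using UnityEngine;

// Sketch only: orient a virtual body to the viewer's physical body by blending
// the tracked head direction with a direction derived from the two controllers.
public class VirtualBodyOrienter : MonoBehaviour
{
    public Transform head;          // HMD transform from the XR rig
    public Transform leftHand;      // left 6DOF controller
    public Transform rightHand;     // right 6DOF controller
    public Transform virtualBody;   // body mesh shown in the film
    public float smoothing = 5f;    // arbitrary smoothing factor

    void LateUpdate()
    {
        // Simple torso estimate: blend the head's forward direction with the
        // direction perpendicular to the line between the two hands.
        Vector3 acrossHands = rightHand.position - leftHand.position;
        Vector3 handsForward = Vector3.Cross(acrossHands, Vector3.up).normalized;
        Vector3 blended = (head.forward + handsForward).normalized;

        // Keep only the yaw component so the body stays upright.
        blended.y = 0f;
        if (blended.sqrMagnitude < 0.001f) return;

        Quaternion target = Quaternion.LookRotation(blended.normalized, Vector3.up);
        virtualBody.rotation = Quaternion.Slerp(
            virtualBody.rotation, target, smoothing * Time.deltaTime);

        // Anchor the body just below the head so it follows the viewer's position.
        virtualBody.position = head.position + Vector3.down * 0.3f;
    }
}
```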
To learn more about the design and implementation of the framework, please refer to my publications. We have had the opportunity to present our work at reputed international conferences such as VRST, VRCAI, and INTERACT, and we have been invited speakers at national and international events such as the SIGCHI Asian Symposium, UNITE India, and IndiaHCI.