Hi there! I'm Aayush 👋

  • 🌱 Currently learning: Distributed Systems, Python, Java, Open Source, CS Algorithms
  • 💬 Ping me about: open source, diversity & inclusion, mentorship
  • 🔥 Interests: Books 📚, Running 🏃, Travelling 🌎
  • ⚙️ Passionate about building accessible software and helping underrepresented folks get into tech

Featured Projects

  • Knowledge Infused AI and Inference

    January 2023 - Present

    Graduate Research Student

  • Center for Research in Emergent Manufacturing

    February 2024 - August 2024

    Graduate Research Assistant

  • Astek Diagnostics Inc

    December 2023 - January 2024

    Software Engineer

  • GSoC'23 @ Red Hen Lab

    May 2023 - September 2023

    Google Summer of Code 2023

  • Apple Inc

    December 2019 - August 2022

    Apple Inc Special Projects Group

  • Naresh IT

January 2022 - June 2022

    Full-Stack Java, Python Developer

  • Algoshelf

December 2018 - March 2019

SAX Library for Anomaly Detection


Background

This is an ongoing project that has been in the works since summer 2018. It started as part of my internship under the guidance of Prof. Jayesh Pillai, in collaboration with my colleague and filmmaker Amal Dev. We initially explored the idea of dynamically altering the visuals of a VR film in areas beyond the viewer's field of view. That exploration led to the current framework, Cinévoqué, whose name is a portmanteau of Cinema and Evoke, reflecting the framework's ability to evoke a narrative that corresponds to the viewer's gaze behavior over the course of the movie.

Introduction

Virtual Reality as a storytelling medium is relatively less developed than traditional film. Viewers in VR are empowered to change the framing, and they may not follow all the points of interest the storyteller intended, so the filmmaking and storytelling practices of traditional films do not translate directly. Researchers and filmmakers have studied how people watch VR films and have suggested guidelines to nudge viewers to follow along with the story. However, viewers remain in control, and they can consciously choose to rebel against such nudges. Accounting for this, and taking advantage of the affordances of VR, Cinévoqué alters the narrative shown to viewers based on the events of the movie they have followed or missed. Furthermore, it estimates their interest in particular events by measuring the time they spend gazing at them, and shows them an appropriate storyline. Consequently, unlike existing interactive VR films, the experience never has to be interrupted for the viewer to make conscious choices about storyline branching.
This project is being built as a plugin for Unity 3D that filmmakers can use to create responsive live-action immersive films. We chose to focus on live-action film over real-time 3D movies because passively responsive narratives have already been explored in the context of games and interactive experiences. Additionally, the technical implementation is more novel for live-action films, as the content (videos) cannot be changed dynamically the way real-time rendered scenes can. Using a game engine such as Unity to power live-action cinematic VR also brings features that weren't implementable before; for example, we can add a virtual body that orients to the viewer's physical body using rotational data from 6DOF controllers, allowing the viewer to be a more integrated character in the narrative.
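The gaze-driven branch selection described above can be sketched roughly as follows: accumulate how long the viewer dwells on each point of interest, then pick the storyline for the most-attended one. This is a minimal illustration, not Cinévoqué's actual API; all names and thresholds here are hypothetical.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: accumulate gaze dwell time per point of interest (POI)
// and choose the storyline branch for the POI the viewer attended to most,
// falling back to a default branch if nothing crossed a minimum dwell time.
struct GazeTracker {
    std::map<std::string, double> dwell;  // seconds spent gazing at each POI

    void addSample(const std::string& poi, double dt) { dwell[poi] += dt; }

    std::string chooseBranch(double minDwell) const {
        std::string best = "default";
        double bestTime = minDwell;
        for (const auto& [poi, t] : dwell)
            if (t > bestTime) { bestTime = t; best = poi; }
        return best;
    }
};
```

Because selection happens from passively collected gaze samples, the film never needs to pause for an explicit choice.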

To learn more about the design and implementation of the framework, please refer to my publications. We had the opportunity to present our work at reputable international conferences such as VRST, VRCAI, and INTERACT, and we have been invited speakers at national and international events such as the SIGCHI Asian Symposium, UNITE India, and IndiaHCI.

ScholAR

ScholAR is a research project exploring how AR-based educational content can be democratized and deployed in schools. The project is part of Pratiti Sarkar's Ph.D. and is funded by the Tata Center for Technology and Design. I have been developing the AR applications used in the experiments while also assisting in conducting them.

ScholAR for Classrooms

This part of the project focuses on scaffolding classroom sessions with AR-based content moderated by the teacher, aiming to provide a more engaging and interactive learning experience than solutions like smartboards. Over the last three years, we have focused on building and testing AR content for maths, specifically geometry. Our experiments explored learning efficacy, collaboration, and interactions in rural schools where classes were held using our applications. The apps were built on pedagogical models to help students better grasp the concepts. I was responsible for building the apps and further helped with content creation and experiments.

ScholAR for Remote Learning

Due to the COVID-19 pandemic, we started working on a remote learning solution that could carry over the work we had done for physical classrooms. I created a prototype of a virtual classroom in which the teacher controls a shared AR artefact and teaches concepts to students who are spatially present in the same virtual space. Visual cues and spatial audio were added to give a better sense of other users' relative positions. Depending on the active artefact, users can place a marker or draw on top of it, and these interactions are reflected for everyone in the session. We conducted preliminary tests with students from our department to better understand the opportunities and challenges that arise.
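The shared-artefact idea above amounts to keeping one authoritative list of interactions and replaying it to every participant. A minimal sketch of that pattern follows; the real prototype was built in Unity, and every name here is illustrative rather than the prototype's actual code.

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch: each interaction (e.g. a marker placed on the active
// artefact) is appended to the session state and pushed to every client's
// callback, so all participants see the same annotations. Late joiners are
// caught up by replaying the existing state.
struct Marker { double x, y, z; };

struct SharedSession {
    std::vector<Marker> markers;                           // authoritative state
    std::vector<std::function<void(const Marker&)>> clients;

    void join(std::function<void(const Marker&)> onMarker) {
        for (const auto& m : markers) onMarker(m);         // replay history
        clients.push_back(std::move(onMarker));
    }

    void placeMarker(const Marker& m) {
        markers.push_back(m);
        for (auto& c : clients) c(m);                      // broadcast to everyone
    }
};
```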

Apart from education, this project also surfaced challenges that extend beyond the educational use case of ScholAR, for example, finding the best avatar representation for mobile AR. With only the positional information of a handheld device, it may not be possible to create avatars with accuracy similar to those in VR or HMD-based setups. I have taken the lead on this work, in which we study avatars that vary in both visual and behavioural fidelity. The following image shows the avatar space in our study.
The following video demonstrates both direct and procedural networked avatars in our current prototype.


Graphics Programming

Raytracing

This is my first graphics project. Following the Ray Tracing in One Weekend ebook series, I implemented a raytracer from scratch in C++. The project served as a useful refresher for the maths used in graphics and contextualized the concepts, and it also helped me learn some new and advanced C++ features. The first book covers implementing vector math functions, rays, ray-sphere interactions, shading, aliasing, a positionable camera, and lens blur. The following images show scenes that exercise all of these features.
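The ray-sphere interaction at the heart of the first book reduces to solving a quadratic. A self-contained sketch of that test (not my raytracer's actual code, and with a deliberately minimal vector type) looks like this:

```cpp
#include <cassert>
#include <cmath>

// Minimal ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t and
// return the nearest intersection along the ray, or -1.0 on a miss.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

double hitSphere(const Vec3& center, double radius,
                 const Vec3& origin, const Vec3& dir) {
    Vec3 oc = origin - center;
    double a = dir.dot(dir);
    double half_b = oc.dot(dir);
    double c = oc.dot(oc) - radius * radius;
    double disc = half_b * half_b - a * c;  // half-b form of the discriminant
    if (disc < 0) return -1.0;              // ray misses the sphere
    return (-half_b - std::sqrt(disc)) / a; // nearest root
}
```

A ray fired from the origin down -z at a unit sphere centered at (0, 0, -3) hits at t = 2, i.e. one unit in front of the sphere's center.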

The second book covers more advanced topics such as motion blur, BVH acceleration structures, procedural textures, image texture mapping, lights, and volumes. Beyond the contents of the book, I implemented multi-threading and non-uniform volumes. The code for the project can be found here.
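Since rays are independent, multi-threading a raytracer is mostly a matter of partitioning the image across threads. The sketch below splits the frame into row bands; `shade` is a hypothetical placeholder for the per-pixel ray color computation, and none of this is my raytracer's actual code.

```cpp
#include <algorithm>
#include <cassert>
#include <thread>
#include <vector>

// Placeholder for the per-pixel ray color computation (hypothetical).
int shade(int x, int y) { return (x + y) % 256; }

// Render by splitting the image into horizontal bands, one thread per band.
// Threads write to disjoint rows, so no synchronization is needed.
std::vector<int> renderParallel(int width, int height, unsigned nThreads) {
    std::vector<int> image(width * height);
    std::vector<std::thread> workers;
    int band = (height + nThreads - 1) / nThreads;  // rows per thread
    for (unsigned t = 0; t < nThreads; ++t) {
        workers.emplace_back([&, t] {
            int y0 = t * band, y1 = std::min(height, y0 + band);
            for (int y = y0; y < y1; ++y)
                for (int x = 0; x < width; ++x)
                    image[y * width + x] = shade(x, y);
        });
    }
    for (auto& w : workers) w.join();
    return image;
}
```

Because the bands are disjoint, the parallel result is identical to a single-threaded render.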

Shaders

After the raytracing projects, I implemented a couple of shaders in Shadertoy. The first was an interactive Mandelbrot set, which served as a refresher for complex math and its uses in graphics. The second is an interactive Mandelbulb, which helped me understand raymarching and SDFs.
Additionally, I have been implementing custom shaders in Unity as part of my work in the IMXD lab; these will be added here once complete.
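The escape-time iteration behind a Mandelbrot shader is compact enough to sketch on the CPU. The shader maps the returned iteration count to a color; the function name here is illustrative, not the Shadertoy code itself.

```cpp
#include <cassert>
#include <complex>

// Escape-time test for the Mandelbrot set: iterate z -> z^2 + c and return
// how many iterations |z| takes to exceed 2. Reaching maxIter means the
// point is treated as inside the set.
int mandelbrotIters(std::complex<double> c, int maxIter) {
    std::complex<double> z = 0;
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (std::abs(z) > 2.0) return i;
    }
    return maxIter;
}
```

Points like c = 0 or c = -1 stay bounded forever, while points far from the origin escape almost immediately.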

Next Steps

Recently, I've come across several graphics courses that have been made publicly available, and I have been going through them while completing the assignments. Currently, I'm following Prof. Keenan Crane's Computer Graphics course (CMU 15-462/662) and Prof. Cem Yuksel's Interactive Computer Graphics (CS 5610/6610). My completed assignments for the Interactive CG course are being updated here.

Publications

  • Aayush Kumar and Dr G Vadivu. "Hand Written Recognition System using Neural Network and Guided Inputs"

  • Aayush Kumar. "Symbolic Aggregate ApproXimate (SAX): An Innovative outlook to time series representation", accepted for publication in IJSER

  • Venkatesh R, P Yadhu, Aayush Kumar and Projjal Gupta. "Developing an integrative framework of Artificial Intelligence and Blockchain for augmenting smart governance"

Resume

Thank you for your interest in my profile. You can find my resume here.