This semester, the focus of the Graduate Design Studio course is learning and implementing emerging technologies, none of which I had ever seen or used before. I had been going back and forth on whether I should begin exploring augmented reality (AR) or virtual reality (VR). I ultimately decided to start by learning to make with VR. My mindset was that what I learned in VR would trickle down into AR, since the making process is similar (using the Unity game engine, developer toolkits, etc.). Over time, though, I realized that the difference in how each medium shapes the perception of an experience or concept matters more. To start, I wanted to take advantage of VR’s immersive and practically limitless nature when making something. Moreover, I wanted to treat that limitless space as one for creativity and catharsis for the negative, visceral feelings that come with struggling with anxiety and depression, both of which often create negative beliefs about the self.
First, I went through tutorials on making simple 3D objects in Maya, modeling what I wanted to see in an end product. Then I learned how to navigate Unity and manage objects, materials, and scripts in it. I set the goal of making a custom Tilt Brush for the HTC Vive and had my sights set on a single comprehensive tutorial. But then I had some hindering realizations:
- Although the tutorial was labeled “for beginners,” that label is subjective. It involved a lot of C# coding that would take much longer than the time I had to meet my goals, especially since I had no idea how to troubleshoot.
- The tutorial had become dated. Recent changes to the asset package for the code attached to the HTC Vive controllers meant that anything declared in the Tilt Brush code gave me endless errors.
I had been too ambitious, so I scaled back to learning how to pick up and throw objects, which was just a matter of dragging and dropping scripts from the SteamVR folder onto the game scene and objects in Unity, rather than coding from scratch (the more you know). After figuring out how to make interaction happen, the next step was figuring out how to assign meaning to the interaction. Early on, I questioned what a coping mechanism for compartmentalizing or casting away negative beliefs—one usually visualized in the mind in a guided, meditative manner—would look like in a more tangible form. I asked how a user would feel seeing an actualized representation of a negative belief being stored or thrown away.
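For anyone curious, the drag-and-drop setup can also be expressed in code. This is a minimal sketch assuming the SteamVR Unity Plugin’s Interaction System (`Valve.VR.InteractionSystem`) is imported into the project; `MakeThrowable` is a hypothetical helper name of my own, not part of the plugin, and attaching the scripts by hand in the Inspector achieves the same result.

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem; // from the SteamVR Unity Plugin

// Hypothetical helper: attach this to any object (e.g. a cube)
// to make it grabbable and throwable at runtime.
public class MakeThrowable : MonoBehaviour
{
    void Awake()
    {
        // Throwable declares its dependencies (such as Interactable and a
        // Rigidbody) via RequireComponent, so Unity adds them automatically --
        // mirroring what the drag-and-drop setup does in the Editor.
        gameObject.AddComponent<Throwable>();
    }
}
```

In practice I simply dragged the plugin’s scripts onto the objects in the Inspector, which is the approach the SteamVR examples themselves use.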
By the end of the project, I had applied the negative self-talk as materials on basic cubes, made the cubes interactable (pick up/drop/throw), and made the jar lid interactable (pick up/place) to simulate the coping mechanism of containment.
I got a glimpse of how VR could help a user guide themselves in reducing negative self-beliefs. The immersive aspect of VR, I believe, would make it easier to leave troubling thoughts behind and return to the actual world. I’d be curious to test this concept with users who already use this sort of coping mechanism and see whether there are significant changes in self-efficacy or behavior. I am also curious about adding a guiding element to this idea, similar to the guided meditation already demonstrated in many VR simulations like this one.
Now, I want to apply this to something more accessible, which I think AR can offer. It’s not very convenient to go through this activity in a heavy VR headset (and it can get expensive) if a user wants to practice at their own pace and at the moment they need it. I have the following goals in mind for this task:
- Apply other objects suitable for containing negative self-talk
- Encourage engagement through the lens of the actual world
- Consider a guided element to encourage interaction
- Create an appropriate user interface
The concept would lose the immersion, but it allows exploration of effective ways to assign roles and meaning to the actual world, and of how more users could experience that meaning through their mobile devices.