Japanese Tea Ceremony Experience: Final Progress Report

Introduction

For the final project in Immersive Environments, I decided to continue the Japanese tea ceremony virtual reality experience made in collaboration with Sara Caudill. The goal of the environment is to introduce the user to a cultural tradition as a guest and transition them into a participatory role as a performer of the tea ceremony. The user is welcomed in a garden setting and led to a tea room, where voice narrations guide them through the movements of the ceremony and interactions with its objects.

Rationale

Sara and I chose a project stemming from both of our backgrounds in studying Japanese language and culture. We decided on the Japanese tea ceremony, a cultural practice that could provide both education and an intimate look into an experience that seemed miles away. We also pursued personas on opposite ends of the spectrum to see if we could address a wide range of needs (the older American who doesn't travel vs. the younger Japanese native who has closer proximity and access to the real-life experience).

Process

In the first iteration of the project, we built a traditional tea room setting and enabled interaction with the tea ceremony objects. While a lot of progress was made toward immersing the user in an experience with cultural significance, many elements were left out due to time constraints and the limits of our knowledge relative to our goals.

In this next iteration, as the final project, I wanted to accomplish tasks in the following areas:

Environment

-Garden environment to welcome the user to the tea ceremony experience

I created an opening garden scene for the exterior of the tea house using a terrain tools asset package. One obstacle that emerged was that the bulk of the terrain data couldn't be sustained on the Oculus Quest, so I made a fairly easy transition to the Rift available at ACCAD.

-Provide an option for the user to enter the narrated “Guide Mode” or “Free Mode”

The goal of offering a free mode and a guide mode fell by the wayside once I started incorporating only a limited number of narrations, which constrained what a guided mode could offer, and I had failed to plan for a main menu scene. For now, the experience defaults to free mode.

-Rebuild the sliding doors and create transparency in the paper

Rebuilding the screen doors was a matter of creating cubes and setting a shader on the paper material for transparency.
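For reference, here is a minimal sketch of that transparency setup, assuming the built-in Standard shader; the actual doors may have been configured directly in the material inspector instead, and the alpha value is just an illustration.

```
using UnityEngine;

// Minimal sketch: make the shoji "paper" cube slightly transparent at runtime.
// Assumes the object uses Unity's built-in Standard shader.
public class PaperTransparency : MonoBehaviour
{
    [Range(0f, 1f)] public float alpha = 0.85f;   // how see-through the paper is

    void Start()
    {
        Material mat = GetComponent<Renderer>().material;

        // Switch the Standard shader into its transparent (Fade) rendering mode.
        mat.SetFloat("_Mode", 3f);
        mat.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
        mat.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
        mat.SetInt("_ZWrite", 0);
        mat.DisableKeyword("_ALPHATEST_ON");
        mat.EnableKeyword("_ALPHABLEND_ON");
        mat.renderQueue = 3000;

        // Lower the alpha so the tea room is faintly visible through the paper.
        Color c = mat.color;
        c.a = alpha;
        mat.color = c;
    }
}
```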

Lighting

-Create realistic interior and exterior daylight

Before

After

Sound

-Realistic nature sounds for the garden

-Narrations made by Sara: The introduction into the tea room from the wall scroll + directions for making the tea

I was only able to code Sara's narration for the scroll, which plays after the user enters the tea room. She recorded two others that provide directions for handling the objects, but much of my time with the objects was dedicated to water interactability.
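The scroll narration works roughly like the sketch below: a trigger collider on the scroll plays the attached audio once when the player enters it. The "Player" tag and the play-once flag are assumptions for illustration, not the project's exact script.

```
using UnityEngine;

// Minimal sketch of the scroll narration trigger: when the player walks into the
// trigger collider attached to the wall scroll, the recorded narration plays once.
[RequireComponent(typeof(AudioSource))]
public class ScrollNarration : MonoBehaviour
{
    private AudioSource narration;
    private bool hasPlayed;

    void Start()
    {
        narration = GetComponent<AudioSource>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (!hasPlayed && other.CompareTag("Player"))
        {
            narration.Play();   // plays the narration clip assigned to the AudioSource
            hasPlayed = true;
        }
    }
}
```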

-Sounds from the objects when colliding with the table and other objects

Both bowls on the table play ceramic-on-surface sounds when they collide with the table or each other, and the water ladle plays a wooden thud under the same conditions.
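A minimal sketch of how those contact sounds can be wired up is below; the velocity threshold and clip assignment are assumptions rather than the project's exact values, and the object needs a Rigidbody for collision events to fire.

```
using UnityEngine;

// Minimal sketch of the ceramic/wood contact sounds: play a clip whenever the
// bowl or ladle collides with the table or another object hard enough to hear.
[RequireComponent(typeof(AudioSource))]
public class ContactSound : MonoBehaviour
{
    public AudioClip contactClip;          // e.g. ceramic clink or wooden thud
    public float minImpactSpeed = 0.2f;    // ignore very soft contacts

    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
    }

    void OnCollisionEnter(Collision collision)
    {
        // relativeVelocity gives the impact speed between the two colliding bodies.
        if (collision.relativeVelocity.magnitude >= minImpactSpeed)
        {
            source.PlayOneShot(contactClip);
        }
    }
}
```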

Interactions/Animations

-Incorporating water and its interact-ability; Being scooped from the kettle into the teacup

Shadrick generously provided the Obi Fluid Renderer package and tutorials on how to emit fluids, and I started working on emitting water and powder.

Liquid Test
Unsuccessful powder emitter test

-Matcha powder appearing when it’s scooped and creating tea when combined with the water from the kettle

I realized pretty quickly that the emitters greatly slow down the experience. I decided not to include one for the powder, because having more than one emitter degraded the overall user experience.

-Transitions between scenes

I initially wanted to animate the door to the tea room opening and closing behind the user after entry. Instead, I've added entrance and exit signs to the doors, and the scenes transition through the Scene Loader scripts.
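A minimal sketch of a door-based scene transition is below, using Unity's SceneManager; the scene name and "Player" tag are placeholders, and the actual Scene Loader scripts may be organized differently.

```
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch of the door transition: entering the trigger attached to the
// entrance/exit door loads the next scene (which must be added to Build Settings).
public class DoorSceneLoader : MonoBehaviour
{
    public string sceneToLoad = "TeaRoom";   // e.g. "TeaRoom" or "Garden"

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            SceneManager.LoadScene(sceneToLoad);
        }
    }
}
```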

Final Walkthrough

Reflection + Challenges

While the experience now has more immersion and interaction, many elements are still missing, such as the main menu, cues for the narration, the directions for handling the objects, and an appropriate conclusion to the tea experience.

Another longstanding challenge is maintaining the authenticity of the Japanese tea ceremony. Sara and I are voyeurs of Japanese cultural traditions, so a richer VR experience could be built in consultation with a seasoned tea ceremony expert.

In continuing this project, I would like to address the incomplete portions of the experience and add multi-sensory elements for the user, such as feeling a light breeze from the garden and smelling scents of nature and the matcha tea outside of the headset.

Japanese Tea Ceremony VR Experience: 11.11 – 11.15 Progress Report

Recap: For the final project in the Immersive Environments course (ACCAD 7103), I've decided to continue the virtual reality experience created by Sara Caudill and myself.

Goals

I’ve prioritized the following to further build upon the experience with my current skillset:

Environment
-Garden environment to welcome the user to the tea ceremony experience
-Provide an option for the user to enter the narrated “Guide Mode” or “Free Mode”
-Rebuild the sliding doors and create transparency in the paper

Lighting
-Create realistic interior and exterior daylight

Sound

-Realistic nature sounds for the garden
-Narrations made by Sara: The introduction into the tea room from the wall scroll + directions for making the tea
-Sounds from the objects when colliding with the table and other objects

Then there are the aspirational items, which involve new skills that I have yet to learn:

Interactions/Animations
-Incorporating water and its interact-ability; Being scooped from the kettle into the teacup
-Matcha powder appearing when it’s scooped and creating tea when combined with the water from the kettle

I'm assuming these involve some semblance of animation in Maya (such as the water pouring) and a series of hiding/showing the animations based on trigger interactions (showing "tea water" when the matcha and water are within a certain distance of each other). But again, I'm guessing without an idea of where to start.
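As a starting point, here is a rough sketch of what that hide/show idea could look like in Unity; all object references, names, and the distance threshold are hypothetical guesses, not anything built yet.

```
using UnityEngine;

// Rough sketch: keep a hidden "tea water" object in the bowl and reveal it once
// the matcha and the hot water are close enough to count as mixed.
public class TeaMixCheck : MonoBehaviour
{
    public Transform matchaPowder;
    public Transform hotWater;
    public GameObject teaWater;        // hidden result object inside the bowl
    public float mixDistance = 0.1f;   // metres; tune by hand

    void Start()
    {
        teaWater.SetActive(false);
    }

    void Update()
    {
        // When the two ingredients are within range, show the mixed tea.
        if (Vector3.Distance(matchaPowder.position, hotWater.position) <= mixDistance)
        {
            teaWater.SetActive(true);
        }
    }
}
```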

Progress

So I was able to make considerable progress in terms of lighting and the environment. I built a terrain and created a garden with swaying grass and trees.

I’ve also added an audio source with nature/forest sounds:

I’ve also used baked lighting from point lights for the tea room interior, making it look considerably more natural than in the first iteration. I positioned the lights to highlight the main points of interaction, the wall scroll and the table.

Before (First Prototype)
After

The sliding screen doors were previously images mapped onto cubes. I rebuilt them in Unity and made the “paper” slightly transparent to give the user a small preview of the tea room.

Although I've yet to test them, I've also added scripts and audio sources to the objects in the tea room. For the wall scroll, I've added a script where, if the player enters the collider attached to the scroll, it triggers Sara's narration to play. For objects such as the tea bowl, I've added a script where a 'clink' sound plays on collision with the table or other objects. There is also a sliding door, conceptualized similarly to the wall scroll, for the user to enter the tea room from the garden, but I will consider separating the garden and tea room into different scenes. That may also make it easier to add a mode menu for the user at the entrance from the garden to the tea room.

Challenges

One of the main issues has been side-loading my experience onto the Quest to test. When I was finally able to test on the Quest, the terrain and the animations attached to it significantly slowed down the experience and created lag. At the suggestion of the instructor Shadrick and fellow colleagues, I will build the project on the available Oculus Rift moving forward. And for the sake of a faster experience, I may replace the terrain with a more compact skybox (although I am concerned about losing the subtle presence of the moving trees).
I’ve also lost the interaction of the hands/controllers when testing my experience, so I’ll have to investigate this further when I start testing on the Rift.

Remaining Tasks

Environment
-Garden environment to welcome the user to the tea ceremony experience
(Consider Skybox replacement)
-Provide an option for the user to enter the narrated “Guide Mode” or “Free Mode”
-Rebuild the sliding doors and create transparency in the paper

Lighting
-Create realistic interior and exterior daylight

Sound

-Realistic nature sounds for the garden
-Narrations made by Sara: The introduction into the tea room from the wall scroll + directions for making the tea (In Progress)
-Sounds from the objects when colliding with the table and other objects (In Progress)

Interactions/Animations (Tutorials Needed)
-Incorporating water and its interact-ability; Being scooped from the kettle into the teacup
-Matcha powder appearing when it’s scooped and creating tea when combined with the water from the kettle

Other
-Build and develop using Oculus Rift

*BONUS IDEA*
I told my advisor Maria about this project and she posed the question of making the experience multi-sensorial. This could entail:
-the smell of the grass and a gentle breeze blowing on the user as they start the experience in the garden
-the scent of fresh flowers or incense when they enter the tea room
-an emergent aroma of matcha as they’re making the tea
-finally being presented with tea after removing the headset

So there is definitely an opportunity to increase the level of immersion for the tea ceremony. But I'll focus on my core tasks for the remaining time on the project and revisit the idea later.

Intervention Signage for Mental Health Crises

What sits at the intersection of "user experience" and "mental health management"? This is the basic question that I've been repeating throughout the semester in this course. But the first question I should have asked myself was "what do you mean by mental health management?" At the start, I wondered whether I should focus on symptoms and whether it would be helpful to identify targeted ways of diminishing the effects of anxiety or depression. But I realized this was going down a rabbit hole that would make it harder to define a design problem space, when I wanted a result that appeals to the average user.

So at some point, I had to stop and think of the sequence in which mental health problems can occur, and choose a point in that sequence. I started to think about the importance of having timely access to resources and support. A UK study of help-seeking behavior in young adults found that while roughly three quarters of psychiatric disorders emerge before age 25, participants aged 18 to 24 were the least likely to get care for mental health problems. With this in mind, I wanted to focus on the time frame between emerging symptoms and self-harm or attempts on one's life, where intervention can take place.

In October, I went to a talk held by rhetorician and Chair of Disability Studies Dr. Margaret Price. The talk was titled “Sustaining Mental Health on College Campuses”, which focused on how academia was not built to support faculty or students with mental illness and presented ideas for what design thinking can offer to this area. While the entire presentation exhibited many thought-provoking ideas, one aspect that I found interesting was a critical assessment of the sign placed in one of the campus parking garages by OSU’s newly established Mental Health and Suicide Task Force after the recent tragedies on campus.

While the intent is sound, the sign itself leaves a lot of room (possibly too much) for interpretation. The number's source is unidentifiable. Dr. Price called the number when she spotted the sign, but it went to voicemail and asked her to call back later. It turned out that the number led to the OSU Counseling helpline, whose hours of operation were 8 AM to 5 PM, which means availability had also not been identified on the sign. Here are other questions to consider: Is this for students or faculty? Both? What about those unaffiliated with OSU? And what happens next if someone doesn't answer? Additionally, the voicemail message instructs the caller to dial 9-1-1 in the case of a mental health emergency. There is also the concern of how prepared local law enforcement is to address mental health emergencies, but that is another topic to address in the future.

I started to consider examples of this kind of signage in public spaces. At the base of Mount Fuji lies the scenic Aokigahara forest, aka "the Sea of Trees," aka "the suicide forest." Rates have varied since 1993, but the police reported over 200 attempts and 54 completed suicides in 2010.

While the deeper parts of the forest have been blocked off, there are also signs placed at entrances to the forest. To summarize, the sign above reads “Your life is something precious given to you by your parents. Think of them, your siblings and your children once more. Seek counsel if you are feeling alone,” and provides a phone number to Japan’s Suicide Prevention Association.

It seems that in these public spaces, signage only becomes necessary when a death has already happened. The signs also stand as an effort to stave off potential copycats, which is a valid goal, but they run the risk of creating a misrepresentation of a space (example below). This is one of the reasons why the authorities near Aokigahara have chosen to stop reporting the forest's suicide rates in recent years.

Starring Natalie Dormer, The Forest is a horror movie released in 2016, based on and set in Aokigahara Forest.

Next, I wanted to see how this plays out online. Search engines like Google have a similar setup, mostly when your query specifically states intent or ideation of ending your life: the results page highlights the national suicide prevention phone number and online chat service above other related search results.

Just to note, this appeared mainly when searching with direct reference to suicide. There is a slightly weaker call to action for queries citing depression, anxiety, grief, or even self-harm as seen below:

There are also varying results for what hotline number to suggest.

So next, I looked at social media, which provided a couple of unique experiences. Tumblr has a pop-up for when you search for posts related to self-harm, anorexia/bulimia, and suicidal ideation. Instead of a phone number, it provides services that connect the user to someone anonymous who will listen (by phone, chat, etc.) and suggests using positive reinforcement by following blogs that post affirmations and other supportive content.

Facebook provides a more robust tool, giving you resources whether you are at risk yourself or know someone who might be. However, this doesn't necessarily shield you from triggering user-generated content.

[Note: Image has been cropped at the bottom due to a user’s photo exhibiting self-harm.]

Facebook presents a variety of tools for the user in distress: a way to contact a friend on your friends list and hotline numbers, including those for LGBTQ users and veterans/military personnel.

There's also a list of self-supporting activities. The page gives you a moment to breathe before advancing to the list. It suggests activities such as going outside, writing in a journal, or "just relaxing".

I do, however, wonder how this is received by someone whose query may suggest that they could be self-harming or planning suicide. Twitter and Instagram do not surface signage based on an individual user's search query, but they have a protocol listed under the Safety section of their websites. A user can report any posts that suggest self-injury or ideation, and a web administrator can then reach out to the poster. And because Instagram is a Facebook product, the same resources are provided.

Of course, I have only scratched the surface of my search; there are many other examples of intervention, such as additions to structures, social movements such as "RUOK?", and organizational guidelines.

Suicide net added onto Golden Gate Bridge

It is unclear which stakeholders were involved in the development of this signage, whether they included researchers, designers, people with lived experience of mental illness, or those who simply want to help. But this has become the point of exploration that I want to pursue. So ultimately my question could be this: what does the design of intervention and suicide prevention look like for the target group, a non-target audience, and those involved in the creation of these elements?

Next steps:

  • Explore other types of intervention signage, mental health-related or otherwise. I plan to investigate how viewers react to other types of signage and what may be applied to signage trying to prevent suicide.
  • Establish relationships with people who are tasked with creating this kind of signage in order to understand the process of their development. This would ideally include OSU’s Task Force, support groups for mental health, and other initiatives dedicated to mental wellness.

Studio Project: Exploration of AR, VR & Mental Health

This semester, the focus of the Graduate Design Studio course is learning and implementing emerging technologies, none of which I had ever seen or used before. I had been going back and forth on whether I should begin exploring augmented reality (AR) or virtual reality (VR). I ultimately decided to start by learning to make with VR. I had the mindset that what I learned in VR would trickle down into AR, since the making process is similar (using the Unity game-building software, developer toolkits, etc.). But over time, I realized that the difference in how each experience or concept is perceived matters more. To start, I wanted to take advantage of VR's immersive and practically limitless nature when making something. Moreover, I wanted to treat that limitless space as one for creativity and catharsis for the negative, visceral feelings created when struggling with anxiety and depression, both of which often create negative beliefs about the self.

First, I went through tutorials on making simple 3D objects in Maya based on what I wanted to see in an end product. Then I learned how to navigate, use, and manage objects, materials, and scripts in Unity. I had set the goal of making a custom Tilt Brush for the HTC Vive and had my sights set on a single comprehensive tutorial. But then I had some hindering realizations:

  • Although the tutorial was labeled "for beginners," that label is subjective. There was a lot of C# coding that would take me much longer than I had to meet my goals, especially since I had no idea how to troubleshoot.
  • The tutorial had become dated. Very recent changes were made to the asset package for the code attached to the HTC controllers, so anything declared in the code for the Tilt Brush gave me endless errors.

I was too ambitious, so I scaled back to just learning how to pick up and throw objects, which was just a matter of dragging and dropping scripts from the SteamVR folder onto the game scene and objects in Unity, as opposed to needing to code from scratch (the more you know). After figuring out how to make interaction happen, it was a matter of figuring out how to assign meaning to the interaction. Early on, I was questioning what a coping mechanism for compartmentalizing or casting away negative beliefs, primarily visualized in the mind in a guided meditative manner, would look like in a more tangible form. I was asking how the user would feel seeing an actualized representation of a negative belief being stored or thrown away.

By the end of the project, I applied the negative self-talk as materials on basic cubes, made them interactable (pick up/drop/throw), and made the jar lid interactable (pick up/place) to simulate the coping mechanism of containment.
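One possible next step for the containment idea is sketched below: a trigger volume inside the jar that registers when a negative-thought cube has been placed there. The "NegativeThought" tag and the hide-on-contain behavior are assumptions for illustration, not what the prototype currently does.

```
using UnityEngine;

// Minimal sketch of the "containment" idea: when a negative-thought cube enters
// the jar's trigger volume, treat it as contained and fade it out of sight.
public class JarContainment : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("NegativeThought"))
        {
            // The thought has been placed in the jar; hide it from the user.
            other.gameObject.SetActive(false);
        }
    }
}
```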

Reflection

I got a glimpse of how VR could be useful for a user guiding themselves to reduce negative self-beliefs. The immersive aspect of VR, I believe, would make it easier to leave troubling thoughts behind and return to the actual world. I'd be curious to test this concept on users who use this sort of coping mechanism and see if there are significant changes in self-efficacy or behavior. I am also curious about adding a guiding element to this idea, similar to the guided meditation already demonstrated in many VR simulations like this one.

Now, I want to apply this to something more accessible, which I think AR can offer. It's not very convenient (and it could get expensive) to go through this activity with a heavy VR headset if a user wants to practice at their own pace and at the time they need it. I have the following goals in mind for this task:

  • Apply other objects suitable for containing negative self-talk
  • Encourage engagement through the lens of the actual world
  • Consider a guided element to encourage interaction
  • Create an appropriate user interface

The concept would lose the immersion, but it allows exploration of effective ways of assigning roles and meaning to the actual world, and of picturing how more users can experience that meaning through mobile devices.